Posts: 949
Joined
Last visited

About Marshalleq
Birthday: October 17
Converted
Gender: Male
URL: https://www.tech-knowhow.com
Location: New Zealand
Personal Text: TT
-
Thank you! Will try it, thanks.
-
Marshalleq started following Nvidia Driver filling up logs
-
Hi all, does anyone know why this is happening? It actually ends up stopping my VMs and Docker containers in the end. Despite what the log below implies, the driver is actually installed. Is there some new method I am not aware of? Oh, and for me this log is at /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/239d6e30c683fd56432d60fe02198e271c6d56ad3d7b562a74e1c9e51be78d68, which is why it keeps filling up my drive: it's a small 32MB tmpfs partition. Thanks!
```
cat log.json
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using OCI specification file path: /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/239d6e30c683fd56432d60fe02198e271c6d56ad3d7b562a74e1c9e51be78d68/config.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Auto-detected mode as 'legacy'","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Ignoring 32-bit libraries for libcuda.so: [/usr/lib/libcuda.so.535.54.03]","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using prestart hook path: /usr/bin/nvidia-container-runtime-hook","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /dev/dri/card0 as /dev/dri/card0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /dev/dri/renderD128 as /dev/dri/renderD128","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 as /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate glvnd/egl_vendor.d/10_nvidia.json: pattern glvnd/egl_vendor.d/10_nvidia.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /etc/vulkan/icd.d/nvidia_icd.json as /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /etc/vulkan/implicit_layer.d/nvidia_layers.json as /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/15_nvidia_gbm.json: pattern egl/egl_external_platform.d/15_nvidia_gbm.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/10_nvidia_wayland.json: pattern egl/egl_external_platform.d/10_nvidia_wayland.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate X11/xorg.conf.d/10-nvidia.conf: pattern X11/xorg.conf.d/10-nvidia.conf not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounts:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 at /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /etc/vulkan/icd.d/nvidia_icd.json at /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /etc/vulkan/implicit_layer.d/nvidia_layers.json at /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Devices:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /dev/dri/card0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /dev/dri/renderD128","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Hooks:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /usr/bin/nvidia-ctk [nvidia-ctk hook create-symlinks --link ../card0::/dev/dri/by-path/pci-0000:81:00.0-card --link ../renderD128::/dev/dri/by-path/pci-0000:81:00.0-render]","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Applied required modification to OCI specification","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Forwarding command to runtime","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:28+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:28+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:33+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:33+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:39+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:39+12:00"}
```
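EDIT: For anyone else landing here, one thing I'm going to try as a stopgap (an assumption on my part that the NVIDIA Container Toolkit's stock config path applies here on Unraid) is turning the runtime's log level down so it stops writing these info entries on every container start:
```
# Sketch: quieten the NVIDIA container runtime by editing its config
# (assumes the toolkit's stock path; the key lives under [nvidia-container-runtime]).
nano /etc/nvidia-container-runtime/config.toml

# Change (or add) in the [nvidia-container-runtime] section:
#   log-level = "error"

# Restart Docker so new container shims pick up the change.
/etc/rc.d/rc.docker restart
```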
-
Plex Hardware Transcoding with Quadro P2000
Marshalleq replied to chriswheelz's topic in General Support
Hi, there are not many replies, are there? Did you ever get this going? There seems to be something different about Plex, and I'm wondering if the official Plex container maybe doesn't support it properly. Will try that tonight.
-
I'm having the same issue. I have NVIDIA_VISIBLE_DEVICES with my GPU pasted in there, and I have a screen plugged in (it wasn't turned on, though; I assume I don't actually have to have it on, that would be annoying, but it was on at the wall). I also have --runtime=nvidia in Extra Parameters, no spaces. I have the two checkboxes ticked in Plex for hardware acceleration. I've restarted Docker and the whole machine, and I can see the GPU is detected using the GPU stats plugin. There used to be some other parameters needed (driver capabilities and such); I assume these are no longer required with the new driver? I'm running Unraid 6.12.0-rc6 and the official Plex Docker container. Anything I've missed? Thanks.
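For comparison, here's roughly what those template settings translate to as a plain docker run. This is a sketch only: the GPU UUID, container name and appdata path are placeholders (the Nvidia Driver plugin page shows the real UUID), and NVIDIA_DRIVER_CAPABILITIES=all is the old catch-all capabilities variable I mentioned:
```
# Sketch only: a docker run roughly equivalent to the Unraid template above.
# GPU-xxxxxxxx is a placeholder; paste the UUID the Nvidia Driver plugin reports.
docker run -d \
  --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /mnt/user/appdata/plex:/config \
  plexinc/pms-docker
```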
-
Container docker hub and volume mapping to appdata
Marshalleq replied to TestarossaDrive's topic in Docker Engine
Just chiming in here because I also get this issue (with a different container) and have done a few times. I don't understand why it's happening. Sometimes parts of the directory come through, like just the folder name but without the files. Another time one directory will come through completely but not the other. The main difference I note is that in this case I'm using the extra Docker Hub link rather than the Unraid App Store default, for the iDrive backup container here: https://hub.docker.com/r/taverty/idrive/ I'd bet that if I set this up manually in Docker on Ubuntu or something it'd work. I think there's some fancy Unraid stuff meant to make it easier getting in the way or something; I don't think it's me, but I could be wrong! The first time I ran this container, I had an empty scripts directory and no /etc directory. I deleted it and tried again, and this time I have no scripts directory and a full /etc directory. I've triple-checked my paths are right; it makes no sense. And to make matters worse, the data is lost both inside and outside the container, i.e. in this case there is nothing under /opt/idrive/IDriveForLinux/scripts/ or the host's equivalent mountpoint. If I remove the mapping, it all comes back with a container restart. Seems like some kind of weird bug. I've removed the Docker file a few times, including deleting the whole image and the host Docker directory simultaneously; still the same. If anyone knows any tricks around this, that would be appreciated! Thanks, Marshalleq.
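EDIT: One thing that may be worth ruling out is plain bind-mount shadowing: mounting an empty host folder over a container path hides whatever the image shipped at that path, which would match the data reappearing when the mapping is removed. A rough way to check what Docker actually mounted (the container name "idrive" and the appdata path are guesses for my setup):
```
# Sketch: show exactly what Docker mounted for the container.
docker inspect idrive --format '{{ json .Mounts }}'

# Re-test the same mapping by hand, outside the Unraid template.
# An empty host folder here will shadow the image's own scripts directory.
docker run --rm \
  --entrypoint ls \
  -v /mnt/user/appdata/idrive/scripts:/opt/idrive/IDriveForLinux/scripts \
  taverty/idrive \
  -la /opt/idrive/IDriveForLinux/scripts
```
-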
I just upgraded to RC5. It works perfectly, so the bug I had is gone; well done to the team, and thank you to @unraid! Still have the USB key though.
-
I think this is still manual; I don't expect any ZFS GUI to come out in this release. Someone may correct me on that. RC5 came around fast. Does anyone know if I can get rid of the USB key that is used to boot the Unraid array yet? I'm still on RC2 because RC3 had problems. I have a horrible feeling these will still be present in RC5, but we'll see! I'm slightly surprised to hear them say that they expect a stable release in a few weeks.
-
Edit the SMB extras file in /boot. I'm sure there will be a GUI method coming in the future, but as far as I know this is still it.
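For example, something like this (a sketch only; the share name, path and user are placeholders for your own):
```
# Sketch: add an extra SMB share via Unraid's SMB extras file.
cat >> /boot/config/smb-extra.conf <<'EOF'
[myshare]
    path = /mnt/user/myshare
    browseable = yes
    writeable = yes
    valid users = myuser
EOF

# Restart Samba so it rereads the config.
/etc/rc.d/rc.samba restart
```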
-
RC3 VM Configuration not available and docker starts but doesn't work
Marshalleq commented on Marshalleq's report in Prereleases
Yes, ZFS. Thanks, but I still think this is a bug, given that rolling back to RC2 makes the problem go away. What you highlight above are the correct paths. I did reboot a few times to confirm the problem was consistent. The paths were already mounted when navigating via bash; somehow the Unraid system wasn't seeing them, though. Obviously I shouldn't have to do it manually either. So if there's something specific you want me to test, I'm happy to do that. You want me to upgrade and manually mount ZFS again? Given I've already tried that, I am a little reluctant, but I will do it if it's needed to convince you to investigate further.
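For reference, this is roughly the kind of bash check I mean (pool names are from my system):
```
# Sketch: compare what ZFS believes is mounted with what the GUI shows.
zpool status ssd1pool hdd2pool
zfs list -o name,mountpoint,mounted

# Attempt to mount anything ZFS thinks should be mounted but isn't.
zfs mount -a
```
-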
[6.12.0-RC3] Docker directory mode creates massive slowdowns
Marshalleq commented on Jclendineng's report in Prereleases
I hadn't noticed this issue; I will take a look. I've used a Docker directory on ZFS for quite a while too, including up to RC2. So we now have to use an image, with BTRFS on it? That seems a bit wrong.
-
After upgrading to RC3, the following symptoms occurred; going back to RC2, they went away (diagnostics attached).
Symptom 1: The VM service showed as started, but all VMs were missing under the VM menu.
Symptom 2: All Docker containers showed as started and could be navigated to, but they were not able to see the disks.
Symptom 3: The disks were still mounted and navigable via ssh, which I wasn't expecting; at least the ones I checked, which were ssd1pool and hdd2pool.
skywalker-diagnostics-20230416-0801.zip
-
It is still unclear, though, exactly what changed in RC3, as some of this is most certainly not new; some of it has been there since RC1. I think this is normal for Unraid though, right? They sort of call it an RC3 changelog but lump everything else in. If I recall correctly, they used to put RC1/2/3 next to the items they applied to so you could tell. For example, I am uncertain whether anything in the ZFS section is RC3-specific; certainly the first half of it isn't.
-
Aha, it's there now! Perhaps my browser didn't update, or they just added it?! Weird!
-
Thanks, I think you meant this page, right? Because I've already looked there; it has up to RC2, but no RC3 that I can find.
-
Has anyone found a changelog for RC3? There are links and headings that say they're a changelog but they all seem to be older releases...