Community Answers

  1. Upgrading seemed to fix whatever my issue was. Uptime: 1 month 2 days 20 hours 49 minutes. Now that everything is running so well, I am reluctant to update again. Current unRaid version: 6.12.2; upgrade unRaid version: 6.12.3. I guess I will read the release notes on .3 and see how many changes there are to weigh the risks.
  2. My upgrade went as smoothly as one could hope. I am not crazy about how shares seemed to come from nowhere when I added my ZFS pool; it's a bit weird. I will need to stop the array when possible and clean that up. I will likely convert my multiple caches to ZFS and fix those shares, but other than that I am happy. Now I will monitor for stability, and if I don't crash after, say, one month, I will mark this post as the solution to my crashing issue.
  3. I've decided to upgrade to the 6.12.2 stable branch from the 6.11.5 I am on now. Hopefully it goes smoothly, and if I am super lucky (since when?) it may even fix my problem. If you were looking into my issue, I thank you very much, and please feel free to enlighten us (me).
  4. Indeed I was, but I put that script in place and had increased my /run size, so I don't think that is the reason for my crash anymore; it certainly was, or wasn't, helping. That was on or around June 5th, when I marked @Polar's post as the solution. My dumb solution was to increase the size of /run to 256MB, which is a bit of a waste of RAM. To be fair to myself, I did say I was going to clear the log on cron in a post in this thread on March 29th, but I didn't actually go through with it until Polar posted on June 5th. I have since shrunk /run to 64MB, but I do have plenty of RAM. Once I am convinced there is no longer an issue, I will disable the cron job but leave it in place, and comment out the resizing of /run on boot.
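     The workaround described above can be sketched roughly as follows (a minimal sketch: the 64MB size matches the post, but the cron schedule and the container-ID glob are assumptions, not the actual config):

```shell
# Sketch of the /run workaround described above; schedule and glob are
# illustrative assumptions, not taken from the actual server.

# In /boot/config/go (runs at boot on Unraid): resize the /run tmpfs.
# Left commented out once the issue is believed fixed, as described:
# mount -o remount,size=64M /run

# Cron entry to empty the runaway log.json periodically. Truncating in
# place keeps the writing process's open file handle valid:
# 0 * * * * /usr/bin/truncate -s 0 /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/*/log.json
```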
  5. Every couple of weeks or so my system crashes and I have to hard reboot. I'm really hoping someone can read the diagnostics and help figure out why before I blindly upgrade to the latest stable version, which I just noticed is available today. I have been trying to read the syslog, which I have been mirroring to my USB since I noticed this issue. I had an issue with the tmpfs filling up, but that has since been mitigated and things did get a bit better. Still, I woke up this morning to a completely downed server and had to hard reboot using MeshCommander talking to the Intel AMT I have set up on it. Any help would be greatly appreciated; please let me know if you require any further details or information at all.

     Previous boot (so the crash happened right before this system boot):
     system boot 2023-06-20 20:04
     This current boot (I am unsure exactly when the system started having trouble, as I was likely sleeping):
     system boot 2023-07-01 09:20

     dell-pc-diagnostics-20230701-1000.zip
  6. I don't know what is going on, but I am convinced it has to do with the Nvidia drivers. I updated to the 'New Feature Branch' just to change it up, and it is no longer the same errors in the log, just the same line over and over:

     tail -F /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/501f72c7fc3a92557935aab9479c1fb048e40ac95c9833c44efb4ee18e671884/log.json
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:18-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:23-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:28-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:33-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:38-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:43-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:48-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:53-04:00"}

     It seems the Nvidia card is only working in my one Plex docker; Tdarr and Handbrake, for example, don't seem to work. (I was wrong: Plex and Tdarr are working; it was just Handbrake that wasn't, but I don't know if or what that is; maybe it's a totally different issue.) Does anyone know how to fix this? Properly rip out the Nvidia drivers and start from scratch, maybe?
  7. So /etc/nvidia-container-runtime/host-files-for-container.d doesn't exist; the only thing in that folder is /etc/nvidia-container-runtime/config.toml. I also tried running 'runc list' and there was nothing (probably doing something wrong, though):

     /usr/bin/runc list
     ID          PID         STATUS      BUNDLE      CREATED     OWNER

     I might have to just clear the log on cron for a while until a new update comes for the Nvidia driver and (hopefully) fixes the problem. Not sure if anyone has any better thoughts/ideas?
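     Clearing the log on a schedule can be done safely with truncate, which empties the file in place so the writer's open file handle stays valid. A minimal sketch, using a temporary file rather than the real log path:

```shell
# Demonstrate emptying a file in place, as a cron job could do to the
# runaway log.json (a mktemp file stands in for the real path).
logfile=$(mktemp)
printf 'repeated log line\nrepeated log line\n' > "$logfile"
truncate -s 0 "$logfile"   # size drops to 0; the file is not deleted
stat -c %s "$logfile"      # prints 0
rm -f "$logfile"
```

     Deleting the file instead would leave the writing process holding an unlinked file that still consumes tmpfs space until the process restarts, which is why truncation is the usual approach.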
  8. So it would seem that my Plex container is filling up a log.json file with 'stuff' from the Nvidia card I have passed through to it. It looks like the snippet below; while it is just a snippet, it does seem to repeat over and over, and so far it's up to 9.8MB. I checked Plex and don't have any debug or verbose logging enabled. I am running the Nvidia Driver Package on the Production Branch, which is currently v525.116.04. Does anyone recognize this issue? Something about "NVIDIAContainerRuntimeConfig", "MountSpecPath": "/etc/nvidia-container-runtime/host-files-for-container.d", and "Path": "nvidia-ctk". I should probably also note that transcoding and whatnot seem to work fine when I tested using 'watch nvidia-smi' while purposefully forcing a transcode. I meant to hit submit on this last night, and in the meantime it's gone from just under 10MB to 16MB.

     tail -F /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/501f72c7fc3a92557935aab9479c1fb048e40ac95c9833c44efb4ee18e671884/log.json
     {"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-05-28T01:03:23-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-28T01:03:23-04:00"}
     {"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-05-28T01:03:28-04:00"}
     {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-28T01:03:28-04:00"}
  9. Thanks for the response. I am running the following for as long as it takes to hopefully figure this out:

     nohup watch -n600 '(df -h | grep /run; echo; echo) | tee -a /boot/run.filling_up.txt; (du -h --max-depth=1 /run; echo; echo) | tee -a /boot/run.filling_up.txt' &
     tail -F nohup.out
  10. Stopped the array (I don't know if I had to do that or not; I kind of wish I had tried before doing it), then ran 'mount -o remount,size=10G /run' and it seemed to work, but I doubt that will survive a reboot. Does anyone know how to make that change permanent? Can I put it in my /boot/config/go or /boot/config/extra.cfg? (I don't know how to use that one; I tried to look it up in the manual but couldn't find it.) ** Update: I didn't have to stop my array to make the change, because I did it again and made it 256M after thinking for more than a second and realizing I was potentially wasting a ton of RAM. **
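     For what it is worth, a common way to persist such a change on Unraid is the go script, which runs at every boot (a sketch; the 256M value is just the size mentioned above, and should be chosen to fit the available RAM):

```shell
# One-time: append the remount to /boot/config/go so it runs on each boot
# (sketch only; pick a size appropriate for the machine).
cat >> /boot/config/go <<'EOF'
# enlarge the /run tmpfs (workaround for a log file filling it up)
mount -o remount,size=256M /run
EOF
```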
  11. I keep needing to reboot to release the 32M /run. Do you have any suggestions on how to mitigate this issue?

      tmpfs            32M   32M     0  100%  /run
  12. I have many computers (Windows, Mac, Linux) and have tried all their respective browsers: Chrome, Firefox, Edge, Safari. I can't just bring down the server to try safe mode. I'm not sure I have ever used safe mode on Unraid; is that an option when it boots, or is that like maintenance mode? I will have to plan that sort of shutdown.
  13. Roger that, and I am not. The window pops up just fine; it's the result that is no good.
  14. The Log button on my Unraid web page hasn't worked for several versions; that is, http://X.X.X.X/webterminal/syslog. The syslog itself is working both internally (from the command line, of course, but also from the browser at http://X.X.X.X/Syslog) and externally (but really internally) to an Observium docker. Since that is the case, I haven't bothered to open a forum post for help, but I thought it would be nice to fix this issue now. If anyone can help, it would be appreciated. UNRAID-diagnostics-20221212-1418.zip