BomB191

Members · 84 posts

  1. Yes, when I went to create a fresh container I noticed it under 'Show more settings ...', so on my container I had two NVIDIA_VISIBLE_DEVICES variables: one with my GPU and one with 'all' in the field. I deleted the variable I created, kept the one already in the template, and set its value to my GPU. The container now has those settings for the GPU (a hypothetical docker run equivalent is sketched after this list).
  2. Yes, confirmed now working! Thank you very much 💖
  3. I require a dunce hat for tonight. Went to make a new container and noticed these two params hiding under 'Show more settings ...'. Figures it would be something extremely stupid; I didn't even contemplate checking in there. The disappointment in myself is immeasurable. TIL: check 'Show more settings ...'. Sorry for wasting your time, and thank you immensely for the assistance.
  4. After running 'kill $(pidof nvidia-persistenced)' I get the same error:
     docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: device error: false: unknown device: unknown.
     I can also confirm both required variables are in the Docker template:
     Key: NVIDIA_VISIBLE_DEVICES, Value: GPU-9ef5c7e3-966f-cd37-8881-73507c0b7e0a
     Key: NVIDIA_DRIVER_CAPABILITIES, Value: all
     This is in the Unmanic container, though I assume I'm not at the point of the container itself having issues yet. I am on Version 6.10.3; should I hop onto 6.11.0-rc3?
  5. Unfortunately I attempted those fixes before posting. The only Nerd Pack item I had installed was perl (can't even remember what I installed it for, to be fair), but it has all been removed completely and the server rebooted; I also tried reinstalling the driver after that, with the same result. nvidia-persistenced on the command line is accepted, but there is no change.
     NVIDIA_VISIBLE_DEVICES is where I think my issue might be. I'm copying the information across as per the instructions on the first post; confirmed no spaces, and tried re-copy-pasting.
     With the correct value "GPU-9ef5c7e3-966f-cd37-8881-73507c0b7e0a":
     Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: device error: false: unknown device: unknown.
     With an incorrect value ("asfa"; I also tried 'all', as I saw that somewhere while searching), I get the same error.
     NVIDIA_DRIVER_CAPABILITIES, however, spits out a different error when I set it to 'some':
     Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' unsupported capabilities found in 'some' (allowed ''): unknown.
     With the correct 'all' I get the same "unknown device" error as above.
     My final attempt was to put '--runtime=nvidia' in Extra Parameters, fail the 'save/compile', then go back in, edit the template, and repaste 'GPU-9ef5c7e3-966f-cd37-8881-73507c0b7e0a'. It failed with the same NVIDIA_VISIBLE_DEVICES error as above.
  6. Quote: "This usually indicates that the runtime is not working properly and is also logged in your syslog. What packages have you installed from the Nerd Pack? I can only imagine that you have something installed that is interfering with the Nvidia driver. Have you changed anything recently in your system, be it hardware or software (Docker, plugins, ...)?"
     So I appear to be having this issue, though it is a fresh install, so it has never worked before. I just uninstalled Nerd Pack and rebooted, and I'm getting:
     docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: device error: false: unknown device: unknown.
     I'm sure I'm missing something; I've done the usual reboot/reinstall etc. (Initially I was having VFIO problems: the system used to pass the card through and now it doesn't.) The driver shows the below and appears to be A-OK, and GPU stats are being pulled correctly too. I'm like 99% sure I'm missing something dumb. What logs would you need?
     Edit: Also confirmed these are OK too.
  7. I have rolled back, but now I'm getting '/bin/sh: 1: apk: not found'. Nothing else has changed.
  8. I always recommend testing on a set of files before you let it loose.
  9. Interesting read, but from what I can see no one has solid evidence that changing that will improve anything. I guess I will just need to keep an eye on it and hope it holds out until I can afford a new NVMe drive. Thanks for the assistance though.
  10. Yeah, just create new paths. All the container needs to see is /library; you can then sub-folder that out (see the path-mapping sketch after this list).
  11. There's a new alignment? I'll have to dig into the patch notes and find out what you're talking about. Update: well, the link to it in the patch notes goes to a dead page (https://wiki.unraid.net/Unraid_OS_6.9.0#SSD_1_MiB_Partition_Alignment), and Google is proving useless too. Why would I want to change this?
  12. To be fair, since I removed my VHD from the drive I usually float around 300GB (Plex images and all that fluff). Mover never operates, as I have moved downloads to an HDD to, you know, cut back on reads/writes, and I have also moved Plex transcoding to RAM. So its primary function now is just appdata and Plex stuff.
  13. @Zervun @regorian So I seem to have resolved the issue: my pinning was screwed up. I also forced all the database optimisations to run and extended my daily maintenance window.
  14. So I have a 1TB Samsung_SSD_970_EVO that I have abused to death (though it still functions). I cannot afford a new drive for a while, so I'm thinking of cheaping out, if my logic is correct on the matter. If I can shrink the volume to 500GB, that should give the drive 500GB spare for all the dead areas, in theory extending its life because it then has 50% of the drive at its disposal instead of the standard 10%. However, I cannot find anything about shrinking the volume on a single drive, just drive swapping and cache pool changes. Is this something that (a) would work, (b) can be done, and (c) is not completely stupid? Basically I need to extend the life of the cache drive for as long as I can until I can afford a new NVMe drive (I need an array drive first). (A hedged resize sketch follows this list.)
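
For reference, here is what the settings discussed in items 1 and 4–6 look like as a single docker run command. This is a minimal sketch, assuming the Unraid template translates roughly to these flags; the image name (unmanic/unmanic) is an assumption, while the GPU UUID and the two variables come from the posts above.

```sh
# Hypothetical docker run equivalent of the Unraid template settings above.
# The GPU UUID and both variable names are from the posts; the image name
# and the exact flags the Unraid GUI emits are assumptions.
docker run -d --name=unmanic \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-9ef5c7e3-966f-cd37-8881-73507c0b7e0a \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  unmanic/unmanic
```

Note that only one NVIDIA_VISIBLE_DEVICES variable should be set, to either a GPU UUID or 'all'. Having two copies, one with a UUID and one with 'all' (as in item 1), appears to be what produced the "unknown device" error, and 'some' is not a valid NVIDIA_DRIVER_CAPABILITIES value (item 5).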
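On item 10, a quick sketch of that path layout as Docker volume mappings. The host paths and image name here are made up for illustration; only the /library container path comes from the post.

```sh
# Map one host folder to /library; the container never needs to know the host layout.
# Host paths and image name are illustrative only.
docker run -d --name=media-app \
  -v /mnt/user/media:/library \
  some/image
# Inside the container, sub-folders such as /library/movies and /library/tv
# then correspond to /mnt/user/media/movies and /mnt/user/media/tv on the host.
```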
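On item 14, one way to leave half the SSD unallocated for wear levelling, sketched under the assumptions that the cache is a single-device btrfs pool mounted at /mnt/cache and that the device is /dev/nvme0n1; neither is confirmed by the posts, and this is not a tested recipe, so back up appdata first.

```sh
# Shrink the btrfs filesystem to 500GiB (btrfs supports online shrink).
btrfs filesystem resize 500g /mnt/cache

# Shrink the partition to match, leaving the remainder unallocated so the
# controller can use it as spare area. Device and partition number are assumed.
parted /dev/nvme0n1 resizepart 1 500GiB

# Optionally tell the drive that the now-unpartitioned LBAs are unused.
blkdiscard --offset $((500 * 2**30)) /dev/nvme0n1
```

Whether this meaningfully extends the drive's life depends on the controller treating unallocated, trimmed space as spare area, which is the usual behaviour but not guaranteed.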