u13rr1

Members
  • Posts: 10
  • Gender: Undisclosed


u13rr1's Achievements: Noob (1/14)

Reputation: 0

  1. Still no luck with this. I'm out of ideas, limited as they were to start with.
  2. I did roll back to 6.6.7 but I'll try it again later.
  3. No, I'm not using it. I have previously been hardware transcoding with Intel Quick Sync on my CPU. Is there a legacy issue with that?

     May 19 15:59:33 Tower kernel: nvidia 0000:01:00.0: swiotlb buffer is full (sz: 4194304 bytes)
     May 19 15:59:33 Tower kernel: nvidia 0000:01:00.0: swiotlb buffer is full

     What the hell is swiotlb?
  4. The fields were all typed, not copied and pasted. I've tried multiple times. The version is 6.7.0. I'm using a GTX 1050, which I can see from nvidia-smi, so it's not a hardware issue.
  5. I've been kicking around with this all day and hoping someone can help. I've successfully added the --runtime=nvidia parameter and the NVIDIA_DRIVER_CAPABILITIES variable, and Plex updates and starts. However, adding the NVIDIA_VISIBLE_DEVICES variable with the UUID value (or 0, or all) throws up the following:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -e 'VERSION'='latest' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-0e64b88f-fa47-56a5-d740-5f989b63fc65' -v '/mnt/user/appdata/plex':'/config':'rw' --runtime=nvidia 'linuxserver/plex'
     458cbf493f1f14c2f87df492eb789360284d4ed61b3f24bcf8e23fd61fbb405e
     /usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused "process_linux.go:407: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/local/bin/nvidia-container-cli --load-kmods --debug=/var/log/nvidia-container-runtime-hook.log --ldcache=/etc/ld.so.cache configure --ldconfig=@/sbin/ldconfig --device=GPU-0e64b88f-fa47-56a5-d740-5f989b63fc65 --compute --compat32 --graphics --utility --video --display --pid=11393 /var/lib/docker/btrfs/subvolumes/bbb1ee999ab52a47f5db1e25ffe306072910df3fafff7d19c1c72d092e7e8f8d]\\nnvidia-container-cli: initialization error: cuda error: initialization error\\n\""": unknown.
     The command failed.

     The log:
     May 19 15:57:08 Tower login[10845]: ROOT LOGIN on '/dev/pts/0'
     May 19 15:57:15 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d4000-0x000d7fff window]
     May 19 15:57:15 Tower kernel: caller _nv000934rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
     May 19 15:59:33 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d4000-0x000d7fff window]
     May 19 15:59:33 Tower kernel: caller _nv000934rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
     May 19 15:59:33 Tower kernel: nvidia 0000:01:00.0: swiotlb buffer is full (sz: 4194304 bytes)
     May 19 15:59:33 Tower kernel: nvidia 0000:01:00.0: swiotlb buffer is full
  6. So I can't use a username with a '.' in it?
  7. Running 6.0 14b. For some reason my SMB permissions are set to 'Public', but I still can't see the shares from the laptop when I attempt to map a share to a drive. I have filled in my domain correctly in 'Workgroup settings', so I'm not sure what the issue is. I have tried to add (via the GUI) a user with a username the same as my domain username, 'xxxx.xxxxxxxx' (all lower case), and various passwords (no capitals, funny characters, etc.). I click 'Save' and the fields go blank, but no user is created.
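On the UUID in post 5: a minimal sketch of how the GPU-… token passed to NVIDIA_VISIBLE_DEVICES can be extracted from `nvidia-smi -L` output, assuming the usual one-line-per-GPU format. The sample line here is hard-coded for illustration rather than read from a live GPU:

```shell
# Sample `nvidia-smi -L` line, hard-coded for illustration only; on a real
# system you would pipe `nvidia-smi -L` in directly.
sample='GPU 0: GeForce GTX 1050 (UUID: GPU-0e64b88f-fa47-56a5-d740-5f989b63fc65)'

# Extract the GPU-... token between "UUID: " and the closing parenthesis.
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID: \(GPU-[0-9a-f-]*\)).*/\1/p')
echo "$uuid"
```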
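On the question in post 3: swiotlb is the kernel's software I/O TLB, a bounce-buffer pool used for DMA by devices that can't address all of system memory; the "swiotlb buffer is full" messages suggest that pool was exhausted. The pool size can be overridden with the `swiotlb=` kernel boot parameter. A small sketch for spotting such an override, operating on a hard-coded sample command line so it runs anywhere (on a live system you would read /proc/cmdline):

```shell
# Sample kernel command line, hard-coded for illustration; the swiotlb=
# value is the number of I/O TLB slabs, and 65536 here is just an example.
cmdline='BOOT_IMAGE=/bzimage swiotlb=65536 initrd=/bzroot'

# Pull out any swiotlb= size override.
printf '%s\n' "$cmdline" | grep -o 'swiotlb=[0-9]*'
```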
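Posts 6 and 7 both describe the GUI silently refusing a username containing '.'. A hypothetical sketch of that kind of validation; the character rule below (lowercase letter first, then letters, digits, dash, underscore) is an assumption for illustration, not Unraid's documented behaviour:

```shell
# Hypothetical username check; the allowed-character pattern is an
# assumption, not Unraid's actual rule.
check_user() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9_-]*$' && echo accepted || echo rejected
}

check_user 'xxxx'            # accepted
check_user 'xxxx.xxxxxxxx'   # rejected: contains '.'
```

If something like this is in play, it would explain why 'Save' blanks the fields without creating the user.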