Thirs

Members · 7 posts · Noob (1/14) · 2 Reputation
  1. AFAIK Nvidia GPU support is an open ticket with iSpy-Agent. If it does work, you might have to try different settings in the iSpy-Agent program itself to see if that helps. I've never gotten GPU support working, but I haven't tried it on newer versions of the program. See: https://github.com/doitandbedone/ispyagentdvr-docker/issues/79

     What does work is using an Nvidia GPU with DeepStack for object detection. See: https://hub.docker.com/r/deepquestai/deepstack (tag: deepquestai/deepstack:gpu). This specific version seems to work with both older and recent iSpy-Agent: deepquestai/deepstack:gpu-2021.09.1

     I have an Nvidia 1050Ti GPU passed to the iSpy-Agent, DeepStack, and Plex containers. I recall having to flash the GPU .rom file a long time ago so it would work with Unraid, but I think that was before the Nvidia driver support feature was available, so it might not be needed anymore. Just mentioning this in case it's what's missing here.

     I tested this by shutting down the DeepStack container in Unraid with iSpy-Agent running and entering the "nvidia-smi" console command, which returned no running processes. I then started the DeepStack container back up, leaving the iSpy-Agent container running as well. This time "nvidia-smi" returned two running processes, which is typical when the DeepStack container is running (see the sketch after this post).

     Regarding the Plex container not showing a running process in "nvidia-smi": it only shows while transcoding is occurring. I have verified this works when a transcode stream is running; otherwise you won't see Plex listed as a running process in "nvidia-smi".

     An easy way to see processes and other GPU stats on the Unraid dashboard, if you don't have it already, is the GPU Statistics plugin. It's a bit more real-time than having to re-enter the "nvidia-smi" console command. Anyway, just some extra info and testing, @Panics, to show what I've seen of these containers' behavior with an Nvidia GPU.
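     A minimal sketch of that test from the Unraid console, assuming the DeepStack container is named "DeepStack" (the name is hypothetical; use whatever your Docker tab shows):

        # Stop DeepStack while iSpy-Agent keeps running
        docker stop DeepStack
        nvidia-smi          # expect: no running processes listed

        # Start DeepStack again and re-check
        docker start DeepStack
        sleep 30            # give it a moment to initialize the GPU
        nvidia-smi          # expect: two DeepStack processes listed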
  2. @Panics It seems you may be missing a bit of configuration in your Docker containers. Try adding a variable named 'NVIDIA_DRIVER_CAPABILITIES' with the value set to 'all' (a sketch follows this post). See: https://github.com/binhex/documentation/blob/master/docker/faq/plex.md, Q3/A3: "How do I configure Plex to use my GPU for encoding/decoding (sometimes referred to as hardware transcoding)?" These instructions should be the same regardless of which Docker container is being configured. You might also want to update the Nvidia driver, if that hasn't been done already, using the Nvidia Driver plugin.
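     A hedged example of the extra settings; the image name is just an illustration (the same two variables apply to other containers), and the GPU UUID placeholder must be replaced with your own from "nvidia-smi -L":

        docker run -d --name=plex \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          binhex/arch-plexpass

     On Unraid, the same two entries can be added as variables on the container's template edit page instead of a docker run command.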
  3. I am able to reproduce the memory leak issue with an Unraid VM running Shinobi: a 6-camera system with an Nvidia 1050Ti passed through to the VM. In this case the camera system is used for the TensorFlow object detection supported by Shinobi. Tested with Unraid v6.8.3 and v6.9.2.

     Ubuntu 20.04 LTS VM: 3 cores / 3 threads pinned and isolated, 8GB memory assigned. Machine: Q35-4.2. BIOS: OVMF. Vdisk location: cache only. Unassigned drive passed through to the VM for recording from the cameras.

     It seems I can also reproduce the memory leak issue with a Docker container running a different NVR software, in this case doitandbedone/ispyagentdvr. In 24 hours the memory consumed by this Docker container has doubled. The test has run for a limited time, so I will update this thread as time progresses.

     Update: After a few days the memory consumed may have stabilized. I'll continue to log the memory consumed (see the sketch after this post) and update this thread.

     Date      | Container                  | Memory consumed / limit
     10/18 8am | doitandbedone/ispyagentdvr | 342.1MiB / 8GiB
     10/19 8am | doitandbedone/ispyagentdvr | 705.1MiB / 8GiB
     10/20 8am | doitandbedone/ispyagentdvr | 762.9MiB / 8GiB
     10/21 8am | doitandbedone/ispyagentdvr | 743.3MiB / 8GiB

     Tested with Unraid v6.9.2. Docker: 3 cores / 3 threads assigned, not isolated, limited to 8GB memory. Image location: cache only. Unassigned drive passed through to the container for recording from the cameras.

     peon-diagnostics-20211019-0806.zip
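     A rough sketch of how such a daily log can be collected; the container name and log path are placeholders, and "docker stats --no-stream" prints a single snapshot rather than a live view:

        # Append one memory-usage line per day for the container
        while true; do
          printf '%s ' "$(date '+%m/%d %I%P')" >> /boot/ispy-mem.log
          docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' \
            ispyagentdvr >> /boot/ispy-mem.log
          sleep 86400
        done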
  4. VM: Ubuntu 20.04 LTS server VM with an Nvidia 1050Ti set to pass-through.

     Configuration: IPC NVR with TensorFlow for object detection and email alerts.

     Behavior: I observed that with the 6.9.2 release applied, the TensorFlow alerts would be delayed by half an hour and up to an hour, and the attached email screenshots did not show the captured object as expected. After rolling Unraid back to the 6.8.3 release, the object detection and alerts reported as expected.

     Notes: On the 6.9.2 release, viewing the CPU usage of the various programs assigned to PM2 showed TensorFlow consistently using 200% of the CPU. In contrast, on 6.8.3 PM2 reported TensorFlow at under 100% CPU usage, averaging 95-98% (see the sketch after this post for how PM2 reports this).

     Conclusion: The only change to the Unraid server hosting this VM was upgrading from the 6.8.3 release to the 6.9.2 release. Downgrading to v6.8.3 was the only way to resolve this issue.
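     For reference, this is how the per-process CPU figures can be read inside the VM, assuming PM2 manages the NVR processes (process names depend on the setup):

        pm2 list     # one-shot process table with cpu and mem columns
        pm2 monit    # live per-process CPU/memory dashboard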
  5. After restoring the boot flash drive from backup, the cache array came back. Docker and the VM service started as expected, and a parity check is now underway. It seems this issue has been resolved.
  6. Ah, okay, that makes sense. Thank you for the help, Hoopster; I'll work through the reverting-back-to-6.8.3 portion of that topic.
  7. Hi, I've run into an issue where the path forward isn't clear. I restored Unraid from 6.9.2 to 6.8.3 through the Unraid UI, and after rebooting, the two cache drives are showing as Unassigned. I stopped both the VM and Docker services and rebooted once again, with the same result. The VM and Docker images are on the cache drives, hence the services won't start due to their missing image locations. Attached are the diagnostics taken after a reboot in 6.8.3.

     Note: The cache drives are devices "sdd" and "sde", with IDs of "WDC". A quick device check is sketched after this post.

     diagnostics-20210517-0754.zip
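     A minimal check from the Unraid console to confirm the two WDC devices are still visible by ID; the device names "sdd"/"sde" are taken from the note above and may differ after a reboot:

        ls -l /dev/disk/by-id/ | grep -i WDC
        lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdd /dev/sde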