
[SUPPORT] blakeblackshear - Frigate


Recommended Posts

5 minutes ago, irishjd said:

So I have one more issue. Frigate is working exactly as expected. It is capturing footage of the birds and recognizing them as a "bird". However, WhosAtMyFeeder is not logging anything. What is the best way to troubleshoot this issue?

Did you follow the steps to make the required changes to the Frigate config for all of the cameras you're going to use alongside WhosAtMyFeeder?

 

https://github.com/mmcc-xx/WhosAtMyFeeder/
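One quick way to troubleshoot is to watch Frigate's MQTT event stream directly, since WhosAtMyFeeder consumes those messages. A minimal sketch, assuming the mosquitto clients are installed and the broker is reachable as host "mqtt" per the posted config (adjust host, port, and credentials to match your mqtt: block):

```shell
# Subscribe to Frigate's event topic; bird detections should appear here
# as JSON "new"/"update"/"end" events while the camera sees birds.
mosquitto_sub -h mqtt -p 1883 -t 'frigate/events' -v
```

If nothing shows up here while birds are being detected, WhosAtMyFeeder has nothing to log, and the problem is on the Frigate/MQTT side rather than in WhosAtMyFeeder itself.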

Link to comment

I did, I used the sample as a template. Here is what my frigate config.yml looks like:

 

mqtt:
  host: mqtt
  port: 1883
  topic_prefix: frigate
  # user: mqtt_username_here
  # password: mqtt_password_here
  stats_interval: 60

detectors:
  coral:
    type: edgetpu
    device: usb

ffmpeg:
  global_args: -hide_banner -loglevel warning
  # hwaccel_args: -hwaccel_output_format qsv -c:v h264_qsv
  input_args: preset-rtsp-generic
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -threads 2 -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: preset-record-generic

detect:
  width: 1920
  height: 1080

objects:
  track:
    - bird

snapshots:
  enabled: true

cameras:
  birdcam:
    record:
      enabled: True
      events:
        pre_capture: 5
        post_capture: 5
        objects:
          - bird
    ffmpeg:
      # hwaccel_args: -hwaccel_output_format qsv -c:v h264_qsv
      inputs:
        - path: rtsps://192.168.100.1:7441/uN84YwyHVF8TCCd8?enableSrtp
          roles:
            - detect
            - record
    mqtt:
      enabled: True
      bounding_box: False # this will get rid of the box around the bird. We already know it is a bird. Sheesh.
      timestamp: False # this will get rid of the time stamp in the image.
      quality: 95 # default quality is 70, which will get you lots of compression artifacts

 

I had to comment out the hardware acceleration settings as I still don't have a discrete GPU (on order).
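A config typo is a common reason events never reach WhosAtMyFeeder, so it can help to confirm the YAML at least parses before digging further. A sketch assuming Python with PyYAML is available on the host, and assuming the usual Unraid appdata path (adjust to wherever your config lives):

```shell
# Parse the Frigate config; any YAML syntax or indentation error is
# reported with a line number instead of failing silently in the container.
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1])); print('config parses OK')" \
  /mnt/user/appdata/frigate/config.yml
```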

 

Link to comment

Would love some advice on getting mini pcie coral up and running.

 

The underlying hardware might be causing issues: Unraid is running virtualized on ESXi 7 on a Dell R730, and the Coral is passed through to Unraid.

It shows up in System Devices and the driver is installed, but it *doesn't* show on the Coral driver status screen, and subsequently doesn't show in the Frigate container. The driver version installed is 2023.07.31.

 

[1ac1:089a]03:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU

 

any suggestions would be much appreciated.

2023-09-22_17h24_08.png

2023-09-22_17h25_47.png

2023-09-22_17h27_20.png

Link to comment
16 hours ago, tiresome-stag5095 said:

Would love some advice on getting mini pcie coral up and running.

 

The underlying hardware might be causing issues: Unraid is running virtualized on ESXi 7 on a Dell R730, and the Coral is passed through to Unraid.

It shows up in System Devices and the driver is installed, but it *doesn't* show on the Coral driver status screen, and subsequently doesn't show in the Frigate container. The driver version installed is 2023.07.31.

 

[1ac1:089a]03:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU

 

any suggestions would be much appreciated.

2023-09-22_17h24_08.png

2023-09-22_17h25_47.png

2023-09-22_17h27_20.png

@ich777 do you have any advice for virtualized unraids?

Link to comment
12 minutes ago, yayitazale said:

@ich777 do you have any advice for virtualized unraids?

Sadly enough, no; there are too many variables involved in virtualized environments.

 

16 hours ago, jeremiahchurch said:

[1ac1:089a]03:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU

Please post your Diagnostics.

I'm not sure, but most of the time it has to do with the host system.

I've seen such issues a few times with Nvidia GPUs too.
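For the PCIe/mini-PCIe Coral specifically, two quick host-side checks can narrow down whether the passthrough or the driver is at fault. This is a sketch: the 1ac1:089a ID matches the lspci line quoted above, and /dev/apex_0 is the device node the gasket/apex driver creates once it successfully binds:

```shell
# 1) Is the device visible, and which kernel driver (if any) claimed it?
lspci -nnk -d 1ac1:089a

# 2) Did the apex driver create its device node?
ls -l /dev/apex_0 2>/dev/null || echo "no /dev/apex_0 - driver not bound"
```

If the device is visible in lspci but no driver is listed and /dev/apex_0 is missing, the problem is on the host/VM side rather than in the Frigate container.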

Link to comment
On 9/14/2023 at 4:42 PM, yayitazale said:

Now that beta1 is official, I have added the beta1 and beta1-tensor tags to the deploy selector. Anyone interested can test beta1 by installing a second instance with the beta tag. I strongly suggest you use different paths than the stable Frigate for the config and media folders.

 

Steps to securely test betas:

  • Create a new folder in appdata called frigate-beta.
  • Create a new media folder, again with a different name.
  • Just stop the running stable Frigate app; don't delete it.
  • Copy and paste the config file into the new folder and edit it to meet the new requirements.
  • Optionally, copy and paste the database file into the new folder.
  • Launch the new Frigate beta as a second instance with a different name, like "frigate-beta", and change the config file path and media path to the new ones. This way you can have both the old and new Frigate (only one running, but both containers installed).
  • You can experiment to get the beta working; if you don't succeed in one try, you can just stop the Frigate beta and start the stable one as many times as you need.
  • Don't forget to delete the unused orphan images via the advanced view on the Docker container page.

 

Pushed a change to the template to replace beta 1 with beta 2; anyone who wants to try it has to reinstall, following the same steps as for beta 1. It will be available in the CA store shortly.

Edited by yayitazale
Link to comment

For 0.12, what are folks doing to get the Intel GPU stats?

Currently I'm running as privileged but would rather not.

 

Configuring Intel GPU Stats in Docker

Additional configuration is needed for the Docker container to be able to access the intel_gpu_top command for GPU stats. Three possible changes can be made:

  • Run the container as privileged.
  • Add the CAP_PERFMON capability.

  • Set perf_event_paranoid low enough to allow access to the performance event system.

 

Run as privileged

 

This method works, but it gives more permissions to the container than are actually needed.

 

Docker Compose - Privileged

services:
  frigate:
    ...
    image: ghcr.io/blakeblackshear/frigate:stable
    privileged: true

Docker Run CLI - Privileged

docker run -d \
  --name frigate \
  ... \
  --privileged \
  ghcr.io/blakeblackshear/frigate:stable

 

 

CAP_PERFMON

 

Only recent versions of Docker support the CAP_PERFMON capability. You can test whether yours supports it by running:

docker run --cap-add=CAP_PERFMON hello-world

 

Docker Compose - CAP_PERFMON

services:
  frigate:
    ...
    image: ghcr.io/blakeblackshear/frigate:stable
    cap_add:
      - CAP_PERFMON

Docker Run CLI - CAP_PERFMON

docker run -d \
  --name frigate \
  ... \
  --cap-add=CAP_PERFMON \
  ghcr.io/blakeblackshear/frigate:stable

 

 

perf_event_paranoid

 

Note: This setting must be changed for the entire system.

For more information on the various values across different distributions, see https://askubuntu.com/questions/1400874/what-does-perf-paranoia-level-four-do.

 

Depending on your OS and kernel configuration, you may need to change the /proc/sys/kernel/perf_event_paranoid kernel tunable. You can test the change by running:

sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'

which will persist until a reboot. Make it permanent by running:

sudo sh -c 'echo kernel.perf_event_paranoid=1 >> /etc/sysctl.d/local.conf'
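Before changing anything, it's worth seeing where you currently stand (lower values are more permissive):

```shell
# Print the current setting; intel_gpu_top inside the container needs this
# to be low enough (or CAP_PERFMON / privileged) to read GPU perf counters.
cat /proc/sys/kernel/perf_event_paranoid
```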

 

Edited by dopeytree
Link to comment

I'm trying to get my Coral USB working but I'm running into issues. I have /dev/bus/usb in the docker setting and use the below for the config settings:

  coral:
    type: edgetpu
    device: usb

 

But I get this error and the container keeps restarting. If I comment the coral out and just use my nvidia gpu it works fine. Any ideas what I need to fix this?

 

[2023-09-27 20:59:54] frigate.detectors.plugins.edgetpu_tfl INFO    : TPU found
[2023-09-27 20:59:54] frigate.detectors.plugins.edgetpu_tfl ERROR   : No EdgeTPU was detected. If you do not have a Coral device yet, you must configure CPU detectors.

The Docker container is running in privileged mode.

Link to comment
9 minutes ago, BurningSky said:

I'm trying to get my Coral USB working but I'm running into issues. I have /dev/bus/usb in the docker setting and use the below for the config settings:

  coral:
    type: edgetpu
    device: usb

 

But I get this error and the container keeps restarting. If I comment the coral out and just use my nvidia gpu it works fine. Any ideas what I need to fix this?

 

[2023-09-27 20:59:54] frigate.detectors.plugins.edgetpu_tfl INFO    : TPU found
[2023-09-27 20:59:54] frigate.detectors.plugins.edgetpu_tfl ERROR   : No EdgeTPU was detected. If you do not have a Coral device yet, you must configure CPU detectors.

The Docker container is running in privileged mode.

Can you test it on another computer with any of the examples from https://coral.ai/examples/#code-examples?

 

Can you see it listed on the devices on unraid?
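When checking the device list, note that the USB Coral changes identity: it enumerates as 1a6e:089a "Global Unichip Corp." until the Edge TPU runtime first initializes it, then re-enumerates as 18d2:9302 "Google Inc.". So "TPU found" followed by "No EdgeTPU was detected" often means the container saw the device before the re-enumeration and lost it afterwards. A quick check from the Unraid console:

```shell
# Match either of the Coral USB IDs: uninitialized (1a6e:089a, Global
# Unichip Corp.) or initialized (18d2:9302, Google Inc.).
lsusb | grep -Ei '1a6e:089a|18d2:9302'
```

Because the ID flips, passing through the whole /dev/bus/usb tree (rather than a single bus/device path) keeps the mapping valid across the re-enumeration.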

Edited by yayitazale
Link to comment

I am having a similar issue when it comes to installing the TensorRT models. However, mine shows that permission is denied when the Docker app tries to access the model. Do you know how I can fix this? I have tried changing the permissions on the file to read/write for all, but it doesn't seem to change anything. I realize this may be a permissions issue, but I really want to make sure I am setting everything up correctly as well.

 

Here is my docker config.

[screenshot of the Docker template]

 

I have placed this file under the trt-models directory.

https://raw.githubusercontent.com/blakeblackshear/frigate/master/docker/tensorrt_models.sh

 

Below is what I get in the Logs after running the docker app.

 

/opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Permission denied
/opt/nvidia/nvidia_entrypoint.sh: line 49: exec: /tensorrt_models.sh: cannot execute: Permission denied


=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 22.07 (build 40077977)
NVIDIA TensorRT Version 8.4.1
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

Link to comment
On 9/27/2023 at 9:31 PM, yayitazale said:

Can you test it with another computer with any of the examples of https://coral.ai/examples/#code-examples?

 

Can you see it listed on the devices on unraid?

Looks like the module works, so I assume it's the USB passthrough? Is there another method of passthrough I should try?

 

python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg
/Users/burningsky/Downloads/edgetpu_runtime/coral/pycoral/examples/classify_image.py:79: DeprecationWarning: ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use LANCZOS or Resampling.LANCZOS instead.
  image = Image.open(args.input).convert('RGB').resize(size, Image.ANTIALIAS)
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
13.2ms
2.9ms
2.9ms
2.8ms
2.9ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.75781

 

Link to comment
1 hour ago, BurningSky said:

Looks like the module works, so I assume it's the USB passthrough? Is there another method of passthrough I should try?

 

python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg
/Users/burningsky/Downloads/edgetpu_runtime/coral/pycoral/examples/classify_image.py:79: DeprecationWarning: ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use LANCZOS or Resampling.LANCZOS instead.
  image = Image.open(args.input).convert('RGB').resize(size, Image.ANTIALIAS)
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
13.2ms
2.9ms
2.9ms
2.8ms
2.9ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.75781

 

Try with privileged mode I think.

Edited by yayitazale
Link to comment
13 hours ago, yayitazale said:

Are you using the original cable and a 3.0 USB port?

Just had a look at lsusb on the host and in the container and noticed it's started misbehaving...

 

Unraid:

root@Ragon:~# lsusb
Bus 006 Device 002: ID 1a6e:089a Global Unichip Corp.
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 002: ID 0781:5567 SanDisk Corp. Cruzer Blade
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 1cf1:0030 Dresden Elektronik ZigBee gateway [ConBee II]
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

 

Frigate:

# lsusb
Bus 006 Device 002: ID 1a6e:089a  
Bus 006 Device 001: ID 1d6b:0003 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 005 Device 002: ID 0781:5567 SanDisk Cruzer Blade
Bus 005 Device 001: ID 1d6b:0002 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 004 Device 001: ID 1d6b:0003 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 003 Device 002: ID 051d:0002 American Power Conversion Back-UPS RS 900G FW:879.L4 .I USB FW:L4  
Bus 003 Device 001: ID 1d6b:0002 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 002 Device 001: ID 1d6b:0003 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 001 Device 002: ID 1cf1:0030 dresden elektronik ingenieurtechnik GmbH ConBee II
Bus 001 Device 001: ID 1d6b:0002 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller

 

Link to comment
On 10/1/2023 at 10:54 AM, BurningSky said:

Just had a look at lsusb on the host and in the container and noticed it's started misbehaving...

 

Unraid:

root@Ragon:~# lsusb
Bus 006 Device 002: ID 1a6e:089a Global Unichip Corp.
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 002: ID 0781:5567 SanDisk Corp. Cruzer Blade
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 1cf1:0030 Dresden Elektronik ZigBee gateway [ConBee II]
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

 

Frigate:

# lsusb
Bus 006 Device 002: ID 1a6e:089a  
Bus 006 Device 001: ID 1d6b:0003 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 005 Device 002: ID 0781:5567 SanDisk Cruzer Blade
Bus 005 Device 001: ID 1d6b:0002 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 004 Device 001: ID 1d6b:0003 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 003 Device 002: ID 051d:0002 American Power Conversion Back-UPS RS 900G FW:879.L4 .I USB FW:L4  
Bus 003 Device 001: ID 1d6b:0002 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 002 Device 001: ID 1d6b:0003 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller
Bus 001 Device 002: ID 1cf1:0030 dresden elektronik ingenieurtechnik GmbH ConBee II
Bus 001 Device 001: ID 1d6b:0002 Linux 6.1.49-Unraid xhci-hcd xHCI Host Controller

 

Did you install the Coral driver on the host? If so, uninstall it.

Link to comment
On 9/29/2023 at 5:30 PM, UninspiredENVY said:

I am having a similar issue when it comes to installing the TensorRT models. However, mine shows that permission is denied when the Docker app tries to access the model. Do you know how I can fix this? I have tried changing the permissions on the file to read/write for all, but it doesn't seem to change anything. I realize this may be a permissions issue, but I really want to make sure I am setting everything up correctly as well.

 

Here is my docker config.

[screenshot of the Docker template]

 

I have placed this file under the trt-models directory.

https://raw.githubusercontent.com/blakeblackshear/frigate/master/docker/tensorrt_models.sh

 

Below is what I get in the Logs after running the docker app.

 

/opt/nvidia/nvidia_entrypoint.sh: line 49: /tensorrt_models.sh: Permission denied
/opt/nvidia/nvidia_entrypoint.sh: line 49: exec: /tensorrt_models.sh: cannot execute: Permission denied


=====================
== NVIDIA TensorRT ==
=====================

NVIDIA Release 22.07 (build 40077977)
NVIDIA TensorRT Version 8.4.1
Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Container image Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh.  To build the open source parsers,
plugins, and samples for current top-of-tree on master or a different branch,
run /opt/tensorrt/install_opensource.sh -b <branch>
See https://github.com/NVIDIA/TensorRT for more information.

Did you follow the steps in the requirements?

 

- Create a new folder to save the models (for example /appdata/trt-models).

- Download the script and save it in the previously created path (https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/tensorrt_models.sh).

- Open the Unraid console and launch the following command (pointing to your script path):

chmod +x /mnt/user/appdata/trt-models/tensorrt_models.sh

- Then launch the container that will run the script, and stop the container at the end.
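The "Permission denied" in the log above is exactly what a missing execute bit produces, so after the chmod it's worth confirming it took (path as in the steps above; adjust to yours):

```shell
# The first permissions column should now start with -rwx for the owner;
# if it still reads -rw-, the chmod ran against the wrong path.
ls -l /mnt/user/appdata/trt-models/tensorrt_models.sh
```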

Link to comment

Is there any documentation available for setting up an NVIDIA GPU for use with Frigate on unRAID? I installed an NVIDIA Quadro in my unRAID server and then installed the unRAID NVIDIA driver package. Per the instructions, I then disabled Docker and re-enabled it. According to the Frigate documentation, "Additional configuration is needed for the Docker container to be able to access the NVIDIA GPU", but their instructions appear to be for a Linux host running Docker; I don't see anything specific for enabling it in unRAID. Anyway, if I turn on hardware acceleration via hwaccel_args: preset-nvidia-h264, I get a bunch of errors, so I am pretty certain the Docker container cannot talk to the NVIDIA GPU.

Link to comment
5 minutes ago, irishjd said:

Is there any documentation available for setting up an NVIDIA GPU for use with Frigate on unRAID? I installed an NVIDIA Quadro in my unRAID server and then installed the unRAID NVIDIA driver package. Per the instructions, I then disabled Docker and re-enabled it. According to the Frigate documentation, "Additional configuration is needed for the Docker container to be able to access the NVIDIA GPU", but their instructions appear to be for a Linux host running Docker; I don't see anything specific for enabling it in unRAID. Anyway, if I turn on hardware acceleration via hwaccel_args: preset-nvidia-h264, I get a bunch of errors, so I am pretty certain the Docker container cannot talk to the NVIDIA GPU.

You should select the nvidia branch when installing Frigate from CA Apps, then read the instructions in the template and fill in the required entries. You should also edit the config file following the instructions in the Frigate docs, and then check that everything works.

 

If you have specific doubts I would try to help you.

 

PS: if you only plan to use the Nvidia card for hardware acceleration, you don't need to install the nvidia branch, but the rest of the steps are the same.
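Independent of Frigate, it's worth confirming the Docker-to-driver plumbing first. A sketch (the CUDA image tag is just an example; any CUDA base image works, and on Unraid the template normally supplies --runtime=nvidia and NVIDIA_VISIBLE_DEVICES for you):

```shell
# If this prints the familiar nvidia-smi table, the container runtime can
# reach the GPU, and any remaining errors are in the Frigate config/template.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```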

Edited by yayitazale
Link to comment
