
[SUPPORT] blakeblackshear - Frigate



1 hour ago, maxistviews said:

I recently purchased a USB Coral and tried installing it on Unraid with a new Frigate install. I first plugged it into my PC and installed the Windows drivers, turning on the full power mode. Then, when I inserted it into my Unraid box, it showed up as "Global", which I know to be correct, but shortly after this it no longer appears. No Google or Global USB devices. Has anyone else had this issue? Was there a driver update I was supposed to do on top of this or something?

Did you install the Coral drivers from the Unraid App Store?

2 hours ago, maxistviews said:

29 minutes ago, mikey6283 said:
For the USB Coral there is no need to install any drivers in Unraid. Things to check:

  • Did you plug it into a USB-C port?
  • Are you using the original cable or a different one?
  • Does your server supply enough power?

Anyway, I never enabled that "full power mode"; I just use the device normally, and it has enough power to run Frigate, so I'm not sure whether that setting is a factor.

4 hours ago, yayitazale said:

The server is using 100W and I have at least a 600W power supply, if I recall correctly. I used a USB 3 port and the original cable. I'll check today whether I can reset it back to normal power mode. My understanding is that the setting might be per device, so maybe what I did had no effect in the server environment.

1 minute ago, maxistviews said:


Some people have had similar problems in the past, and it turned out to be a device issue...

On 9/11/2023 at 3:34 PM, dopeytree said:

Does anyone know how to swap to the beta branch, which is V0.13.0-7C629C1?

Now that beta1 is official, I have added the beta1 and beta1-tensorrt tags to the deploy selector. Anyone interested can test beta1 by just installing a second instance with the beta tag. I strongly suggest you use different paths from the stable Frigate install for the config and media folders.

 

Steps to safely test betas:

  • Create a new folder in appdata called frigate-beta.
  • Create a new media folder, again with a different name.
  • Stop the running stable Frigate app; don't delete it.
  • Copy the config file into the new folder and edit it to match the new requirements.
  • Optionally, copy the database file into the new folder.
  • Launch the new Frigate beta as a second instance with a different name, like "frigate-beta", and change the config file path and media path to the new ones. This way you can have both the old and new Frigate (only one running, but both containers):

[screenshots: container template settings]

  • You can make multiple attempts to get the beta working; if you don't succeed on the first try, you can stop the Frigate beta and start the stable one as many times as you need.
  • Don't forget to delete unused orphan images by clicking the advanced view on the Docker containers page:

[screenshot: Docker page advanced view]
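A sketch of the folder preparation steps above, assuming the usual Unraid share layout; the share and file names are illustrative, so adjust them to your setup:

```shell
# Illustrative paths -- adjust share and file names to your setup
mkdir -p /mnt/user/appdata/frigate-beta /mnt/user/media/frigate-beta
# Copy (not move) the stable config so the original stays untouched
cp /mnt/user/appdata/frigate/config.yml /mnt/user/appdata/frigate-beta/
# Optional: carry the database over as well
cp /mnt/user/appdata/frigate/frigate.db /mnt/user/appdata/frigate-beta/
```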

On 9/14/2023 at 10:42 AM, yayitazale said:

I've tried your suggestion @yayitazale, but the new TensorRT model generation isn't very clear to me... I'm running into the following error when running the frigate:0.13.0-beta1-tensorrt Docker container:

 

[screenshot of the error output]

 

I don't see any reference to having to build the new models myself in the new doc: https://deploy-preview-6262--frigate-docs.netlify.app/configuration/object_detectors/#generate-models

 

(struck through by the author:) Do we still need to use the "tensorrt-models" docker with the process outlined in the previous doc for v12 (like here) to generate the models, or is the new Beta 13 taking care of it all within the new container?

EDIT: never mind, found the issue. I had to re-add

--runtime=nvidia

to the Extra Parameters of the Docker container.
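On Unraid that flag goes in the container's Extra Parameters field; as a plain docker run it would look roughly like this (image tag taken from this thread; the container name and the absence of other flags are illustrative):

```shell
docker run -d --name frigate-beta \
  --runtime=nvidia \
  ghcr.io/blakeblackshear/frigate:0.13.0-beta1-tensorrt
```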

12 hours ago, Jupiler said:

 


I haven't tested it yet, but I understand the tensorrt-models docker is no longer going to be needed, as Frigate itself will create the models during startup. I'll update the description, requirements, and template entries (delete the "trt-models" path and add the "YOLO_MODELS" and "USE_FP16" environment variables) when this version is released to the general public in stable form.

 

If you can test the model generation with several YOLO models, I'd appreciate it.
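Once that lands, the template entries would presumably look something like this fragment (variable names from the post above and the linked beta docs; the model value is illustrative):

```shell
# Hypothetical Unraid template entries for the 0.13 tensorrt image:
-e YOLO_MODELS="yolov7-320"   # comma-separated models Frigate builds at startup
-e USE_FP16="true"            # build FP16 engines; set false if unsupported
```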


So I have Frigate working with a Coral AI USB adapter. I have been trying to get the "WhosAtMyFeeder" app working with it, but I am not having any luck. I am seeing 2 errors in the logs, but I am not sure how to fix them:

 

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Process Process-2:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "./speciesid.py", line 239, in run_mqtt_client
    client.connect(config['frigate']['mqtt_server'])
  File "/usr/local/lib/python3.8/site-packages/paho/mqtt/client.py", line 914, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.8/site-packages/paho/mqtt/client.py", line 1044, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.8/site-packages/paho/mqtt/client.py", line 3685, in _create_socket_connection
    return socket.create_connection(addr, timeout=self._connect_timeout, source_address=source)
  File "/usr/local/lib/python3.8/socket.py", line 787, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/local/lib/python3.8/socket.py", line 918, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
Cannot assign requested address
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Cannot assign requested address
Process Process-2:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "./speciesid.py", line 239, in run_mqtt_client
    client.connect(config['frigate']['mqtt_server'])
  File "/usr/local/lib/python3.8/site-packages/paho/mqtt/client.py", line 914, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.8/site-packages/paho/mqtt/client.py", line 1044, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.8/site-packages/paho/mqtt/client.py", line 3685, in _create_socket_connection
    return socket.create_connection(addr, timeout=self._connect_timeout, source_address=source)
  File "/usr/local/lib/python3.8/socket.py", line 787, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/local/lib/python3.8/socket.py", line 918, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
Calling Main
Time: 2023-09-19 10:38:23.709
Python version
3.8.17 (default, Jun 13 2023, 16:09:51)
[GCC 10.2.1 20210110]
Version info.
sys.version_info(major=3, minor=8, micro=17, releaselevel='final', serial=0)
Starting threads for Flask and MQTT
Starting flask app
Starting MQTT client. Connecting to: 192.168.100.69:9001
 * Serving Flask app 'webui'
 * Debug mode: off
Calling Main
Time: 2023-09-19 10:46:37.442
Python version
3.8.17 (default, Jun 13 2023, 16:09:51)
[GCC 10.2.1 20210110]
Version info.
sys.version_info(major=3, minor=8, micro=17, releaselevel='final', serial=0)
Starting threads for Flask and MQTT
Starting flask app
 * Serving Flask app 'webui'
 * Debug mode: off
Starting MQTT client. Connecting to: 192.168.100.69:1883

 

Any thoughts on what the issue is?


 

18 hours ago, irishjd said:


The mqtt_server setting doesn't accept a port at the end, so you should have something like:

 

mqtt_server: 192.168.100.69
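The tracebacks above fit this: paho-mqtt hands the configured server string to name resolution, and a value like "192.168.100.69:9001" is not a resolvable host name. A quick illustration using only the plain Python socket module (this is not WhosAtMyFeeder code):

```python
import socket

def resolvable(host, port):
    """Return True if host/port resolves, False on socket.gaierror."""
    try:
        socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

# A host string with the port baked in fails name resolution -- this is the
# "Name or service not known" error from the log:
print(resolvable("192.168.100.69:1883", 1883))  # False
# A bare address resolves; paho-mqtt takes the port as a separate argument:
print(resolvable("192.168.100.69", 1883))       # True
```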

 


So I removed the MQTT port number. As far as the config goes, it is really simple right now (I am still trying to find documentation on which settings are required and which are optional):

 

frigate:
  frigate_url: http://192.168.100.69:1984
  mqtt_server: 192.168.100.69
  mqtt_auth: false
  mqtt_username:
  mqtt_password:
  main_topic: frigate
  camera:
    - "Remi_Cam"
  object: bird
classification:
  model: model.tflite
  threshold: 0.7
webui:
  port: 7766
  host: 192.168.100.69

10 minutes ago, irishjd said:


Try this; you have several things wrong:

 

frigate:
  frigate_url: http://192.168.100.69:5000
  mqtt_server: 192.168.100.69
  mqtt_auth: false
  mqtt_username:
  mqtt_password:
  main_topic: frigate
  camera:
    - Remi_Cam
  object: bird
classification:
  model: model.tflite
  threshold: 0.7
webui:
  port: 7766
  host: 0.0.0.0

 

I'm not sure whether, when mqtt_auth is false, you have to remove the mqtt_username and mqtt_password lines.


Thanks a lot! That got the WebUI working! I am still seeing these errors in the logs:

"socket.gaierror: [Errno -2] Name or service not known
Cannot assign requested address
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Cannot assign requested address
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Cannot assign requested address
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Cannot assign requested address
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:7766
 * Running on http://172.18.0.25:7766"

 

One other issue I noticed with Frigate is that I cannot use the hardware acceleration lines for ffmpeg, as all they do is generate a lot of errors. I am trying to use "hwaccel_args: preset-vaapi", since I am running Unraid on a Dell PowerEdge R820 with Intel Xeon processors.

1 hour ago, irishjd said:


Did you test deleting the username and password lines? It is still unable to open a connection with the MQTT broker.


Ahh, I completely missed that. Those errors are now gone. I also think I know why I get errors when trying to enable hardware acceleration in Frigate. The R820 uses Intel® Xeon® E5-4650 @ 2.70GHz processors (which are in the E5 family), and according to Intel, only E6 and newer chips support hardware video acceleration. Would it be worth installing a discrete video card for hardware acceleration? If so, which would work best with Unraid?

 

1 minute ago, irishjd said:


It's not exactly like that; E6 is just a higher tier in the same family: https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors

To use QSV or VAAPI, your CPU needs an integrated GPU.

Yes, installing a discrete video card will lower your CPU consumption, and the more cameras you have and the higher their resolution, the bigger the difference will be. If you have several cameras, take a look at the NVIDIA compatibility matrix and try to find a second-hand GPU; you can find good deals on the Quadro series, which has unlimited concurrent sessions (like the P2000), or go for a mid/low-range desktop GPU like a GTX 750. If you already have an old GPU in a box or a drawer, take a look; maybe it's enough for your needs.

