[SUPPORT] blakeblackshear - Frigate



On 1/28/2024 at 8:24 PM, mintjberry said:

Hi all,

 

Having trouble getting Frigate running on docker in unRAID.

When I log in to the web UI, all I get is the Frigate logo in the top left and a spinning blue icon in the middle. It never loads.

I assume the appdata config/frigate directory should not be empty? The 'frigate' folder is created but there is nothing inside it. I get permission errors when trying to create folders/files while the docker image is running.

I can't see the logs as the log window auto closes after like half a second. I have had no issues with any of my other containers.

I've deleted the docker image and manually created the frigate folder, and created the subfolder and frigate.yaml file (with some default code), but the same issue occurs.
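For the permission errors on the appdata folder, a common first check on unRAID is the ownership of the share path. This is a hedged sketch, assuming the default appdata location; the path is an example, adjust it to your setup:

```shell
# Sketch: reset ownership on the Frigate appdata folder (assumed path).
# unRAID shares normally expect nobody:users (uid 99 / gid 100), and wrong
# ownership here is a common cause of "permission denied" in appdata.
APPDATA=${APPDATA:-/mnt/user/appdata/frigate}

if [ -d "$APPDATA" ]; then
  chown -R nobody:users "$APPDATA"   # same effect as unRAID's New Permissions tool
  chmod -R ug+rw "$APPDATA"
  echo "permissions reset on $APPDATA"
else
  echo "directory not found: $APPDATA"
fi
```

Running unRAID's built-in Tools > New Permissions on the appdata share does the same thing through the GUI.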

 

Any ideas or am I missing something obvious?

I'm also having the same issue.

1 hour ago, dopeytree said:

OK, it looks like version 13 has been pushed out on the stable channel in place of v12..

 

Starting Frigate (0.13.0-01e2d20)

v13.0 was released to the stable channel today. It has been tested through many beta releases and release candidates and is indeed stable, so you should read the changelog and the updated docs to find out what changes you need to make to your settings. It is also recommended to make sure you are using all the required variables in the template, or to reinstall it from scratch.

Edited by yayitazale
2 minutes ago, Bruceflix said:

When mine updated to 13 it was broken.  Moving the db to /config fixed it.

This is pointed out in the changelog:

 

Quote

New location for frigate.db: Due to the support for network shares in HA OS and the popularity of storing recordings on a NAS in general, the database has a new default location of /config/frigate.db. This change is going to be done for existing users too, so frigate will automatically move the db for you.
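You can confirm the automatic migration worked from the unRAID shell. The host path below is an assumption based on the default appdata mapping for the /config bind mount; adjust it to yours:

```shell
# Check for the migrated database from the host side. CONFIG_DIR is an
# assumed path for the /config bind mount -- adjust to your own mapping.
CONFIG_DIR=${CONFIG_DIR:-/mnt/user/appdata/frigate}

if [ -f "$CONFIG_DIR/frigate.db" ]; then
  echo "frigate.db found in /config (new v0.13 default location)"
else
  echo "frigate.db not found in $CONFIG_DIR -- check the container logs"
fi
```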

 


So I have a CUDA capable GPU - I checked.  I also have the newest NVIDIA drivers.  As of ver13 attempting to run the container results in this error:

 

[01/31/2024-16:13:22] [TRT] [W] CUDA initialization failure with error: 35. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

 

So, after following that link it's talking about installing the CUDA libraries, etc.  Before I break something else, is this actually needed?


I ended up with all my current recordings getting deleted from the drive, but anyway, I've got it up and working now with a new config.

 

Might make a script to do a monthly backup to array.
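A monthly backup like that could be scheduled with the User Scripts plugin. This is only a sketch; SRC and DST are example paths, point them at your own appdata and array shares:

```shell
#!/bin/bash
# Sketch of a monthly Frigate backup for the unRAID User Scripts plugin.
# SRC and DST are example paths -- adjust them to your own shares.
SRC=${SRC:-/mnt/user/appdata/frigate}
DST=${DST:-/mnt/user/backups/frigate/$(date +%Y-%m)}

if [ -d "$SRC" ]; then
  mkdir -p "$DST"
  # -a preserves permissions/timestamps; --delete mirrors removals.
  rsync -a --delete "$SRC/" "$DST/"
  echo "backup written to $DST"
else
  echo "source not found: $SRC"
fi
```

In User Scripts, set the schedule to "Custom" with a cron like `0 3 1 * *` to run it at 03:00 on the first of each month.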

 

The only thing I did notice is that v13 is a bit more fussy about low-bit-rate feeds, as in not doing detection where v12 did. So for one of the old budget cameras I changed detection from the substream to the main 1080p stream. In time I'll swap that camera for another Annke C500, since you can control the bit rate on those: 640x360 but at 3 Mb/s, so not blocky. Works great for low-CPU detection.
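That substream-to-main-stream switch is made per camera in frigate.yaml by moving the detect role to the main stream input. This is a hedged sketch; the camera name and RTSP URL are placeholders:

```yaml
cameras:
  budget_cam:            # placeholder camera name
    ffmpeg:
      inputs:
        # Use the main 1080p stream for detection instead of the substream.
        - path: rtsp://user:pass@192.168.1.50:554/main   # placeholder URL
          roles:
            - detect
            - record
    detect:
      width: 1920
      height: 1080
```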

 

Anyway thanks again for bringing this to unraid. 

The new sound detection is cool eh.

Edited by dopeytree

Any idea why my Frigate container won't start? I'm trying to add my RTX 2060 GPU with the drivers already installed, --runtime=nvidia set in Extra Parameters, and the GPU ID set too. I get the error:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #1: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: open failed: /proc/sys/kernel/overflowuid: permission denied: unknown.
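A couple of host-side sanity checks may help narrow this down: whether the driver is actually loaded, and whether docker is registered with the nvidia runtime. These are generic checks, not a guaranteed fix for the overflowuid error, and each step is guarded so it degrades gracefully if a tool is missing:

```shell
# Check the driver is loaded on the host (unRAID terminal):
command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi \
  || echo "nvidia-smi not found -- is the Nvidia Driver plugin installed?"

# Confirm docker knows about the nvidia runtime:
command -v docker >/dev/null 2>&1 \
  && docker info 2>/dev/null | grep -i runtime \
  || echo "docker runtime check skipped"
```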

 


10 hours ago, CoZ said:

So I have a CUDA capable GPU - I checked.  I also have the newest NVIDIA drivers.  As of ver13 attempting to run the container results in this error:

 

[01/31/2024-16:13:22] [TRT] [W] CUDA initialization failure with error: 35. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

 

So, after following that link it's talking about installing the CUDA libraries, etc.  Before I break something else, is this actually needed?

Can you share how you are launching the container, and your config?

 

7 hours ago, Hellomynameisleo said:

Any idea why my Frigate container won't start? I'm trying to add my RTX 2060 GPU with the drivers already installed, --runtime=nvidia set in Extra Parameters, and the GPU ID set too. I get the error:
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #1: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: open failed: /proc/sys/kernel/overflowuid: permission denied: unknown.

 


Are you using the card in another container, or passing it through to a VM?

36 minutes ago, yayitazale said:

Can you share how you are launching the container, and your config?

 

Are you using the card in another container, or passing it through to a VM?

No, it's not being used for anything else. My GPU settings were working before the Frigate docker update, so I'm not sure why it's not working now.

 



With the most recent update Frigate is no longer looping/crashing. That said, with my configuration unchanged, I'm not getting any object detection events like I did in v12.

I see CUDA/the GPU being loaded in the logs, but it doesn't actually do the detection. v12 worked great.

8 hours ago, yayitazale said:

Can you share how you are launching the container, and your config?

 

 

 

I'm launching it as I do every other container.  Right clicking + Start Container. 

 

These are the entire logs when it first starts up:

s6-rc: info: service s6rc-fdholder: starting
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service s6rc-fdholder successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service trt-model-prepare: starting
s6-rc: info: service log-prepare: starting
s6-rc: info: service log-prepare successfully started
s6-rc: info: service nginx-log: starting
s6-rc: info: service go2rtc-log: starting
s6-rc: info: service frigate-log: starting
s6-rc: info: service nginx-log successfully started
s6-rc: info: service go2rtc-log successfully started
s6-rc: info: service go2rtc: starting
s6-rc: info: service frigate-log successfully started
s6-rc: info: service go2rtc successfully started
s6-rc: info: service go2rtc-healthcheck: starting
s6-rc: info: service go2rtc-healthcheck successfully started
Generating the following TRT Models: yolov4-416,yolov4-tiny-416
Downloading yolo weights
2024-02-01 10:30:12.079536551  [INFO] Preparing new go2rtc config...
2024-02-01 10:30:13.159361526  [INFO] Starting go2rtc...
2024-02-01 10:30:13.279687971  10:30:13.279 INF go2rtc version 1.8.4 linux/amd64
2024-02-01 10:30:13.280390371  10:30:13.280 INF [api] listen addr=:1984
2024-02-01 10:30:13.280428249  10:30:13.280 INF [rtsp] listen addr=:8554
2024-02-01 10:30:13.280808608  10:30:13.280 INF [webrtc] listen addr=:8555

Creating yolov4-tiny-416.cfg and yolov4-tiny-416.weights
Creating yolov4-416.cfg and yolov4-416.weights

Done.
2024-02-01 10:30:21.744747617  [INFO] Starting go2rtc healthcheck service...

Generating yolov4-416.trt. This may take a few minutes.

Traceback (most recent call last):
  File "/usr/local/src/tensorrt_demos/yolo/onnx_to_tensorrt.py", line 214, in <module>
    main()
  File "/usr/local/src/tensorrt_demos/yolo/onnx_to_tensorrt.py", line 202, in main
    engine = build_engine(
  File "/usr/local/src/tensorrt_demos/yolo/onnx_to_tensorrt.py", line 112, in build_engine
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(*EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
TypeError: pybind11::init(): factory function returned nullptr
[02/01/2024-10:30:38] [TRT] [W] Unable to determine GPU memory usage
[02/01/2024-10:30:38] [TRT] [W] Unable to determine GPU memory usage
[02/01/2024-10:30:38] [TRT] [W] CUDA initialization failure with error: 35. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Loading the ONNX file...

Generating yolov4-tiny-416.trt. This may take a few minutes.

Traceback (most recent call last):
  File "/usr/local/src/tensorrt_demos/yolo/onnx_to_tensorrt.py", line 214, in <module>
    main()
  File "/usr/local/src/tensorrt_demos/yolo/onnx_to_tensorrt.py", line 202, in main
    engine = build_engine(
  File "/usr/local/src/tensorrt_demos/yolo/onnx_to_tensorrt.py", line 112, in build_engine
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(*EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
TypeError: pybind11::init(): factory function returned nullptr
[02/01/2024-10:30:41] [TRT] [W] Unable to determine GPU memory usage
[02/01/2024-10:30:41] [TRT] [W] Unable to determine GPU memory usage
[02/01/2024-10:30:41] [TRT] [W] CUDA initialization failure with error: 35. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Loading the ONNX file...
Available tensorrt models:
ls: cannot access '*.trt': No such file or directory
s6-rc: warning: unable to start service trt-model-prepare: command exited 2

 

 

 

I had removed the original Frigate container and template and pulled down a "fresh" copy for v13, installing the NVIDIA branch when Community Apps asked which branch I wanted. So I did not upgrade from 12 to 13; I started with a fresh pull of the container, to be safe that there would be no leftover issues.
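For what it's worth, CUDA error 35 typically corresponds to cudaErrorInsufficientDriver: the host driver is older than the CUDA runtime bundled inside the image. On unRAID the CUDA libraries live in the container itself, so nothing extra should need installing on the host, but the Nvidia Driver plugin has to be recent enough. A guarded check of the host driver version:

```shell
# CUDA error 35 usually means the host driver is too old for the CUDA
# runtime inside the image. Print the installed driver to compare against
# the requirements in the Frigate release notes:
command -v nvidia-smi >/dev/null 2>&1 \
  && nvidia-smi --query-gpu=name,driver_version --format=csv,noheader \
  || echo "nvidia-smi not found -- install/update the Nvidia Driver plugin"
```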

7 minutes ago, CoZ said:

 

I'm launching it as I do every other container.  Right clicking + Start Container.

These are the entire logs when it first starts up:

[startup log trimmed; quoted in full in the post above]

I had removed the original Frigate container and template and pulled down a "fresh" copy for v13 and installed the NVIDIA Branch when it asked me in Community Apps which branch I wanted to install.  So I did not upgrade from 12-13, I started with a new pull down of the container.  I wanted to be safe that there would be no issues left over.

You should open an issue on GitHub then...

17 minutes ago, repomanz said:

With the most recent update Frigate is no longer looping/crashing. That said, with my configuration unchanged, I'm not getting any object detection events like I did in v12.

I see CUDA/the GPU being loaded in the logs, but it doesn't actually do the detection. v12 worked great.

Recheck the docs and the changelog, and if you still have issues, ask on GitHub. The template is working fine, and it works fine for me.

On 1/31/2024 at 3:38 PM, yayitazale said:

v13.0 was released to the stable channel today. It has been tested through many beta releases and release candidates and is indeed stable, so you should read the changelog and the updated docs to find out what changes you need to make to your settings. It is also recommended to make sure you are using all the required variables in the template, or to reinstall it from scratch.

Hi, what is the correct way to upgrade the Frigate image on my unRAID? Should I remove/uninstall my current Frigate and install it again from unRAID Apps?
And big thanks for your work.

1 minute ago, kpcz said:

Hi, what is the correct way to upgrade the Frigate image on my unRAID? Should I remove/uninstall my current Frigate and install it again from unRAID Apps?
And big thanks for your work.

It is not mandatory, but I'd recommend it, as the template has new entries now (especially for those using an Nvidia card as a detector, because the TensorRT models are now generated in the Frigate container itself).

11 minutes ago, yayitazale said:

It is not mandatory, but I'd recommend it, as the template has new entries now (especially for those using an Nvidia card as a detector, because the TensorRT models are now generated in the Frigate container itself).

If it's not mandatory, what is the alternative way to do the upgrade? (I'm not using Nvidia, but Coral.)

2 minutes ago, kpcz said:

If it's not mandatory, what is the alternative way to do the upgrade? (I'm not using Nvidia, but Coral.)

Check the breaking changes section of the changelog, and if there is anything in your config that must be changed, read the docs to understand how to change it. Then update and check the logs. If there are no problems, you can follow the same procedure for any new features you want to add.

