chrispcrust

  1. Thank you so much! I did get it set up on my one (test) camera; I'm only going to be using it for two cameras total, so it shouldn't overload my 1060 3GB too badly. Really great documentation! I can't say I know the first thing about object detection models, so I'm using the defaults for everything. I'll report back with any feedback on performance.
  2. Hey there, I'm using the beta8 Frigate docker with MQTT in Docker, run HA in a VM (also on Unraid), and use an Nvidia GPU for hardware acceleration. All is working fine apart from a couple of minor issues; I'm really liking this and finally dropped Shinobi (!).

     I'm curious whether there are any additional steps or instructions to enable object detection using the Nvidia GPU as opposed to the CPU. I don't have a Coral, and my vintage Xeons seem to be doing fine, but it would be great to offload to the GPU. I see this in the docker template: "If you want to use the NVidia TensorRT Detector, you have to add the tag suffix '-tensorrt' to the repository link, and create the models using the 'tensorrt-models' app from the CA app."

     So would the steps be more or less as follows? 1) Get tensorrt-models and follow the instructions listed there. 2) Change the repository on Frigate from ghcr.io/blakeblackshear/frigate:0.12.0-beta8 to ghcr.io/blakeblackshear/frigate:0.12.0-beta8-tensorrt (note: is it beta8 or beta2?). Is that it? Do I need to add anything else to my Frigate config? The linked page states: "Note: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection." Thanks very much!
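For anyone finding this later: beyond swapping the image tag, the Frigate config does need a detector section pointing at a generated model. A minimal sketch of what that might look like on the -tensorrt image — the model file name and dimensions below are assumptions and depend on which model the tensorrt-models app actually generated for you:

```yaml
# Hypothetical frigate.yml fragment for the -tensorrt image.
# /trt-models/yolov7-320.trt is a placeholder; use the path to
# the model file that tensorrt-models created on your system.
detectors:
  tensorrt:
    type: tensorrt
    device: 0          # index of the GPU to use

model:
  path: /trt-models/yolov7-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
```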
  3. This post is from 2020, but I didn't see an update on whether it was ever figured out. So, if anyone is wondering how to use Live Server with Code Server: go into your docker template and add port 5500 as shown above; your docker will then look like the following. Make note of the internal docker IP, which I have circled in red below.

     Now install the Live Server extension within Code Server. The only thing you have to change is the Host setting, to match the value shown above. Then you can activate Live Server within Code Server and, while it's running, access it at http://<unraidserverip>:5500 (make sure Live Server is set to use port 5500, of course). The extension makes your browser automatically refresh whenever you save the HTML file in Code Server. Nice!

     Note that this method only works if the computer you're running VS Code on is on the same LAN as your unraid server, or if you're on a VPN that puts you on the same network. It is not included in the reverse proxy, since swag only proxy-passes the port associated with the Code Server app itself (8443), not 5500. Hope this helps someone; I searched far and wide and could not find an answer anywhere else!
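To summarize the extension side of the setup above: the two Live Server settings that have to match your container are the host and the port. A sketch of the relevant entries in Code Server's settings.json — the container IP here is a made-up example, so substitute the internal docker IP from your own template:

```json
{
  "liveServer.settings.host": "172.17.0.5",
  "liveServer.settings.port": 5500
}
```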
  4. It seems this needs to be done every time NC itself is updated (not the docker, NC itself). This docker hasn't been updated in probably a year at this point, and there are a couple of other options for DocumentServer in the app store. Has anyone found an alternative that works with the latest Onlyoffice connector within NC? Any limits to mobile editing, etc.? (I believe that's why I installed this version in the first place.) Thanks!
  5. No problem, I can't say I blame you at all. After doing some research, I've become pretty bearish on using this for layman, self-hosted applications.

     It seems the purpose of a TURN server is to provide a bypass around a strict firewall when a remote user's true IP address (i.e. on the WAN or a separate network from the NC instance) is masked, so that voice, video, and data can be routed around the firewall, through the TURN server, instead of via the normal route that would be used if both users were on the same LAN. There are many reasons a user's IP address may be masked, such as a VPN. This is less than ideal for many of the NC users in the Unraid community who are reverse proxying their instance with something like letsencrypt certificates. Falling back on the TURN server also eliminates the peer-to-peer nature that NC Talk is built on, resulting in slower, more sluggish performance.

     Secondly, it requires the coturn instance to be exposed (port forwarding) and a domain name pointing at the WAN IP address that the coturn instance is running on, so that remote users can be directed to it. For folks like me using a Cloudflare tunnel in an effort to mask my true IP for all my exposed dockers, including Nextcloud, this is basically a non-starter and probably not worth it from a security standpoint. Unfortunate; I'm really hoping a different technology is utilized in the future so we can all self-host our video/voice/chat communications with friends and family and not rely on third parties. For now, I guess Signal remains the best option for me.
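For anyone who still wants to try it despite the caveats above, the core of a coturn setup for Talk is only a few lines of turnserver.conf. This is a sketch rather than a tested config; the realm and secret are placeholders:

```
# Hypothetical turnserver.conf sketch for use with Nextcloud Talk.
# turn.example.com and the secret are placeholders.
listening-port=3478
fingerprint
use-auth-secret
static-auth-secret=REPLACE_WITH_SHARED_SECRET
realm=turn.example.com
# Port 3478 (TCP and UDP) must be forwarded to the coturn container.
```

The same shared secret then goes into Talk's STUN/TURN server settings in the Nextcloud admin UI.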
  6. I have the same issue in the logs: "socket: Protocol not supported", indefinitely. However, in the Nextcloud UI, mine reports successful ICE candidates (checkmark). Even with the "success", actually initiating a video call with Talk via WAN works for about a second and then quits on me. I've searched far and wide on this issue; it's a very complicated topic, and I don't think there are any other answers out there short of finding an expert. OP has stated he is not an expert either. Also, just FYI, the fork of this docker from instrumentisto states: "PROJECT IS CLOSED AND ARCHIVED. NO MAINTAINING WILL BE CONTINUED." So this docker probably won't be getting further updates.
  7. On the topic of hardware acceleration: after much trial and error, I found that manually updating Shinobi via the CLI did the trick to get FFMPEG working for encoding recordings. Open a console on the Shinobi docker, run the following, then restart the container:

     git reset --hard
     git pull
     npm install --unsafe-perm

     However, it is finicky: if I tried to convert an H265 stream from my camera to H264 for both the dashboard and the recordings, it failed. For my setup, I set my camera to H264 and copy it through to the dashboard and recordings, with no HW acceleration used. I also found that when encoding (vs. copying) for recording purposes, my CPU use actually went up; with two cameras, CPU use is minimal anyway. I would have liked to use H265 for recordings to minimize size, but that introduces a lot of codec issues when trying to view the files in browsers (which don't support H265), etc. Oh well.

     I think the ideal use for GPU acceleration with Shinobi is the object detection plugin (Yolo). It can be set up with SOI's guide on YouTube, as well as this link: https://hub.shinobi.video/articles/view/JtJiGkdbcpAig40. The Yolo.js plugin does appear to be doing "something" when I watch nvidia-smi in the console and walk in front of the camera(s). So, I guess that can be considered something.
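For reference, when GPU encoding does work, the kind of ffmpeg invocation involved looks roughly like this. This is a sketch, not the exact command Shinobi builds; the RTSP URL and output path are placeholders:

```shell
# Hypothetical example: NVDEC decode plus NVENC H.264 encode.
# The stream URL and output file are placeholders.
ffmpeg -hwaccel cuda -i rtsp://192.168.1.50:554/stream \
       -c:v h264_nvenc -b:v 2M recording.mp4
```

Watching nvidia-smi while this runs is an easy way to confirm the GPU (rather than the CPU) is doing the work.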
  8. I updated earlier today; no issues with my camera streams, recordings, or reverse proxy setup. I don't notice much of a difference. My sense is that this update just brings the docker up to speed with the latest "official" development versions of Shinobi; prior to today, it hadn't been updated in years.
  9. Got it working, love the UI. Reverse proxied it, tested with a friend, everything working great!
  10. Seconded! Would be great to be able to utilize our GPUs for more dockers than Plex!