Posts posted by chrispcrust

  1. 19 hours ago, yayitazale said:

    Steps are almost correct, but you also need to add the extra parameter "--runtime=nvidia", then fill in the Nvidia entries of the template. You can also delete the intel/amd GPU entries if you are going to use the nvidia card itself for hardware accel (but you can split the load and let the nvidia card do only the detections).

     

    Anyway, I just pushed a new version of the template to avoid mistakes in step 2 of your list, so from now on there will be a branch selector to make deployment easier for the regular/nvidia branches.

     

    The docs you are referring to are only for Frigate 11. The Frigate 12 beta docs are here: https://deploy-preview-4055--frigate-docs.netlify.app/ (this is also referenced in the description of the template).


     

     

    Thank you so much!  I did get it set up on my one (test) camera. I'm only going to be using it for 2 cameras total, so it shouldn't overload my 1060 3GB too badly :) 

     

    Really great documentation!  I can't say I know the first thing about object detection models, so I'm using the defaults for everything. I'll report back with any feedback on performance.

     

    [screenshot: watch nvidia-smi output]

     

     

     

  2. Hey there,

     

    I'm using the beta8 Frigate docker, MQTT in docker, running HA in a VM also on unraid, and using an Nvidia GPU for hardware acceleration.  Everything is working fine apart from a couple of minor issues. I'm really liking this, and I finally dropped Shinobi (!)

     

    I'm curious whether there are any additional steps or instructions to enable object detection using the Nvidia GPU as opposed to the CPU.  I don't have a Coral, and my vintage Xeons seem to be doing fine, but it would be great to offload to the GPU.

     

    I see this in the docker template: "If you want to use the NVidia TensorRT Detector, you have to add the tag suffix "-tensorrt" to the repository link, and create the models using the "tensorrt-models" app from the CA app."

     

    So would the steps be more or less as follows?

     

    1) Get the tensorrt-models app and follow the instructions listed there

     

    2) Change repository on Frigate from ghcr.io/blakeblackshear/frigate:0.12.0-beta8 to ghcr.io/blakeblackshear/frigate:0.12.0-beta8-tensorrt

     

    (note - is it beta8 or beta2?)

     

    Is that it?  Do I need to add anything else to my Frigate config?  In this link it states "Note: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection."
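From my reading of the beta docs, I'm guessing the detector section of the config would end up looking something like this (just a sketch on my part - the model path and dimensions are assumptions that would depend on what the tensorrt-models app generates):

```yaml
# Guess at a Frigate 0.12 TensorRT detector config (path/dimensions assumed,
# matching whatever model the tensorrt-models app produced)
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index

model:
  path: /trt-models/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
```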

     

    Thanks very much!

  3. On 11/1/2020 at 11:28 PM, romain said:

    Does anyone know how to get to the preview page from the Live Server extension?  I saw someone asking about the extension in the past, but I think my question is different. Live Server says it's running, on port 5500.  Normally you would go to 127.0.0.1:5500/index.html if you were using VS Code on your local workstation, but since this is a docker container, that doesn't work like that.  I tried adding a few ports to the docker configuration to let me access ports 5500 through 5505 from my LAN, but that didn't work either.  Below is a screenshot of the port 5500 config that I tried to add; I did this six times, for ports 5500 through 5505.

     

    This may be more of a routing question than a code-server question, but if anyone here can help me out I'd appreciate it!

     

    [screenshot: port 5500 added to the docker port configuration]

     

    This post is from 2020, but I did not see an update on whether it was ever figured out.

     

    So, for anyone wondering how to use Live Server with Code Server:

     

    Go into your docker template and add port 5500 as shown above.  Your docker will now look like the following.  Make note of the "internal docker IP", as circled in red below:

     

    [screenshot: Code Server docker template, with the internal docker IP circled in red]

     

    Now install the Live Server extension within Code Server.  The only thing you have to change is the Host setting, to match the internal docker IP noted above:

     

    [screenshot: Live Server Host setting in Code Server]

     

    Now you can "activate" Live Server within Code Server.  To access it while it's running, go to http://<unraidserverip>:5500.

     

    (Make sure Live Server is set to use port 5500, of course).
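If you'd rather set it in settings.json than through the settings UI, the Live Server entries look something like this (the IP here is just an example - use your own internal docker IP):

```json
{
  "liveServer.settings.host": "172.17.0.5",
  "liveServer.settings.port": 5500
}
```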

     

    This extension will allow your browser to automatically refresh upon save of the HTML file within Code Server.  Nice!

     

    Note that this method will only work if the computer you're running VS Code on is on the same LAN as your unraid server, or if you are on a VPN that puts you on the same network as your unraid server.  It does not get "included" in the reverse proxy, as swag is only proxy passing the port associated with the primary Code Server app itself (8443), not 5500.

     

    Hope this helps someone as I searched far and wide and could not find an answer anywhere else! :)

     

     

  4. On 1/24/2022 at 4:51 AM, SavageAUS said:

    Can we get an update for onlyofficedocumentserver please? Documents won't open and I get an error (Error when trying to connect (Not supported version) (version 5.6.4.20)), so I think it needs a version update?

     

    EDIT: I have followed the instructions above and got it working again, but how do I stop OnlyOffice from updating in Nextcloud?

     

     

     

    These are the steps that worked for me.

     

    1. Disable and remove the OnlyOffice app from the Nextcloud web UI.

    2. Open the UnRAID terminal using the >_ button in the top right part of the UnRAID interface or ssh.
     2a. Navigate to the Nextcloud apps directory by typing (change the nextcloud value to the name of your docker container):
     cd /mnt/user/appdata/nextcloud/www/nextcloud/apps/

    3. Download the previous version of the OnlyOffice connector by typing:
     sudo -u root wget https://github.com/ONLYOFFICE/onlyoffice-nextcloud/releases/download/v7.1.2/onlyoffice.tar.gz

    4. Unpack the archive:
     sudo -u root tar xvf onlyoffice.tar.gz

    5. Restart the Nextcloud docker instance

    6. Fix the access permissions of the underlying files and folders:
     find ./onlyoffice -type d -exec chmod 0750 {} \;
     find ./onlyoffice -type f -exec chmod 0640 {} \;
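If you want to sanity-check what step 6 does before running it on the real folder, here's the same pair of find commands run against a scratch directory (all of the names here are just for the demo):

```shell
# Demo of the step-6 permission fix on a throwaway directory
# (the real target is the onlyoffice folder under the Nextcloud apps dir)
mkdir -p scratch/onlyoffice/sub
touch scratch/onlyoffice/file.js scratch/onlyoffice/sub/style.css
find ./scratch/onlyoffice -type d -exec chmod 0750 {} \;
find ./scratch/onlyoffice -type f -exec chmod 0640 {} \;
# Directories end up 750 (rwxr-x---), files 640 (rw-r-----)
stat -c '%a %n' scratch/onlyoffice/sub scratch/onlyoffice/file.js
```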

     

    Seems this needs to be done every time NC itself is updated (not the docker, NC itself).  This docker has not been updated in probably a year at this point, and there are a couple of other options for DocumentServer in the app store.  Did anyone find an alternative that works with the latest OnlyOffice connector within NC?  Any limits to mobile editing, etc.? (I believe that's why I installed this version in the first place.)  Thanks!

  5. On 2/21/2022 at 6:59 PM, xthursdayx said:

    Yeah, this is kind of the wall I've run into unfortunately. As you noted, I'm not an expert, and while I was able to get this container working with Matrix for video calls in the past, troubleshooting other use cases is beyond the scope of what I have time to dig into. Moreover, the development of Coturn in general is pretty specialized and slow - mostly undertaken by one dev - and dockerized versions in particular have been difficult to develop and troubleshoot. I may try to dig into this again in the future and create my own docker image (and new Unraid template), but for now it's on a bit of an indefinite hold.

     

    No problem, can't say I blame you at all.  After doing some research, I've become pretty bearish on using this for layman, self-hosted applications.  It seems the purpose of a TURN server is to provide a "bypass" around a strict firewall when a remote user's (i.e. WAN, or a separate network from the NC instance) true IP address is masked, so that voice, video and data can literally be routed "around the firewall" through the TURN server, instead of the normal routing that would be used if both users were on the same LAN.  This is less than ideal for many of the NC users in the Unraid community who are reverse proxying their instance using something like letsencrypt certificates.  There are many reasons a user's IP address may be masked, such as a VPN.  Also, falling back on the TURN server eliminates the peer-to-peer nature that NC Talk is built on, resulting in slower, more sluggish performance.

     

    Secondly, it requires the coturn instance to be exposed (port forwarding) and a domain name pointing at the WAN IP address that the coturn instance is running on, so that remote clients can be directed to it.  So for folks like me using a cloudflare tunnel in an effort to mask my true IP for all my exposed dockers, including Nextcloud, this basically becomes a non-starter and is probably not worth it from a security standpoint.

     

    It's unfortunate; I'm really hoping a different technology can be utilized in the future so we can all self-host our video/voice/chat communications with friends and family and not rely on 3rd parties.  For now, I guess Signal continues to be the best option for me.

  6. On 11/1/2020 at 3:09 AM, 4554551n said:

    Additionally - and this is the big one, @joroga22, perhaps you could help me with this - I cannot seem to get things running with your settings.

    Nextcloud doesn't seem to want to connect to the turn server; where you have a tick next to the delete button, mine just spins forever.
    The logs in the coturn server (via the logs drop-down) are giving me:
     

    A few lines about listener addresses, real addresses and relay addresses, then 47 lines of

     

    socket: Protocol not supported

     

    I have the same issue in Logs - "socket: Protocol not supported" indefinitely.

     

    However, in the Nextcloud UI, mine is reporting successful ICE candidates (checkmark).

     

    Even with the "success", actually initiating a video call with Talk via WAN works for about a second, then quits out on me.

     

    I've searched far and wide on this issue; it's a very complicated topic, and I don't think there are any other answers out there unless someone is an expert.

     

    OP has stated he is not an expert either. And just FYI, the fork of this docker from instrumentisto states: "PROJECT IS CLOSED AND ARCHIVED. NO MAINTAINING WILL BE CONTINUED." ... so this docker probably won't be getting further updates.

    On the topic of hardware acceleration: after much trial and error, I found that manually updating Shinobi via the CLI did the trick to get FFmpeg working for encoding recordings.

     

    Open the console on the Shinobi docker and type the following:

    git reset --hard

    git pull

    npm install --unsafe-perm

     

    Then restart your docker container.
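If you'd rather not open the container console, the same steps can be run in one shot from the unraid terminal with docker exec (I'm assuming the container is named Shinobi here - substitute whatever yours is called):

```shell
# One-shot version of the update steps above, run from the unraid host
# ("Shinobi" is an assumed container name)
docker exec Shinobi sh -c 'git reset --hard && git pull && npm install --unsafe-perm'
docker restart Shinobi
```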

     

    However, it is finicky - if you try to change an H265 stream from your camera to H264 for both the dashboard display and recording, it fails.  For my setup, I set my camera to H264 and copy it through to the dashboard and recordings, with no hardware acceleration used.

     

    I also found that when encoding vs. copying for recording purposes, my CPU use actually went up.  With 2 cameras, CPU use is minimal anyway.

     

    I would have liked to use H265 for recordings to minimize size, but that introduces a lot of codec issues when trying to view the files in browsers (which don't support H265), etc.  Oh well.

     

    I think the ideal use for GPU acceleration with Shinobi is with the object detection plugin (Yolo).  This can be set up with SOI's guide on YT, as well as this link: https://hub.shinobi.video/articles/view/JtJiGkdbcpAig40

     

    The Yolo.js plugin does appear to be doing "something" when I run watch nvidia-smi in the console and walk in front of the camera(s).  So, I guess that counts for something.

     

     

     

     

  8. On 2/3/2022 at 8:09 PM, RodWorks said:

    I see there's an update for the docker container; just making sure there are no issues before updating?

     

    What are the changes?

    I updated earlier today with no issues in my camera streams, recordings, or reverse proxy setup.

     

    I haven't noticed much change - my sense is this update just brings the docker up to speed with the latest "official" development versions of Shinobi.  Prior to today, it hadn't been updated in years.
