Sic79

Members
  • Posts

    107
  • Joined

  • Last visited

Posts posted by Sic79

  1. Looks like this isn't being maintained anymore
    I am working on setting up my own docker to upload an updated version of the pihole-dot-doh

    Edit - I have submitted it to Squid to get it added onto CA
    Edit 2 - Pihole DoT-DoH is now on CA, with the latest FTL version 5.5.1 etc.
    This can be installed over testdasi's pihole without any issues.

    Thanks, it worked great :).
    • Like 1
  2. 1 minute ago, dlandon said:

    Sorry, didn't mean to come across like that.  You need to understand that the Unraid community is only a small base of the Zoneminder users.  People saw the Docker and started using it on all manner of hosts - Macs, Windows, Debian, etc and then asked for support for that particular platform.  There is no way possible for me to do that.  ES/ML is also not that easy to get into a Docker and have it act like a Docker.

     

    I think the way forward is for the Docker to be rebuilt into a ZM and ES/ML only Docker and get rid of the environment variables used to configure the Docker at runtime.  The Docker that is in CA right now could stay for those like me that don't use ML.  The issue is the amount of time that will be needed to accomplish this, which I just can't 'donate' at this point.  I'm thinking of a funding campaign so I can commit the time to the task.

     

    This is a difficult decision for me because I volunteer my time for Unraid, and I'm not here to make money.  Unfortunately for me, this Docker has been too successful with over 5 Million downloads and has ended up taking way too much time from my other work.

    No worries, and no hard feelings.

     

    Well, maybe a campaign is not a bad idea at all. If you think about the alternatives to Zoneminder, there are not many complete solutions (if any) that include everything for free. This is also a great way for people to really show their appreciation for your work/time and to actually support the development in their own way.

  3. 10 hours ago, dlandon said:

    Good luck with that.  The only platform that seems to support the latest in GPU hardware is Ubuntu 20.04.  I really don't think you realize how much work is involved in the ML part of ES.  My Docker made it real easy to just configure ML and go.  All the setup and configuration was pretty much done for you.  Maybe I made it too easy as hardly anyone seems to have an appreciation for the amount of work involved to contribute towards my efforts.

    Hm, seems like a bit of sarcasm here. Well, if you remember, I was one of those who helped you trial-and-error the first steps of getting the opencv compile working in the first place. So yes, I understand the complexity of getting everything working smoothly, even if I'm not on your level of knowledge in dockers. I work as an IT pro/dev, so I can see your point that people don't realise how much work goes into something that can seem very simple on the surface but is very complex underneath.

     

    We just want a new solution that's not a VM, so we can share a single GPU amongst dockers.

    I can see that this is not going anywhere so discussion on a new solution should be elsewhere.

     

    So @dlandon, thanks for the time, it's been great using your ZM docker with ML.

     

  4. Unfortunately I spoke too soon.  If I had read the FAQ with more comprehension: the current mlapi implementation still requires most, if not all, of the ML libraries to be installed on the ZM install, so that isn't a solution just yet.  The mlapi docker hasn't been updated in 7 months, and you will probably have to set up/build the docker yourself because of the compile requirements of opencv, so I don't think a one-click-install app version of an ML docker is in the cards for most unraid users.  A VM following the instructions would therefore be 'easier' for most users.
     
    And to continue to rain on the parade, it looks like the Google Coral TPU isn't really an interchangeable processor like Yolo and cpu/gpu.  The coral runs different models, has a different performance profile (faster but less successful detections), and is used more as a separate suite of detectors, like object vs face vs alpr etc.  I'm not entirely sure what objects the tpu can detect, to be honest; it may or may not be the same list as the yolo list.
     
    So it's going to have to be a separate GPU and a VM for the foreseeable future if one wants 'max performance' (or a TPU, if you're curious), and then wait for pliablepixels to continue development on the MLAPI, advancing it to a less kludgy (his words) implementation.

    So sad it had to be like this since it worked so well, but sure, I can understand @dlandon's decision too.

    Don’t like the idea of moving to a VM though, since I can’t even fit an extra GPU in my server. So for me it isn’t a choice :(.

    I checked the “AppStore” and saw that @ich777 had a docker with Debian Buster that had Nvidia support. Maybe it is possible to install everything in that one just to test while waiting/working on a better solution.
  5. 17 hours ago, ThreeFN said:

    ES does support remote ML server as described here.

     

    Which, if you go down the rabbit hole, leads you to pliablepixels' mlapi software, which does have a containerized version someone has made (and may have GPU support?).  It may be possible even now to glue this all together.

     

    Obviously experimentation must ensue.

     

    The more I dig through stuff the more I tend to agree with dlandon that this container is doing a lot of 'init install' stuff that is more OS/VM behavior than docker pull/start behavior and I don't fault wanting to kick that to the curb.

     

    Having said that, ZM is 'almost exclusively made useful' by ES's ML stuff for false positive rejection, so no ML = no ZM for me.  So at the moment it looks like the options are: spin up a VM and get a well supported installation (pliablepixels' documentation), or investigate the aforementioned remote ML docker.  My preference is for the latter because at the moment, containerization is about the only way in unraid to spread a GPU amongst multiple workloads (e.g. plex & ML) unless/until vGPU/SR-IOV/etc support is added and VMs & docker can share.

     

    I guess the other solution would be to move ML processing to a Google Coral device, and give that device to the ZM VM.  Or even go the route of a TPU & mlapi VM remoted to the ZM docker.  The benchmarks seem to indicate the TPU is good but maybe hamstrung a bit by its USB 2.0 (or slower) interface to get the data.  Not sure if the M.2 versions would be any faster; if the TPU can't saturate 2.0, that seems like the chip is the bottleneck and not its interface...

     

    Hell of a device for the cost, size, and power draw though...

     

    Dlandon, I'm guessing you'll be dropping GPU support entirely from the docker?  Like even for ffmpeg use directly by ZM and not for ML (h.264 decode etc)?  Or is that something that doesn't require your direct support (give container gpu, ffmpeg sorts it out) and will work on its own?

    @ThreeFN This is almost exactly what I would have written. We're in the same boat for sure :)

  6. 8 hours ago, falconexe said:

     

     

    I've been working on both 1.4 and 1.5 simultaneously. They will still be released separately, but the code overlaps in some areas, so I had to figure some of it out now.

     

    Goal is for a super clean and refined Varken/Tautulli/Plex Dash which will be integrated directly into UUD, sporting some of the same falconexe style/customizations (like working growth trending) found in the UUD.

     

    @Stupifier Thought You Would Appreciate This Sneak Peek...

     

    [dashboard screenshots]

     

    Wow it looks fantastic, great work :). You really are a Grafana Wizard with a sense of design.

    • Thanks 1
  7. 1 hour ago, jmbailey2000 said:

    So, obviously I'm missing something simple, because I can't do anything in Grafana. The default dashboard is all there is and there is no way to add/import other boards, nor can you change any of the individual panels. I've been playing with Grafana through another implementation but wanted to try this since it is an "all-in-one".

     

    Any help pointing me in the right direction is appreciated. 

     

    Thanks!

    Did you log in as admin?

    "Go to ip:3006 to access grafana, login with admin/admin and make changes to the default dashboard."

  8. 5 hours ago, Aegisnir said:

    So I have my own Grafana board setup and do not use this container but you should be able to just edit the telegraf.conf and enable the server and enter unraid IP under [[inputs.apcupsd]].  I don't remember having to do anything else to get my APC UPS stats to appear.
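    For reference, the change described above might look something like this telegraf.conf fragment (a sketch only; the server address is an example, and 3551 is apcupsd's default NIS port):

```toml
# Sketch of a telegraf.conf fragment for APC UPS stats.
# The IP below is an example - use your own Unraid server's address.
[[inputs.apcupsd]]
  # apcupsd's Network Information Server; 3551 is the default port
  servers = ["tcp://192.168.1.10:3551"]
  timeout = "5s"
```

    After editing, the telegraf container needs a restart for the new input to be picked up.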

    Thank you for the tip :)

  9. 8 hours ago, testdasi said:

    I don't have a UPS so unfortunately I can't really test adding UPS functionality. Installing the apcupsd exporting script is (somewhat) trivial, but without an actual device to test, it's pure luck if anything works.

    You are probably better off installing the apcupsd-influxdb-exporter docker from the Apps page and point it to your Gus docker IP + 8086 port. Have a look at this guide to see if it helps.

     

    You will also have to read up a bit on how to edit grafana dashboard / panel to update the queries.

    Each person's hardware has some unique parameters (e.g. which sensor to read CPU temperature), making a completely hands-off dashboard virtually impossible. Usually it just involves picking a few values from the drop-down to see which one makes the most sense.

    Ok, I might give it a shot when I have some time to test adding my UPS.

    And of course, I understand that it's nearly impossible to cover all unique combinations of hardware. Just thought that since all the other CPU sensors were read correctly, maybe it was just a little "issue". I'll check and see if I can find the right settings for reading my CPU temp.

     

    And thanks again for a superb docker :)

    • Thanks 1
  10. 10 hours ago, testdasi said:

    Update (21/09/2020):

    • OpenVPN-based dockers now will crash out if the user doesn't provide an ovpn file as per instructions.
    • Added Grafana Unraid Stack

    Quick screenshot:

    grafana-unraid-stack-screen.png

    Worked really nicely, thank you for the Docker. I would also love to have monitoring of my APC UPS, or maybe it's hard to add to your docker since people use different brands? I assume if I add that manually it would disappear on a future update?

     

    Edit: Just noticed that the CPU does not report temp. I have an Intel E5-2697 v3, if that helps.

  11. Hi,

     

    I would like to have some of my dockers work together like "master and slave": when I need to restart the "master" docker, the "slaves" should also restart. Would that be possible?

     

    For example, I have a privoxyvpn that 3 other dockers are connected through, but sometimes the privoxyvpn docker needs a restart. When I do that, the dockers that are connected through it lose their connection and need a manual restart to get up and running again.
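    A rough sketch of how this could be scripted today (container names are placeholders - substitute your own; this assumes the standard docker CLI is available, e.g. run via the User Scripts plugin):

```shell
#!/bin/bash
# Sketch: restart a "master" VPN container, then restart each
# dependent container so they re-attach to its network.
# All names are examples; DOCKER/MASTER can be overridden.
DOCKER="${DOCKER:-docker}"
MASTER="${MASTER:-privoxyvpn}"

restart_stack() {
  # Restart the master first; bail out if that fails
  $DOCKER restart "$MASTER" || return 1
  # Then restart every dependent container passed as an argument
  for c in "$@"; do
    $DOCKER restart "$c"
  done
}

# Example usage (hypothetical container names):
# restart_stack deluge jackett sonarr
```

    A fancier version could watch `docker events` for the master's restart instead of wrapping the restart itself, but the wrapper approach is simpler to reason about.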

  12. @dlandon: I just wanted to leave you some quick feedback on the "1.35 master branch" of ZM.

     

    I upgraded the old 1.34 docker and followed the "Breaking changes" from pliablepixels to get the eventserver working again. It has been running for a week now and seems just as stable as 1.34 :). So thank you for supporting the master branch, making it possible to follow the development and, as a bonus, beta test upcoming features.

  13. Since Caddy 1 is now obsolete, it would be very nice if Caddy 2 could also be dockerized.

     

    Caddy 2 - "Caddy 2 is a powerful, enterprise-ready, open source web server with automatic HTTPS written in Go"

    https://caddyserver.com/

    • Like 1
  14. 6 hours ago, ich777 said:

    I've looked into PrivoxyVPN and actually you can do that, but I've also seen that a container from @alturismo is available in the CA App.

    The easiest way to do that is:

    1. Set up OVPN_Privoxy from @alturismo and configure everything so that it fits your needs. (Please note that you must create a port in this container for Deezloader-Remix, in this case TCP 8080, because you will route the traffic through this container.)
    2. Open up the configuration page for my container in Unraid
    3. Click on 'Basic View' in the top right corner (so that you enable 'Advanced View')
    4. Under 'Extra Parameters' paste this: '--net=container:OVPN_Privoxy' (without quotes; please note that you must change 'OVPN_Privoxy' to the corresponding container name that you set up from @alturismo in the first step. Also, I would advise you to delete the port since you're routing the traffic to the other container.)

     

    With this solution you route the whole traffic through the container that you've set up from @alturismo.

    Also, there is a container from @binhex available, but I think that container acts only as a proxy server, and since Deezloader-Remix doesn't support proxies, it's not suitable.

    This will also work if your VPN provider supports connections with OpenVPN. Simply set up an OpenVPN Client container from the CA app, configure it so that it fits your needs, and then basically follow the steps above.

     

    Please report back if that worked for you. ;)
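    For anyone following along, steps 1-4 above correspond roughly to this docker-run sketch (the image names in angle brackets are placeholders, not the actual CA template names):

```shell
# 1. The VPN/proxy container owns the network stack, so it must
#    publish the port the app will listen on (TCP 8080 here):
docker run -d --name OVPN_Privoxy -p 8080:8080 <ovpn_privoxy-image>

# 2-4. The app container joins that container's network namespace.
#    Note there is no -p flag here - port mappings belong to
#    OVPN_Privoxy, and the app's traffic exits through its VPN:
docker run -d --name Deezloader-Remix \
  --net=container:OVPN_Privoxy \
  <deezloader-remix-image>
```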

    Hi @ich777, tried it now but it collides on port 8080 between the dockers, even if I remove the port variable in Deezloader; it seems to be hardcoded or something?

     

    [Info] Server is running @ localhost:8080
    [Error] Error: listen EADDRINUSE: address already in use :::8080

    at Server.setupListenHandle [as _listen2] (net.js:1309:16)
    at listenInCluster (net.js:1357:12)
    at Server.listen (net.js:1445:7)
    at Object.<anonymous> (/deezloaderremix/4.3.0/app/app.js:106:8)
    at Module._compile (internal/modules/cjs/loader.js:1158:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)
    at Module.load (internal/modules/cjs/loader.js:1002:32)
    at Function.Module._load (internal/modules/cjs/loader.js:901:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)
    at internal/main/run_main_module.js:18:47

     

    I also tried to change the default port in the OVPN Docker, but it always shows 8080 when running :|.
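    A note on the error above: because `--net=container:` makes both containers share a single network namespace, whatever the OVPN_Privoxy container runs on :8080 and Deezloader's hardcoded :8080 listener compete for the same socket, hence EADDRINUSE. One way to confirm which process holds the port (assuming `ss` or `netstat` exists in the image; container name as in the thread) is:

```shell
# Run from either container - they share the same network namespace,
# so the listener list looks identical from both sides:
docker exec OVPN_Privoxy sh -c 'ss -tlnp 2>/dev/null || netstat -tlnp' | grep 8080
```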