Everything posted by Jaburges

  1. OK, super weird: built using this, cuda shows up in -hwaccels but is not being used by Zoneminder. I remember I used to use cuvid for hwaccels, not cuda, though.
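     A quick way to check whether the cuda hwaccel is actually usable, independent of Zoneminder - a minimal sketch, where /path/to/sample.mp4 stands in for any test clip:

     ```sh
     # list the hwaccels this ffmpeg build knows about
     ffmpeg -hwaccels

     # force a CUDA decode of a test clip and throw the output away;
     # if this errors, the hwaccel is listed but not actually working
     ffmpeg -hwaccel cuda -i /path/to/sample.mp4 -f null -
     ```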
  2. FFmpeg 4 with NVIDIA Encoding and Decoding support | TalOrg - interesting, I didn't find this on my travels, but I notice in the dockerfile: `add-apt-repository ppa:jonathonf/ffmpeg-4`, so maybe just the config needs changing, plus the extra ffmpeg headers for NVIDIA. A rough sketch of that build is below.
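     For reference, the rough shape of a from-source ffmpeg build with the NVIDIA pieces enabled - a sketch only, paths and flags will need adjusting to the container:

     ```sh
     # install the NVIDIA codec headers that ffmpeg compiles against
     git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
     cd nv-codec-headers && make install && cd ..

     # configure ffmpeg with the cuda/cuvid/nvenc pieces turned on
     git clone https://git.ffmpeg.org/ffmpeg.git && cd ffmpeg
     ./configure --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvenc \
       --extra-cflags=-I/usr/local/cuda/include \
       --extra-ldflags=-L/usr/local/cuda/lib64
     make -j"$(nproc)" && make install
     ```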
  3. yeah, that should be everything. I think (but I'm not sure) unless you watch nvidia-smi when object detection happens you may not see the task. However, I'm almost convinced I never had to rebuild FFmpeg to use cuda for hwaccels - but now I do? @dlandon is there anything that may have changed on the FFmpeg side?
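     To catch the task, poll the GPU while triggering detection on a camera:

     ```sh
     # refresh nvidia-smi every second; a short-lived process should
     # appear in the process table when object detection fires
     watch -n 1 nvidia-smi
     ```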
  4. check the versions all match up (and the files you have in /opencv are correct). I notice you are using driver 455.38 (so I think that is the Unraid beta build); check the opencv variables in the opencv install script too. I'm assuming, since the driver version is forced by the Unraid build, you'll need to match the CUDA version, like the Plex one using 11.1. My issue is rebuilding FFmpeg to use cuda or cuvid - I have to rebuild it from source with all the libraries (and it's overly complicated).
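     The driver itself tells you which CUDA version to target - the nvidia-smi banner reports the driver version and the highest CUDA version that driver supports:

     ```sh
     # e.g. "Driver Version: 455.38    CUDA Version: 11.1" in the header;
     # the CUDA toolkit you install should not exceed that CUDA version
     nvidia-smi | head -n 4
     ```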
  5. have you looked into this? From the cmake output:

     ```
     -- Failed to find installed gflags CMake configuration, searching for gflags build directories exported with CMake.
     -- Failed to find gflags - Failed to find an installed/exported CMake configuration for gflags, will perform search for installed gflags components.
     -- Failed to find gflags - Could not find gflags include directory, set GFLAGS_INCLUDE_DIR to directory containing gflags/gflags.h
     -- Failed to find glog - Could not find glog include directory, set GLOG_INCLUDE_DIR to directory containing glog/logging.h
     -- Module opencv_sfm disabled because the following dependencies are not found: Eigen Glog/Gflags
     ```
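     If the sfm module matters, those dependencies are all packaged on Ubuntu; a likely fix (assuming an apt-based image) before re-running the build:

     ```sh
     # provide the headers cmake is searching for
     apt-get update
     apt-get install -y libgflags-dev libgoogle-glog-dev libeigen3-dev
     ```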
  6. any guidance would be greatly appreciated - do others compile FFmpeg themselves? (I didn't used to have to.)
  7. Did you download the matching files from nvidia and change the variables I listed above?
  8. @dlandon the issue appears to be that running opencv.sh doesn't seem to install everything. I should (I believe) be able to run `nvcc --version` and get a result, but instead I get "nvcc: command not found". I've attempted this from a fresh docker numerous times. Can you confirm A) if anything changed, and B) what CUDA version is recommended?
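     Worth checking whether the toolkit installed and nvcc is just missing from PATH - a common cause of this exact symptom, assuming the default /usr/local/cuda install location:

     ```sh
     # if the toolkit installed, nvcc lives under the versioned cuda dir
     ls /usr/local/cuda*/bin/nvcc

     # if it is there, the fix is the PATH rather than a reinstall
     export PATH=/usr/local/cuda/bin:$PATH
     nvcc --version
     ```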
  9. @dlandon any reason why `ffmpeg -hwaccels` doesn't show 'cuda' or 'cuvid'? I can't figure out how to enable it - assuming it needs building from source (although last time I built the container from scratch it showed up without any additional work):

     ```
     root@e5b91be90299:/# ffmpeg -hwaccels
     ffmpeg version 4.3.1-0york0~18.04 Copyright (c) 2000-2020 the FFmpeg developers
       built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
       configuration: --prefix=/usr --extra-version='0york0~18.04' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libzimg --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
       libavutil      56. 51.100 / 56. 51.100
       libavcodec     58. 91.100 / 58. 91.100
       libavformat    58. 45.100 / 58. 45.100
       libavdevice    58. 10.100 / 58. 10.100
       libavfilter     7. 85.100 /  7. 85.100
       libavresample   4.  0.  0 /  4.  0.  0
       libswscale      5.  7.100 /  5.  7.100
       libswresample   3.  7.100 /  3.  7.100
       libpostproc    55.  7.100 / 55.  7.100
     Hardware acceleration methods:
     vdpau
     vaapi
     drm
     opencl
     ```
  10. Did you use the right versions of the files? It looks like CUDA 11 vs 10.2. Check the top of opencv.sh and notice the file names:

      ```
      CUDNN_RUN=libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
      CUDNN_DEV=libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
      CUDA_TOOL=cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
      CUDA_PIN=cuda-ubuntu1804.pin
      CUDA_KEY=/var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
      CUDA_VER=10.2
      ```

      Are your file names the same? Use the archived files list on the NVIDIA site to locate them - I'm assuming I'm not allowed to copy them here due to licensing etc. If you downloaded newer versions, did you change the filenames? (I have no idea what is/isn't supported; all I know is it's a PITA if you mismatch the GPU driver, CUDA and cuDNN stuff.)
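      A quick way to catch a mismatch before kicking off the long build - a sketch that assumes the .deb files sit next to opencv.sh in /config/opencv, as described elsewhere in this thread:

      ```sh
      # the filenames the script expects...
      grep -E '^(CUDNN_RUN|CUDNN_DEV|CUDA_TOOL|CUDA_PIN)=' /config/opencv/opencv.sh

      # ...versus the files actually downloaded; every name should match
      ls -1 /config/opencv/
      ```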
  11. I spotted it in the logs on screen when running the opencv.sh script.
  12. I compiled it yesterday and noticed there was a dependency issue with libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb; `apt --fix-broken install` fixed it.
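      For anyone hitting the same thing, this is the usual dpkg/apt dance - a sketch only; in practice opencv.sh does the dpkg step for you, and the commands assume you run them from the folder containing the .deb:

      ```sh
      # dpkg installs the package but cannot resolve dependencies itself
      dpkg -i libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb

      # apt then pulls in whatever dpkg reported as missing
      apt --fix-broken install
      ```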
  13. I have compiled OpenCV with GPU support - everything completes successfully - EXCEPT `ffmpeg -hwaccels` does not show 'cuvid' or 'cuda', which seems abnormal? Any way to enable it?
  14. it compiles OpenCV without GPU support first; then you need to run /config/opencv/opencv.sh once you have added the CUDA and cuDNN files to the folder (it's mentioned in the first post, and the instructions are in the first few lines of opencv.sh).
  15. Did you add `--runtime=nvidia` to the extra parameters?
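      On a plain docker run, the Unraid extra parameters map to something like this - a sketch; the image name and env var values are illustrative:

      ```sh
      # --runtime=nvidia plus the NVIDIA env vars expose the GPU to the container
      docker run -d \
        --runtime=nvidia \
        -e NVIDIA_VISIBLE_DEVICES=all \
        -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
        your/zoneminder-image
      ```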
  16. just rebuilt the docker with nvidia (cuDNN & CUDA etc.). However `ffmpeg -hwaccels` doesn't show cuda or cuvid as enabled? Do I need to remove ffmpeg and recompile now? That didn't seem like a needed step before.
  17. Nice work - just followed your instructions and it's working! Thanks for sharing!
  18. Yeah, I see a red bar when motion is detected, but you have to mouse over the stream to see it. Can you not set up a webhook, or email, or something to validate motion is being detected and the action is being triggered?
  19. have you mapped `/opt/shinobi` to a volume? To get things to work (and conf.json, pm2Shinobi.yml etc. to remain persistent) there was a bit of a faff - the commands are sketched below:
      1. Start the container without the persistent folder (if you map it first, it will fail looking for run.sh).
      2. `docker cp shinobipro:/opt/shinobi /mnt/user/appdata/shinobi` (check the files are in this folder, or note if they copy to ...shinobi/shinobi).
      3. Now map /mnt/user/appdata/shinobipro:/opt/shinobi.
      I can then recreate the docker without too many issues. I have no idea what the above issue is, but most of my errors cleared out by recreating the container and installing face again.
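      The same steps as commands, for copy-paste - a sketch assuming the container is named shinobipro and appdata lives under /mnt/user/appdata as above:

      ```sh
      # start once WITHOUT the /opt/shinobi volume mapping, then:

      # copy the container's working tree out to the host
      docker cp shinobipro:/opt/shinobi /mnt/user/appdata/shinobi

      # verify run.sh landed where expected (watch for a nested shinobi/shinobi)
      ls /mnt/user/appdata/shinobi/run.sh

      # finally recreate the container with the host folder mapped to /opt/shinobi
      ```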
  20. so the trick was to edit `pm2Shinobi.yml`. The plugins were persistent but were just not started as part of Shinobi starting up. I'm using TensorFlow and Face; my example below:

      ```yaml
      apps:
        - script       : '/opt/shinobi/camera.js'
          name         : 'Camera-App'
          kill_timeout : 5000
        - script       : '/opt/shinobi/cron.js'
          name         : 'Cron-App'
          kill_timeout : 5000
        - script       : '/opt/shinobi/plugins/tensorflow/shinobi-tensorflow.js'
          name         : 'Tensorflow-Plugin'
          kill_timeout : 5000
        - script       : '/opt/shinobi/plugins/face/shinobi-face.js'
          name         : 'Face-Plugin'
          kill_timeout : 5000
      ```
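      After editing the file, the new app list has to be loaded into pm2 - either restart the container (which launches pm2 from this file) or, as a sketch assuming pm2 is on PATH inside the container, reload it in place:

      ```sh
      # start any apps from the edited config that are not running yet,
      # then confirm all four show up in the process list
      pm2 start /opt/shinobi/pm2Shinobi.yml
      pm2 ls
      ```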
  21. weird - moeiscool on the Discord (lead dev) mentioned that yolo is CPU based; it looks like the docker this uses (MiGoller's) has yolo enabled for GPU. I ended up using tensorflow and face.
  22. how did you get yolo using the GPU? I was under the impression that only tensorflow and face use the GPU, but yolo does not?
  23. still doesn't solve it - none of the plugins survive a docker reboot? They install fine, but `pm2 save` doesn't survive the docker reboot, so coming back up just loads yolo again. Any ideas?
  24. solved it - I had to docker cp the contents to the intended mapped folder first (so run.sh was present), THEN map the folder.