[support] dlandon - Zoneminder 1.36



9 minutes ago, dlandon said:

If the container command:


/usr/bin/nvidia-smi

does not show the Nvidia GPU, then you need to find out why and get it resolved. Do you have the Unraid Nvidia plugin installed properly for your GPU?

Yes - my Plex container works fine. I am going back through all the steps again as I figured I screwed up something somewhere. Currently running ./opencv.sh again and hoping for a different result this time (changed the deb file versions for this process).

Just now, repomanz said:

Yes - my Plex container works fine. I am going back through all the steps again as I figured I screwed up something somewhere. Currently running ./opencv.sh again and hoping for a different result this time (changed the deb file versions for this process).

I compiled it yesterday and I noticed there was a dependency issue with libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb

 

apt --fix-broken install 

 

fixed it

2 hours ago, repomanz said:

Quick question about the GPU-enabled container. I changed CPU pinning on the container today and it appears to have started an install again. Do I need to run the ./opencv.sh script again?

Set the contents of the opencv_ok file to 'yes' and opencv.sh will be run whenever it is necessary, so you don't need to be concerned about whether it needs to be re-compiled.
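For example, from the Unraid command line (the path below is only an example based on a typical appdata mapping - point it at wherever your opencv_ok file actually lives in the container's /config share):

echo 'yes' > /mnt/user/appdata/Zoneminder/opencv_ok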


During restarting, this ZM docker performs updates every time. Would it be possible to avoid these updates?

eg :

      ...

      Get:58 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 libmysofa0 amd64 0.6~dfsg0-3+deb10u1build1 [38.5 kB]

      ....

      Get:62 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 python-urllib3 all 1.22-1ubuntu0.18.04.2 [86.0 kB]

      Preconfiguring packages ...

      ....

On the other hand, would it be possible to perform this kind of update only once a month or something like that? Thanks in advance.

55 minutes ago, dje2006dje said:

During restarting, this ZM docker performs updates every time. Would it be possible to avoid these updates?

No. Because ZM is exposed to the Internet, it is imperative that security updates be applied when available. They are normally very quick.


Hi @dlandon, would you kindly help me debug my opencv Zoneminder install? I have attached the zip file as instructed in the logs. I cannot get cuDNN to show as enabled in the CMake output. I have been working on this for a solid day and am relatively new to ZM, and would greatly appreciate any help getting GPU support enabled. Thank you in advance!

 

    GTK+:                        NO
--     VTK support:                 NO
-- 
--   Media I/O: 
--     ZLib:                        build (ver 1.2.11)
--     JPEG:                        libjpeg-turbo (ver 2.0.2-62)
--     WEBP:                        build (ver encoder: 0x020e)
--     PNG:                         build (ver 1.6.37)
--     TIFF:                        build (ver 42 - 4.0.10)
--     JPEG 2000:                   build (ver 1.900.1)
--     OpenEXR:                     build (ver 2.3.0)
--     HDR:                         YES
--     SUNRASTER:                   YES
--     PXM:                         YES
--     PFM:                         YES
-- 
--   Video I/O:
--     DC1394:                      NO
--     FFMPEG:                      NO
--       avcodec:                   NO
--       avformat:                  NO
--       avutil:                    NO
--       swscale:                   NO
--       avresample:                NO
--     GStreamer:                   NO
--     v4l/v4l2:                    YES (linux/videodev2.h)
-- 
--   Parallel framework:            pthreads
-- 
--   Trace:                         YES (with Intel ITT)
-- 
--   Other third-party libraries:
--     Intel IPP:                   2019.0.0 Gold [2019.0.0]
--            at:                   /root/opencv/build/3rdparty/ippicv/ippicv_lnx/icv
--     Intel IPP IW:                sources (2019.0.0)
--               at:                /root/opencv/build/3rdparty/ippicv/ippicv_lnx/iw
--     Lapack:                      NO
--     Eigen:                       NO
--     Custom HAL:                  NO
--     Protobuf:                    build (3.5.1)
-- 
--   NVIDIA CUDA:                   YES (ver 11.0, CUFFT CUBLAS FAST_MATH)
--     NVIDIA GPU arch:             30 35 37 50 52 60 61 70 75
--     NVIDIA PTX archs:
-- 
--   cuDNN:                         NO
-- 
--   OpenCL:                        YES (no extra features)
--     Include path:                /root/opencv/3rdparty/include/opencl/1.2
--     Link libraries:              Dynamic load
-- 
--   Python (for build):            /usr/bin/python3
-- 
--   Java:                          
--     ant:                         NO
--     JNI:                         NO
--     Java wrappers:               NO
--     Java tests:                  NO
-- 
--   Install to:                    /usr/local
-- -----------------------------------------------------------------
-- 
-- Configuring incomplete, errors occurred!
See also "/root/opencv/build/CMakeFiles/CMakeOutput.log".
See also "/root/opencv/build/CMakeFiles/CMakeError.log".

opencv.zip

12 minutes ago, Madman2012 said:

Hi @dlandon, would you kindly help me debug my opencv Zoneminder install? I have attached the zip file as instructed in the logs. I cannot get cuDNN to show as enabled in the CMake output. I have been working on this for a solid day and am relatively new to ZM, and would greatly appreciate any help getting GPU support enabled. Thank you in advance!

Did you use the right versions of the files? It looks like CUDA 11 vs 10.2.
Check the top of opencv.sh and note the file names:

CUDNN_RUN=libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
CUDNN_DEV=libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
CUDA_TOOL=cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
CUDA_PIN=cuda-ubuntu1804.pin
CUDA_KEY=/var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
CUDA_VER=10.2

Are your file names the same? Use the archived files list on the Nvidia site to locate them - I'm assuming I'm not allowed to copy them here due to licensing, etc.
If you downloaded newer versions, did you change the filenames? (I have no idea what is/isn't supported - all I know is it's a PITA if you mismatch the GPU driver, CUDA and cuDNN stuff.)
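A quick way to compare the two (rough sketch - the paths are just where I'd expect things in a typical appdata mapping, so adjust them to wherever you keep opencv.sh and the .deb files):

# what the driver on the host side reports
nvidia-smi | grep "CUDA Version"
# what the script expects (the variable names come from the top of opencv.sh)
grep -E '^(CUDNN_|CUDA_)' /mnt/user/appdata/Zoneminder/opencv.sh
# the .deb files you actually dropped in place
ls -l /mnt/user/appdata/Zoneminder/*.deb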


@dlandon any reason why

ffmpeg -hwaccels

doesn't show 'cuda' or 'cuvid'? I can't figure out how to enable it - I'm assuming it needs building from source (although last time I built the container from scratch it showed up without any additional work).

 

root@e5b91be90299:/# ffmpeg -hwaccels
ffmpeg version 4.3.1-0york0~18.04 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version='0york0~18.04' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libzimg --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Hardware acceleration methods:
vdpau
vaapi
drm
opencl
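
For what it's worth, grepping the build configuration comes back empty for me, so the packaged ffmpeg doesn't look like it was built with any NVIDIA support at all (just a rough check - the pattern below is my guess at the relevant flags):

ffmpeg -hide_banner -buildconf | grep -Ei 'cuda|cuvid|nvenc|nvdec|npp'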

 

On 11/3/2020 at 1:31 PM, Jaburges said:

@dlandon any reason why


ffmpeg -hwaccels

doesn't show 'cuda' or 'cuvid'? I can't figure out how to enable it - I'm assuming it needs building from source (although last time I built the container from scratch it showed up without any additional work).

 


root@e5b91be90299:/# ffmpeg -hwaccels
ffmpeg version 4.3.1-0york0~18.04 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version='0york0~18.04' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libzimg --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Hardware acceleration methods:
vdpau
vaapi
drm
opencl

 

@dlandon the issue appears to be that running opencv.sh doesn't seem to install everything.
I should be able to (I believe) run

nvcc --version

and produce a result, but instead it returns "nvcc: command not found"

I've attempted this from a fresh docker numerous times. Can you confirm:

A) if anything changed
B) what CUDA version is recommended
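
The only other thing I've tried is checking whether the toolkit actually landed anywhere and putting it on PATH by hand (a guess based on the usual CUDA install location, not anything from the container docs):

# does the toolkit exist anywhere under /usr/local?
ls -d /usr/local/cuda*/bin 2>/dev/null
# if it does, expose it for this shell and retest
export PATH=/usr/local/cuda/bin:$PATH
nvcc --version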

On 11/3/2020 at 4:27 PM, Jaburges said:

Did you use the right versions of the files? It looks like CUDA 11 vs 10.2.
Check the top of opencv.sh and note the file names:


CUDNN_RUN=libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
CUDNN_DEV=libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
CUDA_TOOL=cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
CUDA_PIN=cuda-ubuntu1804.pin
CUDA_KEY=/var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
CUDA_VER=10.2

Are your file names the same? Use the archived files list on the Nvidia site to locate them - I'm assuming I'm not allowed to copy them here due to licensing, etc.
If you downloaded newer versions, did you change the filenames? (I have no idea what is/isn't supported - all I know is it's a PITA if you mismatch the GPU driver, CUDA and cuDNN stuff.)

I had to use CUDA 11 since I am on Unraid beta 30 and that is what nvidia-smi reported (below); please let me know if that is not supported. It looks like I have the correct versions of the files in the script variables. I can get CUDA enabled, but cuDNN does not come on like before.

 

root@d39bb87ce369:/# nvidia-smi
Sat Nov  7 10:03:47 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P2200        Off  | 00000000:04:00.0 Off |                  N/A |
| 55%   50C    P0    22W /  75W |    978MiB /  5059MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
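
In case it helps narrow things down, these are the checks I've been running inside the container to confirm the cuDNN packages actually installed (rough checks on my part, not from any official doc):

# are the cuDNN runtime and dev packages installed?
dpkg -l | grep -i cudnn
# OpenCV's CMake wants to find the cuDNN header somewhere on the include path
ls /usr/include/cudnn*.h /usr/include/x86_64-linux-gnu/cudnn*.h 2>/dev/null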

On 11/6/2020 at 3:51 PM, Jaburges said:

@dlandon the issue appears to be that running opencv.sh doesn't seem to install everything.
I should be able to (I believe) run


nvcc --version

and produce a result, but instead it returns "nvcc: command not found"

I've attempted this from a fresh docker numerous times. Can you confirm:

A) if anything changed
B) what CUDA version is recommended

Any guidance would be greatly appreciated - do others compile FFmpeg themselves? (I didn't use to have to.)

13 hours ago, Madman2012 said:

Yes, I downloaded the right versions of the files after registering on Nvidia's site, changed the variables in the script to the correct names, and placed the files in the correct directory.



Have you looked into this:
Failed to find installed gflags CMake configuration, searching for gflags build directories exported with CMake.
-- Failed to find gflags - Failed to find an installed/exported CMake configuration for gflags, will perform search for installed gflags components.
-- Failed to find gflags - Could not find gflags include directory, set GFLAGS_INCLUDE_DIR to directory containing gflags/gflags.h
-- Failed to find glog - Could not find glog include directory, set GLOG_INCLUDE_DIR to directory containing glog/logging.h
-- Module opencv_sfm disabled because the following dependencies are not found: Eigen Glog/Gflags
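
If those are part of the problem, the dev packages can be installed inside the container before re-running the script - something like this (the package names are the standard Ubuntu 18.04 ones, so double-check them):

apt-get update
# development packages for the gflags/glog/Eigen dependencies CMake is complaining about
apt-get install -y libgflags-dev libgoogle-glog-dev libeigen3-dev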

  • 2 weeks later...
1 hour ago, wojcioo said:

Hello.

I've installed Zoneminder on my Unraid, but I've changed the default Data Path and now it points to different storage. After this operation, the web UI doesn't seem to work correctly. Some icons/titles/names are missing or show up wrong. How can it be corrected?


Show how you mapped the config and data paths.

3 minutes ago, wojcioo said:

Synology is my SMB share....

[screenshot of the container path mappings attached]

Why are you mapping to a remote share?  That's not a good idea.  If the remote server goes offline, Zoneminder will not have access to the path and will have all kinds of issues.  Map it to a local disk and set up a user script to copy it to the Synology on a schedule.
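
Something like this in the User Scripts plugin would do it (a minimal sketch - the source and destination are examples only, swap in your own local data path and wherever you mount the Synology share):

#!/bin/bash
# copy the local Zoneminder data to the Synology share on whatever schedule the script runs
rsync -a --delete /mnt/user/appdata/Zoneminder/data/ /mnt/remotes/SYNOLOGY/zoneminder/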

  • 2 weeks later...
On 11/6/2020 at 6:51 PM, Jaburges said:

@dlandon the issue appears to be that running opencv.sh doesn't seem to install everything.
I should be able to (I believe) run


nvcc --version

and produce a result, but instead it returns "nvcc: command not found"

I've attempted this from a fresh docker numerous times. Can you confirm:

A) if anything changed
B) what CUDA version is recommended

I am having the same issue.  I am running UnRAID 6.9b35.  I started my docker with almost all the same parameters I use to start my plex container (where I do have CUDA support).  

 

docker run
    -d
    --name='Zoneminder'
    --net='bridge'
    --privileged=true
    -e TZ="America/New_York"
    -e HOST_OS="Unraid"
    -e 'PUID'='99'
    -e 'PGID'='100'
    -e 'INSTALL_HOOK'='1'
    -e 'INSTALL_TINY_YOLOV3'='0'
    -e 'INSTALL_YOLOV3'='0'
    -e 'INSTALL_TINY_YOLOV4'='0'
    -e 'INSTALL_YOLOV4'='1'
    -e 'INSTALL_FACE'='0'
    -e 'NVIDIA_VISIBLE_DEVICES'='GPU-b94fe274-0c08-930f-c3c3-7acbb123d8f3'
    -e 'SHMEM'='50%'
    -p '18443:443/tcp'
    -p '19000:9000/tcp'
    -v '/mnt/user/appdata/Zoneminder':'/config':'rw'
    -v '/mnt/user/appdata/Zoneminder/data':'/var/cache/zoneminder':'rw'
    --log-opt max-size=50m
    --log-opt max-file=1
    --runtime=nvidia
    --gpus=1
    'dlandon/zoneminder'

 

Inside my Zoneminder container:

 

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38       Driver Version: 455.38       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:2B:00.0 Off |                  N/A |
|  0%   40C    P0    N/A /  72W |      0MiB /  4037MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

 

And inside my plex container

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38       Driver Version: 455.38       CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:2B:00.0 Off |                  N/A |
|  0%   40C    P0    N/A /  72W |      0MiB /  4037MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

 

I can't quite figure out what is different between the Plex container and the Zoneminder container, and why the Zoneminder one does not have CUDA.

59 minutes ago, sivart said:

 

I can't quite figure out what is different between the Plex container and the Zoneminder container, and why the Zoneminder one does not have CUDA.

Check that the versions all match up (and that the files you have in /opencv are correct).

I notice you are using driver 455.38 (so I think that is the Unraid beta build).

 

Check the opencv variables in the opencv install script too; I'm assuming, since the driver version is forced by the Unraid build, you'll need to match the CUDA version like the Plex one and use 11.1.

 

My issue is rebuilding FFmpeg to use cuda or cuvid - I have to rebuild it from source with all the libraries (and it's overly complicated).
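
One other thing that might be worth comparing against your working Plex template (a guess on my part, not something I've confirmed on this container): whether NVIDIA_DRIVER_CAPABILITIES is being passed. If it doesn't include the compute libraries, nvidia-smi inside the container tends to show 'CUDA Version: N/A' even though the GPU itself is visible:

# extra parameter / env to try on the Zoneminder container (compare with the Plex template)
-e NVIDIA_DRIVER_CAPABILITIES=all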

 
