I have deployed this using docker-compose, and all containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8ecfd2d3a47 linuxserver/diskover "/init" 22 minutes ago Up 22 minutes 443/tcp, 0.0.0.0:9181->9181/tcp, 8000/tcp, 0.0.0.0:9999->9999/tcp, 0.0.0.0:8090->80/tcp diskover
b93950dc9ab5 docker.elastic.co/elasticsearch/elasticsearch:5.6.9 "/bin/bash bin/es-do…" 22 minutes ago Up 22 minutes 9200/tcp, 9300/tcp elasticsearch
2727c45e4df2 redis:alpine "docker-entrypoint.s…" 22 minutes ago Up 22 minutes 6379/tcp redis
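For reference, my services are laid out roughly like this (a trimmed sketch, not the full file; the environment variable names are my reading of the linuxserver/diskover docs, so double-check them against the image README):

```yaml
version: '3'
services:
  diskover:
    image: linuxserver/diskover
    environment:
      - REDIS_HOST=redis            # assumed env var names -- verify
      - ES_HOST=elasticsearch       # against the linuxserver docs
    depends_on:
      - elasticsearch
      - redis
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
  redis:
    image: redis:alpine
```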
However I have no indexing as the diskover container keeps throwing:
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
(the same line repeats continuously)
I can only find references to qumulo in a very old build of diskover, and then only in the GitHub code. I'm unsure how to get past this. When I try to access the web UI I also receive: "No diskover indices found in Elasticsearch. Please run a crawl and come back." That makes sense if the crawler never truly starts up. Since qumulo appears to be an old indexing method that is no longer supported, I don't understand why I'm getting this error.
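As a possible workaround (just a sketch, not an official fix), the error message itself suggests copying the missing section over from diskover.cfg.sample. Something like the helper below could do that; the paths in the comment are the container's config mount as I understand it, and the section contents are whatever the sample ships:

```python
# Sketch: append a missing section from the sample config to the live
# config. Paths and section names are assumptions, not diskover docs.
import configparser

def copy_section(name, sample_path, target_path):
    """Copy section [name] from sample_path into target_path if absent."""
    sample = configparser.ConfigParser()
    sample.read(sample_path)
    target = configparser.ConfigParser()
    target.read(target_path)
    if not target.has_section(name):
        target[name] = dict(sample[name])
        with open(target_path, "w") as f:
            target.write(f)

# e.g., from inside the container:
# copy_section("qumulo", "/config/diskover.cfg.sample", "/config/diskover.cfg")
```

(Note that configparser rewrites the whole file and drops comments, so copying the section by hand in an editor may be cleaner.)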
elasticsearch:5.6.9 is supported and running: "[2019-06-18T21:53:07,608][INFO ][o.e.n.Node ] [6AoRXIq] started"
redis is also running: "1:M 18 Jun 2019 21:52:58.921 * Ready to accept connections"
I tried adjusting the abc user's crontab to run, hoping that would generate an index, but that did not help either.
The crontab entry is:
0 18 * * * /app/dispatcher.sh >> /config/log/diskover/dispatcher.log 2>&1
Manually running the indexer returns:
root@fileserver:/websites/diskover_config# docker exec -it diskover /app/dispatcher.sh
killing existing workers...
emptying current redis queues...
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 493, in connect
sock = self._connect()
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 550, in _connect
raise err
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 538, in _connect
sock.connect(socket_address)
OSError: [Errno 99] Address not available
During handling of the above exception, another exception occurred:
...
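Errno 99 makes me think the dispatcher can't reach redis at whatever address it resolves to from inside the diskover container. A quick probe like the one below could confirm that (the hostname "redis" is my assumption from the compose service name; this is just a diagnostic sketch, not part of diskover):

```python
# Diagnostic sketch: check whether a TCP connection to the redis
# container succeeds from inside the diskover container.
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g., from inside the diskover container:
# print(can_connect("redis", 6379))
```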
Any suggestions?
ERIC