[Support] Linuxserver.io - diskover



 

Application Name: Diskover
Application Site: https://shirosaidev.github.io/diskover/
Docker Hub: https://hub.docker.com/r/linuxserver/diskover/
Github: https://github.com/linuxserver/docker-diskover

 

Please post any questions/issues relating to this docker you have in this thread.

If you are not using Unraid (and you should be!) then please do not post here; rather, use the linuxserver.io forum for support.


21 minutes ago, glave said:

The WebGUI parameter isn't respecting that I changed it from the default. I changed it from 8080 to 8082, but when I launch WebGUI from the docker icon it still takes me to 8080.

 

Seeing the same thing. Did you get anything to load when you manually went to 8082? I just get a white screen in Safari; Chrome gives me a "page not working" 500 error.


Did you set up elasticsearch? It is mandatory to get this working. You should also set up redis.

You also need to read the Readme on GitHub for the specific setup needed on Unraid to get it working. You have to use the 5.6.x branch of elasticsearch.

 

You will not get a webgui until elasticsearch is working. 
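For reference, getting Elasticsearch 5.6.x running under Docker can look something like this (the 5.6.9 tag, container name, data path, and memory limits here are examples matching ones used later in this thread, not the only valid values):

```shell
# Pull a 5.6.x release of the official image
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.9

# Minimal single-node container; map the data dir so the index survives recreation
docker create --name=elasticsearch \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xms512M -Xmx512M" \
  -p 9200:9200/tcp \
  -v /mnt/cache/appdata/elasticsearch5/data:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:5.6.9
```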

5 hours ago, saarg said:

Did you set up elasticsearch? It is mandatory to get this working. You should also set up redis.

You also need to read the Readme on GitHub for the specific setup needed on Unraid to get it working. You have to use the 5.6.x branch of elasticsearch.

 

You will not get a webgui until elasticsearch is working. 

Thanks for the info! I was looking at the GitHub site yesterday to find info but was also doing a million other things and missed it.

 

Sorry about that and thanks!


So I'm not getting a response on 8080. Gateway timeout.

On :9181 I get the RQ dashboard (no workers or jobs listed)

On :9200 I get JSON from the Elasticsearch:

{
  "name": "XDTClyG",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "rzwTHQOTQNuM32hC1wReCg",
  "version": {
    "number": "5.6.3",
    "build_hash": "1a2f265",
    "build_date": "2017-10-06T20:33:39.012Z",
    "build_snapshot": false,
    "lucene_version": "6.6.1"
  },
  "tagline": "You Know, for Search"
}

The unRAID Docker logs for elasticsearch have two WARN about memory allocation not being big enough:

"[2018-11-22T12:43:29,079][WARN ][o.e.b.BootstrapChecks ] [XDTClyG] max file descriptors [40960] for elasticsearch process is too low, increase to at least [65536]"

"[2018-11-22T12:43:29,079][WARN ][o.e.b.BootstrapChecks ] [XDTClyG] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]"

 

Not sure how to change that, but it's a warn so I'm kinda ignoring it.

The log for Diskover shows no errors or warnings, other than about dev on production.

I'm not sure where else to look for issues. I have checked the logs; Redis gives me a terminal, and although I have no idea what it does or how to check it, it's alive. Elasticsearch is alive, and all ports and addresses (direct IP) are correct. Still, all I get is a gateway timeout. I can't find any mention of Unraid on the Diskover GitHub page. Please help.

2 hours ago, OFark said:

So I'm not getting a response on 8080. Gateway timeout.

On :9181 I get the RQ dashboard (no workers or jobs listed)

On :9200 I get JSON from the Elasticsearch:


{
  "name": "XDTClyG",
  "cluster_name": "docker-cluster",
  "cluster_uuid": "rzwTHQOTQNuM32hC1wReCg",
  "version": {
    "number": "5.6.3",
    "build_hash": "1a2f265",
    "build_date": "2017-10-06T20:33:39.012Z",
    "build_snapshot": false,
    "lucene_version": "6.6.1"
  },
  "tagline": "You Know, for Search"
}

The unRAID Docker logs for elasticsearch have two WARN about memory allocation not being big enough:

"[2018-11-22T12:43:29,079][WARN ][o.e.b.BootstrapChecks ] [XDTClyG] max file descriptors [40960] for elasticsearch process is too low, increase to at least [65536]"

"[2018-11-22T12:43:29,079][WARN ][o.e.b.BootstrapChecks ] [XDTClyG] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]"

 

Not sure how to change that, but it's a warn so I'm kinda ignoring it.

The log for Diskover shows no errors or warnings, other than about dev on production.

I'm not sure where else to look for issues. I have checked the logs; Redis gives me a terminal, and although I have no idea what it does or how to check it, it's alive. Elasticsearch is alive, and all ports and addresses (direct IP) are correct. Still, all I get is a gateway timeout. I can't find any mention of Unraid on the Diskover GitHub page. Please help.

 

Add this in the extra parameters field for elasticsearch and restart the diskover container. Might be a little slow on the first run. 

You did add a folder to scan for files in the diskover template? The /data mount. 

 

Looks like you also didn't read the Readme, which meant you didn't follow the "Setting up the application" part. You need to do the vm.max_map_count setting on Unraid.
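The vm.max_map_count fix is a host-level sysctl, run on the Unraid box itself rather than in a container. A minimal sketch (the /boot/config/go line assumes the usual Unraid convention for boot-time tweaks):

```shell
# Raise the limit on the running system (takes effect immediately)
sysctl -w vm.max_map_count=262144

# Make it survive a reboot by appending it to Unraid's boot script
echo "sysctl -w vm.max_map_count=262144" >> /boot/config/go
```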

Edited by saarg
5 minutes ago, saarg said:

 

Add this in the extra parameters field for elasticsearch and restart the diskover container. Might be a little slow on the first run. 

You did add a folder to scan for files in the diskover template? The /data mount. 

 

Looks like you also didn't read the Readme, which meant you didn't follow the "Setting up the application" part. You need to do the vm.max_map_count setting on Unraid.

 

Forgive me I thought I had read the readme. When people say readme and Github I THINK they are referring to the text that Github displays by default. Is there another readme file? Thanks for the info on the vm.max_map_count.

9 minutes ago, OFark said:

 

Forgive me I thought I had read the readme. When people say readme and Github I THINK they are referring to the text that Github displays by default. Is there another readme file? Thanks for the info on the vm.max_map_count.

You click the link in the first post to the GitHub repo and then read the Readme that is displayed. On desktop browsers the full Readme is shown, but on mobile/tablets it's collapsed and you have to expand it.

1 hour ago, saarg said:

You click the link in the first post to the GitHub repo and then read the Readme that is displayed. On desktop browsers the full Readme is shown, but on mobile/tablets it's collapsed and you have to expand it.

 

Ok, so I did think the Readme was the right thing; I'm not sure what link I followed, but the page I remember was black, with screenshots. However, I have now seen that Readme, and I've added the sysctl -w vm.max_map_count=262144 option, and that WARN has gone. But I'm still getting a 504 bad gateway on 8080. The Diskover log never mentions port 8080 or 80 (I have checked the port is mapped 8080 -> 80); it mentions 9999 and 9181 but not 80 or 8080.

 

1 hour ago, OFark said:

 

Ok, so I did think the Readme was the right thing; I'm not sure what link I followed, but the page I remember was black, with screenshots. However, I have now seen that Readme, and I've added the sysctl -w vm.max_map_count=262144 option, and that WARN has gone. But I'm still getting a 504 bad gateway on 8080. The Diskover log never mentions port 8080 or 80 (I have checked the port is mapped 8080 -> 80); it mentions 9999 and 9181 but not 80 or 8080.

 

So both warnings in the elasticsearch log are now gone?

 

I see you use 5.6.3. In my tests I used 5.6.9. Try that one and see if it works. 

1 minute ago, saarg said:

So both warnings in the elasticsearch log are now gone?

 

I see you use 5.6.3. In my tests I used 5.6.9. Try that one and see if it works. 

No, the other error is still there: [2018-11-22T17:13:33,802][WARN ][o.e.b.BootstrapChecks ] [XDTClyG] max file descriptors [40960] for elasticsearch process is too low, increase to at least [65536]. I tried what I thought might change this value, based on the other one in the readme; alas, it did not work.

 

I shall try upgrading elasticsearch now.

1 hour ago, OFark said:

No, the other error is still there: [2018-11-22T17:13:33,802][WARN ][o.e.b.BootstrapChecks ] [XDTClyG] max file descriptors [40960] for elasticsearch process is too low, increase to at least [65536]. I tried what I thought might change this value, based on the other one in the readme; alas, it did not work.

 

I shall try upgrading elasticsearch now.

My bad. I just wrote, add this to the extra parameters in the elasticsearch template and forgot to paste it 😆

 

So add this: --ulimit nofile=262144:262144
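On Unraid, the "Extra Parameters" field is passed straight through to docker run, so on a plain Docker CLI the equivalent would look something like this (image tag and the other flags are illustrative, following the examples in this thread):

```shell
docker run -d --name=elasticsearch \
  --ulimit nofile=262144:262144 \
  -e "discovery.type=single-node" \
  -p 9200:9200/tcp \
  docker.elastic.co/elasticsearch/elasticsearch:5.6.9
```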

3 hours ago, saarg said:

My bad. I just wrote, add this to the extra parameters in the elasticsearch template and forgot to paste it 😆

 

So add this: --ulimit nofile=262144:262144

I was wondering. Still, I've added that now. No more WARNs. Still no page; 504 gateway error still. I've had a look in /mnt/user/appdata/diskover/diskover.cfg and I switched the worker bot logging to 'True', left it pointing to /tmp. Can't see any related logs. (Thanks for your help so far, btw.)


For what it's worth, I have Kibana working just fine, although I don't have any indexes yet, just *. Looks like what Diskover does is add drive data to Elasticsearch, so when I specify an index name prefix "diskover-", that isn't supposed to already exist, is it? I've not done anything with Elasticsearch, not configured it in any way. Kibana can't see a diskover-* index.


Ok, so I've got the index in Elasticsearch. The /mnt/user/appdata/diskover/diskover.cfg wasn't matching the options in the diskover container; basically, the hostnames were names rather than IP addresses. With that fixed, Kibana is now showing data. Not brilliant for files, but I can see data and I can see regular indexing. Diskover web is still showing a 504 bad gateway.


I've found the logging under /mnt/user/appdata/diskover/log/ :)

under nginx/error/log is the following two lines:

2018/11/23 14:13:09 [error] 334#334: *20 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 192.168.1.1, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.51:8080", referrer: "http://192.168.1.51/Docker"

2018/11/23 14:14:09 [error] 334#334: *20 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 192.168.1.1, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.51:8080"

I'm not sure what 127.0.0.1:9000 is pointing to; I don't see any ports open in unRAID for 9000. Maybe that's not an issue.
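For what it's worth, 127.0.0.1:9000 in that nginx error is most likely php-fpm listening inside the diskover container itself (nginx hands PHP requests to it over loopback), which would explain why it never appears as a mapped port in Unraid. A quick sanity check, assuming the container is named diskover and has netstat available in the image:

```shell
# Is anything listening on 9000 inside the container?
docker exec diskover netstat -tlnp | grep 9000

# Which container ports are actually published to the host?
docker port diskover
```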


You have to use the IP or hostname of your elasticsearch/redis install. What is added in the template is just a suggestion.

You should not need to change anything in the diskover config file to get it to index. Please post the docker run command for all containers.

 

 


Does the crawl need to complete before you can access the web-gui? I had it set up outside of docker and never noticed that.

 

Nginx, php, docker log, and dispatcher all show no errors at all, they all look to have completed correctly. However I get "Unable to connect" when loading unraid.local:8080/. Only thing that could be left is the crawler, which has been chugging away for the past several hours.

 

Also, I noticed that when editing the config, changes do not propagate to all the config files. So when I wanted to change an IP, I had to wipe my config (or manually edit all the files, I guess).

 

Edit: Something is wrong with the webgui port config. I left it at default. It says in the description that it's mapping port 8080 to 80, but it actually was doing 8080 to 8080. I moved the interface to br0 and set a static ip and opened port 80, that fixed it.

 

Edit2: I just re-pulled and it seems to be fixed now.

Edited by MacroPower
6 hours ago, MacroPower said:

Does the crawl need to complete before you can access the web-gui? I had it set up outside of docker and never noticed that.

 

Nginx, php, docker log, and dispatcher all show no errors at all, they all look to have completed correctly. However I get "Unable to connect" when loading unraid.local:8080/. Only thing that could be left is the crawler, which has been chugging away for the past several hours.

 

Also, I noticed that when editing the config, changes do not propagate to all the config files. So when I wanted to change an IP, I had to wipe my config (or manually edit all the files, I guess).

 

Edit: Something is wrong with the webgui port config. I left it at default. It says in the description that it's mapping port 8080 to 80, but it actually was doing 8080 to 8080. I moved the interface to br0 and set a static ip and opened port 80, that fixed it.

 

Edit2: I just re-pulled and it seems to be fixed now.

 

I don't know if the crawl has to be finished or not. The expert on it will have to answer that.

 

Changing IPs in the config file is not a good idea. Set the IPs in the variable in the template. The variables are then input to the correct config files at startup. 

 

I don't see how the web interface port could end up changed just by changing the port in the template. Did you change the port in a config file?

19 hours ago, MacroPower said:

Edit: Something is wrong with the webgui port config. I left it at default. It says in the description that it's mapping port 8080 to 80, but it actually was doing 8080 to 8080. I moved the interface to br0 and set a static ip and opened port 80, that fixed it.

This didn't do anything for me; I still get the 504.

 

Heres how I installed ElasticSearch:

docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.9
docker create --name="Elasticsearch-5.6.9" --net="bridge" -e TZ="Europe/London" -e HOST_OS="unRAID" -e "discovery.type"="single-node" -e "ES_JAVA_OPTS"="-Xms512M -Xmx512M" -p 9200:9200/tcp -v "/mnt/cache/appdata/elasticsearch5/data":"/usr/share/elasticsearch/data":rw --ulimit nofile=262144:262144 25482cbfd71b

Diskover was from CA, modified in this way:

docker run -d --name='diskover' --net='bridge' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'REDIS_HOST'='192.168.1.51' -e 'REDIS_PORT'='6379' -e 'ES_HOST'='192.168.1.51' -e 'ES_PORT'='9200' -e 'ES_USER'='elastic' -e 'ES_PASS'='changeme' -e 'INDEX_NAME'='diskover-' -e 'DISKOVER_OPTS'='' -e 'WORKER_OPTS'='' -e 'RUN_ON_START'='true' -e 'USE_CRON'='false' -e 'PUID'='99' -e 'PGID'='100' -p '9181:9181/tcp' -p '9999:9999/tcp' -p '8080:80/tcp' -v '/mnt/user/appdata/diskover':'/config':'rw' -v '/mnt/user/Stuff/':'/data':'rw' 'linuxserver/diskover'

 


The plot thickens...

I removed Diskover, and the appdata folder and reinstalled here is the same run command:

docker run -d --name='diskover' --net='bridge' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'REDIS_HOST'='192.168.1.51' -e 'REDIS_PORT'='6379' -e 'ES_HOST'='192.168.1.51' -e 'ES_PORT'='9200' -e 'ES_USER'='elastic' -e 'ES_PASS'='changeme' -e 'INDEX_NAME'='diskover-' -e 'DISKOVER_OPTS'='' -e 'WORKER_OPTS'='' -e 'RUN_ON_START'='true' -e 'USE_CRON'='false' -e 'PUID'='99' -e 'PGID'='100' -p '9181:9181/tcp' -p '9999:9999/tcp' -p '8080:80/tcp' -v '/mnt/user/appdata/diskover':'/config':'rw' -v '/mnt/user/Stuff/':'/data':'rw' 'linuxserver/diskover' 

And it works! The Web GUI came up with a page asking me for an index and a second index; the second could be set to none, as it was only for comparison, so I did. As soon as I confirmed "diskover-" and "none" as my indexes I got a 500, and now that's all I get. Despite 3 reinstalls and an appdata cleanup, all I can get is a 500, and the error in the nginx log is:

[error] 355#355: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Elasticsearch\Common\Exceptions\BadRequest400Exception: {"error":{"root_cause":[{"type":"query_shard_exception","reason":"failed to create query: {\n  \"bool\" : {\n    \"must\" : [\n      {\n        \"wildcard\" : {\n          \"path_parent\" : {\n            \"wildcard\" : \"/data*\",\n            \"boost\" : 1.0\n          }\n        }\n      }\n    ],\n    \"filter\" : [\n      {\n        \"range\" : {\n          \"filesize\" : {\n            \"from\" : \"all\",\n            \"to\" : null,\n            \"include_lower\" : true,\n            \"include_upper\" : true,\n            \"boost\" : 1.0\n          }\n        }\n      }\n    ],\n    \"must_not\" : [\n      {\n        \"match\" : {\n          \"dupe_md5\" : {\n            \"query\" : \"\",\n            \"operator\" : \"OR\",\n            \"prefix_length\" : 0,\n            \"max_expansions\" : 50,\n            \"fuzzy_transpositions\" : true,\n            \"lenient\" : false,\n            \"zero_terms_quer" while reading response header from upstream, client: 192.168.1.102, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.51:8080", referrer: "http://192.168.1.51/Docker"

Now, I'm a web developer (not PHP), and I can see that's JSON, but I haven't a clue where to start.
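When a query blows up like that, it can help to inspect (or wipe) the indices directly over the Elasticsearch HTTP API. A sketch, assuming ES is reachable on 192.168.1.51:9200 and the "diskover-" prefix from the template:

```shell
# List all indices with doc counts and health
curl -s 'http://192.168.1.51:9200/_cat/indices?v'

# Delete the diskover indices so a fresh crawl can rebuild them
curl -s -X DELETE 'http://192.168.1.51:9200/diskover-*'
```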

 


So, I deleted all the Elasticsearch data, reinstalled elasticsearch, and redid the chown.

Deleted and reinstalled Diskover

And it works.

 

It seems if you don't put in the right details the first time, you need to clean up and start again. It's like the config files are written to appdata on the first docker run and then not rewritten on config change. It also seems to have messed up the elasticsearch data.

 

Still, hint to anyone working on this:

  1. Install redis and elasticsearch; despite what it says, they are not optional.
  2. chown -R 1000:1000 /mnt/user/appdata/elasticsearch5/
  3. Start redis and elasticsearch.
  4. Set the hosts for these two services when you first install Diskover. If you get it wrong, remove it, clear the appdata folder, and reinstall with the correct settings. Check the port numbers; they should be good from the start, but check.
  5. Check diskover on port 9181; it should show you workers doing stuff. If not, you started Diskover before redis or elasticsearch.
  6. If diskover asks you for indexes one and two, your elasticsearch data is corrupt; delete it and restart elasticsearch and Diskover.
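As a rough shell sketch of the checklist above (container names, the appdata path, and the 1000:1000 owner follow the examples earlier in this thread; adjust them to your setup):

```shell
# 1-3: prepare the data dir and start the backends first
chown -R 1000:1000 /mnt/user/appdata/elasticsearch5/
docker start redis elasticsearch

# 4: if the hosts/ports were wrong on first install, start clean
docker rm -f diskover
rm -rf /mnt/user/appdata/diskover
# ...then reinstall from CA with the correct ES/Redis hosts and ports

# 6: if diskover asks for two indexes, reset the elasticsearch data too
docker stop elasticsearch
rm -rf /mnt/user/appdata/elasticsearch5/data
docker start elasticsearch
```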

I have just added what is written in the github Readme. It is written in the description in CA that elasticsearch needs to be installed.

I will remove optional from the template so it's not confusing anyone.

As for changes not written to the config, I have to talk to the guy who made the container.

 

For point 6, it does ask for 2 indexes the first time you start up, at least for me. I'll ask the author of the container what is normal.


Another issue: CustomTags are lost when you re-install the container. A restart doesn't affect them, but any reconfiguration of the docker container does. Files are still tagged, but the custom tags I've used are not listed in the context dropdown or on the admin page.

