[Support] Linuxserver.io - diskover



1 hour ago, glave said:

Is there an existing elasticsearch or redis container in community apps? I didn't see either of them and was just wondering if they need to be created by hand first.

 

 

You have to manually set up elasticsearch. Redis is in CA. 
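For anyone setting up elasticsearch by hand, the compose file quoted later in this thread translates to roughly this docker run command. The host path, published port, and heap size here are examples only, not values from the template; adjust them for your system:

```shell
# Rough docker run equivalent of the elasticsearch 5.6 compose service
# quoted later in this thread; host path, port mapping, and heap size
# are assumptions -- adjust for your setup.
docker run -d --name=elasticsearch \
  -p 9200:9200 \
  -v /mnt/user/appdata/elasticsearch/data:/usr/share/elasticsearch/data \
  -e bootstrap.memory_lock=true \
  -e "ES_JAVA_OPTS=-Xms2048m -Xmx2048m" \
  --ulimit memlock=-1:-1 \
  docker.elastic.co/elasticsearch/elasticsearch:5.6.9
```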


Here to clear up some answers, sorry for the late start!

 

10 hours ago, OFark said:

Another issue: CustomTags are lost when you re-install the container. A restart doesn't affect them, but any reconfiguration of the docker container does. Files are still tagged, but the custom tags I've used are not listed in the context dropdown or on the admin page.

How are you adding the custom tags? If you add them via the diskover.conf file, they should persist.

 

14 hours ago, OFark said:

6. If diskover asks you for indexes one and two, your elasticsearch data is corrupt; delete it and restart elasticsearch and Diskover.

It will always ask for two, but two are not required; the second is only needed if you want to compare indices with heatmaps, etc.

 

On 11/24/2018 at 6:28 PM, OFark said:

The plot thickens...

I removed Diskover and the appdata folder and reinstalled; here is the same run command:


docker run -d --name='diskover' --net='bridge' \
  -e TZ="Europe/London" -e HOST_OS="Unraid" \
  -e 'REDIS_HOST'='192.168.1.51' -e 'REDIS_PORT'='6379' \
  -e 'ES_HOST'='192.168.1.51' -e 'ES_PORT'='9200' \
  -e 'ES_USER'='elastic' -e 'ES_PASS'='changeme' \
  -e 'INDEX_NAME'='diskover-' -e 'DISKOVER_OPTS'='' -e 'WORKER_OPTS'='' \
  -e 'RUN_ON_START'='true' -e 'USE_CRON'='false' \
  -e 'PUID'='99' -e 'PGID'='100' \
  -p '9181:9181/tcp' -p '9999:9999/tcp' -p '8080:80/tcp' \
  -v '/mnt/user/appdata/diskover':'/config':'rw' \
  -v '/mnt/user/Stuff/':'/data':'rw' \
  'linuxserver/diskover'

 And it works! The Web GUI came up with a page asking me for the index and a second index; the second could be set to none, as it was for comparison, so I did. As soon as I confirmed "diskover-" and "none" to be my indexes I got a 500, and now that's all I get. Despite 3 reinstalls and an Appdata cleanup, all I can get is 500, and the error in the nginx log is:


[error] 355#355: *1 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Elasticsearch\Common\Exceptions\BadRequest400Exception: {"error":{"root_cause":[{"type":"query_shard_exception","reason":"failed to create query: {\n  \"bool\" : {\n    \"must\" : [\n      {\n        \"wildcard\" : {\n          \"path_parent\" : {\n            \"wildcard\" : \"/data*\",\n            \"boost\" : 1.0\n          }\n        }\n      }\n    ],\n    \"filter\" : [\n      {\n        \"range\" : {\n          \"filesize\" : {\n            \"from\" : \"all\",\n            \"to\" : null,\n            \"include_lower\" : true,\n            \"include_upper\" : true,\n            \"boost\" : 1.0\n          }\n        }\n      }\n    ],\n    \"must_not\" : [\n      {\n        \"match\" : {\n          \"dupe_md5\" : {\n            \"query\" : \"\",\n            \"operator\" : \"OR\",\n            \"prefix_length\" : 0,\n            \"max_expansions\" : 50,\n            \"fuzzy_transpositions\" : true,\n            \"lenient\" : false,\n            \"zero_terms_quer" while reading response header from upstream, client: 192.168.1.102, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.51:8080", referrer: "http://192.168.1.51/Docker"

Now, I'm a web developer (not PHP), and I can see that's JSON, but I haven't a clue where to start.

 

I've noticed that you've included all available ENV options in your run command. If you are going to use the defaults, I would HIGHLY suggest simply omitting them from the command entirely. One example is INDEX_NAME: by default it will be timestamped, so you can keep a running history of crawls, but your command will prevent that from happening, overwriting all previous indices with the same name. If you check the README, it lists all ENV variables that are optional.
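For illustration, the run command from the quoted post could be trimmed to something like this. This assumes the dropped values (REDIS_PORT, ES_PORT, ES_USER, ES_PASS, INDEX_NAME, RUN_ON_START, USE_CRON, etc.) all match the README's defaults; double-check the README and keep any value you have actually changed:

```shell
# Same container, with default-valued ENVs omitted so e.g. INDEX_NAME
# falls back to its timestamped default (one index per crawl).
docker run -d --name='diskover' --net='bridge' \
  -e TZ="Europe/London" -e HOST_OS="Unraid" \
  -e 'REDIS_HOST'='192.168.1.51' \
  -e 'ES_HOST'='192.168.1.51' \
  -e 'PUID'='99' -e 'PGID'='100' \
  -p '9181:9181/tcp' -p '9999:9999/tcp' -p '8080:80/tcp' \
  -v '/mnt/user/appdata/diskover':'/config':'rw' \
  -v '/mnt/user/Stuff/':'/data':'rw' \
  'linuxserver/diskover'
```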

 

On 11/23/2018 at 9:16 AM, OFark said:

I've found the logging under /mnt/user/appdata/diskover/log/ :)

under nginx/error/log are the following two lines:


2018/11/23 14:13:09 [error] 334#334: *20 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 192.168.1.1, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.51:8080", referrer: "http://192.168.1.51/Docker"

2018/11/23 14:14:09 [error] 334#334: *20 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 192.168.1.1, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.51:8080"

I'm not sure what 127.0.0.1:9000 is pointing to; I don't see any ports open in unRAID for 9000. Maybe that's not an issue.

9000 here is used by PHP-FPM for running the code. This type of error is usually an indication of an incorrect proxy config.

7 hours ago, hackerman said:

Here to clear up some answers, sorry for the late start!

 

How are you adding the custom tags? If you add them via the diskover.conf file, they should persist.

 

It will always ask for two, but two are not required; the second is only needed if you want to compare indices with heatmaps, etc.

 

I've noticed that you've included all available ENV options in your run command. If you are going to use the defaults, I would HIGHLY suggest simply omitting them from the command entirely. One example is INDEX_NAME: by default it will be timestamped, so you can keep a running history of crawls, but your command will prevent that from happening, overwriting all previous indices with the same name. If you check the README, it lists all ENV variables that are optional.

 

9000 here is used by PHP-FPM for running the code. This type of error is usually an indication of an incorrect proxy config.


The custom tags were created via the context menu for folders and files, and the standard "Custom Tag 1" etc. were removed in the Admin page.

 

When Diskover actually gave me an interface and then a 500, it was after it asked me for the 2 indexes. Subsequently, after a reinstall, it worked and never asked me.

 

That RUN command was copied and pasted from the docker container template in unRAID; I just changed the elasticsearch and redis hosts to the IP addresses. If there is a value that should be left blank, perhaps it should be blank in the template?

 

I'm not sure what caused the PHP-FPM error, I certainly didn't configure a proxy, I don't have one.

On 11/28/2018 at 2:18 PM, saarg said:

That is not going to happen I'm afraid. We only do templates for our own containers. 

Aren't you being rather hypocritical then, because the docker compose file listed for diskover does have:

  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    volumes:
      - ${DOCKER_HOME}/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
  redis:
    container_name: redis
    image: redis:alpine
    volumes:
      - ${HOME}/docker/redis:/data

Shouldn't the references to elasticsearch and redis be removed from the yaml for the same reasoning you just gave for not having a template? Isn't a yaml fundamentally a template?

4 hours ago, dockerPolice said:

Aren't you being rather hypocritical then, because the docker compose file listed for diskover does have:


  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    volumes:
      - ${DOCKER_HOME}/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
  redis:
    container_name: redis
    image: redis:alpine
    volumes:
      - ${HOME}/docker/redis:/data

Shouldn't the references to elasticsearch and redis be removed from the yaml for the same reasoning you just gave for not having a template? Isn't a yaml fundamentally a template?

 

🤐

7 hours ago, doremi said:

Don't mean to be rude but should this container be in CA?

 

I think this container is too difficult to configure, as there are too many missing dependencies that require manual intervention.

 

In its defence, I would never have found this without it being in CA. It's overkill for what I need it for (basically a Treesize replacement), but I've been playing with unRAID for 30 days now, and I managed to figure it out. It could do with a better guide; the current one assumes you already know a lot, but then I find that with a lot of Linux guides. However, it's a good learning experience and I, for one, am grateful to have found it in CA.

8 hours ago, doremi said:

Don't mean to be rude but should this container be in CA?

 

I think this container is too difficult to configure, as there are too many missing dependencies that require manual intervention.

 

Redis is already in CA, so the only thing to add is elasticsearch. The info on how to set it up is in the Readme on github; you only have to translate it to unRAID.

 

I didn't know there was a requirement of easy setup to be in CA. 

 

There are no plans to make a guide to set it up as far as I know. 

On 11/30/2018 at 11:03 AM, MacroPower said:

Would it be possible to automatically attach a date or integer of some kind to the end of the index? Every time I run a crawl, it has the exact same index name. For me this (maybe unrelated) has resulted in all files showing duplicates.

If you don't specify diskover- as the index arg, it will automatically use a date timestamp for the current day, like diskover-<date>.
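As an illustration of that naming scheme (the exact date format is an assumption for the example, not taken from the diskover source):

```python
from datetime import date

def default_index_name(prefix="diskover-"):
    """Illustrative only: build a per-day index name such as
    'diskover-2018.12.04' so each crawl lands in its own index."""
    return prefix + date.today().strftime("%Y.%m.%d")

print(default_index_name())
```

With a distinct index per crawl, older crawls remain available for comparison instead of being overwritten.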

On 12/4/2018 at 7:02 PM, saarg said:

Redis is already in CA, so the only thing to add is elasticsearch. The info on how to set it up is in the Readme on github; you only have to translate it to unRAID.

 

I didn't know there was a requirement of easy setup to be in CA. 

 

There are no plans to make a guide to set it up as far as I know. 

hackerman is working on a guide for the linuxserver.io blog, and that should be released soon.

Edited by shirosai
On 11/26/2018 at 11:56 PM, OFark said:

Another issue: CustomTags are lost when you re-install the container. A restart doesn't affect them, but any reconfiguration of the docker container does. Files are still tagged, but the custom tags I've used are not listed in the context dropdown or on the admin page.

Custom tags for the diskover-web UI are stored in the customtags.txt file in the web root directory. When the container is first created, customtags.txt.sample gets renamed to customtags.txt.

8 hours ago, shirosai said:

Custom tags for the diskover-web UI are stored in the customtags.txt file in the web root directory. When the container is first created, customtags.txt.sample gets renamed to customtags.txt.

 

I haven't spoken to @hackerman, but it would be nice to have an option to specify a config folder for all persistent modifications. It makes it a lot easier to use in a container. Now we have to symlink stuff, and that is not the perfect scenario.

We always use /config inside a container for configs that should persist across an update to the container. I don't know how much you know about containers, but to update a container it's first deleted and then pulled from dockerhub again.
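The symlink approach mentioned above can be sketched like this; scratch directories stand in for the real web root and /config volume, whose actual paths inside the container are assumptions here:

```python
import os
import tempfile

# Scratch layout standing in for the container's web root and /config volume.
demo = tempfile.mkdtemp()
webroot = os.path.join(demo, "webroot")
config = os.path.join(demo, "config")
os.makedirs(webroot)
os.makedirs(config)

# The container creates customtags.txt in the web root on first start.
with open(os.path.join(webroot, "customtags.txt"), "w") as f:
    f.write("Custom Tag 1\n")

# Move the file to the persistent volume, then symlink it back into the
# web root: the app still finds it, but the data survives re-creation of
# the container (only the /config volume persists).
src = os.path.join(webroot, "customtags.txt")
dst = os.path.join(config, "customtags.txt")
os.replace(src, dst)
os.symlink(dst, src)

print(open(src).read().strip())  # still readable through the symlink
```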

Are you on our discord? Better to talk there, I think 🙂

On 12/17/2018 at 3:54 AM, OFark said:

Me again,

I'm getting errors that look like Redis isn't installed. This seems to have happened since Redis updated a few days ago. Is there a compatibility issue?

Yes, I am getting lots of errors as well:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/rq-dashboard", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3088, in <module>
    @_call_aside
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3072, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3101, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 576, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 589, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 783, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (redis 2.10.6 (/usr/lib/python3.6/site-packages), Requirement.parse('redis>=3.0.0'), {'rq'})
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 574, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 892, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 783, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (redis 2.10.6 (/usr/lib/python3.6/site-packages), Requirement.parse('redis>=3.0.0'), {'rq'})
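The ContextualVersionConflict in that traceback just means the redis-py package shipped in the container (2.10.6) is older than what rq now requires (redis>=3.0.0), so it is indeed a compatibility issue introduced by the rq update. A plain version comparison (simplified parsing, for illustration) shows the mismatch:

```python
def parse_version(v):
    """Parse a simple dotted version string like '2.10.6' into a tuple
    of integers, so versions compare numerically rather than as strings."""
    return tuple(int(part) for part in v.split("."))

installed = "2.10.6"   # redis-py in the container, per the traceback
required = "3.0.0"     # minimum rq demands ('redis>=3.0.0')

print(parse_version(installed) >= parse_version(required))  # False
```

The fix on the image side is to ship a redis-py release that satisfies rq's requirement.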

 


1. I'm new to this tool and I can't get it to work :/

 

I need redis... installed and all OK, I think.

 

But Diskover and elasticsearch... I can't figure them out.

 

2. How can I enable or use gource? I think that's a fun thing to watch. I see there are two ways: 1. from a log, 2. live. When I install it on my Windows machine, how can I enable/connect it?
