[Support] Linuxserver.io - diskover



4 hours ago, StarsLight said:

I ran into another issue: "No diskover indices found in Elasticsearch. Please run a crawl and come back." when visiting the page. Any suggestions?

Hi! As I mentioned, my ability to help with debugging is limited, especially if you're 100% certain you followed all the steps exactly.

 

The only thing I can recommend is to check the workers GUI and see whether there are any workers to spin up, and to confirm that you actually asked for those workers to be spun up via the docker parameters.
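If it helps, a couple of hedged checks from the host; the RUN_ON_START/USE_CRON names and the rq-dashboard port are my recollection of this container's template, so verify them against yours:

docker exec diskover env | grep -iE 'cron|run_on_start'   # confirm the toggles you set
docker logs diskover | grep -i worker                     # see whether any workers were launched
# the workers GUI (rq-dashboard) is usually at http://<unraid-ip>:9181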

  • 2 months later...
On 12/16/2019 at 8:32 PM, coolspot said:

What's the best way to crawl multiple paths and remote paths? How do I pass paths into diskover? Should I just run the treewalk client on each remote device to crawl? What about devices that don't support Python?

Mount the remote host with either smb/cifs or nfs and use diskover's -d <path> cli option to crawl that mount point. You can also use FUSE for S3 and other cloud storage. The treewalk client is just an additional crawl option for diskover; it's not required for remote hosts, you can just use diskover.py. The linuxserver.io container uses /data as the mount point that gets crawled when the container runs. You can always point /data at something else, or you can shell into the container and run diskover.py.
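A minimal sketch of that workflow; the share, mount options, and index name are placeholders, and the -i/--index flag is from my memory of diskover v1's CLI, so confirm with diskover.py -h:

# mount the remote host's share locally (use -t nfs for NFS exports)
mount -t cifs //remotehost/share /mnt/remote -o username=me,password=secret
# crawl the mount point into its own index
python diskover.py -d /mnt/remote -i diskover-remotehost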

  • 4 weeks later...

I didn't spot a particular problem like the one I'm seeing, so here I go. I believe I have all the minimum requirements listed in the readme, and I'm getting data on the landing page, but when I try to interact with anything in analytics or duplicate files I get either this error, "Unable to open customtags.txt! Check if exists and permissions.", or a blank page with the query that was used in the search bar.

 

I don't see any errors in the logs, but I don't have any additional logging enabled either.  Is there any additional logging I can enable to expose the cause?

  • 5 weeks later...
On 4/1/2020 at 8:03 PM, Sn3akyP3t3 said:

I didn't spot a particular problem like the one I'm seeing, so here I go. I believe I have all the minimum requirements listed in the readme, and I'm getting data on the landing page, but when I try to interact with anything in analytics or duplicate files I get either this error, "Unable to open customtags.txt! Check if exists and permissions.", or a blank page with the query that was used in the search bar.

 

I don't see any errors in the logs, but I don't have any additional logging enabled either.  Is there any additional logging I can enable to expose the cause?

This has been fixed in the latest lsio release; alternatively, you can shell into the diskover container and, in the diskover-web root directory, copy customtags.txt.sample to customtags.txt.
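For the manual route, something like this should do it; the /app/diskover-web/public path matches the container log quoted later in this thread:

docker exec -it diskover bash
cd /app/diskover-web/public
cp customtags.txt.sample customtags.txt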

  • 4 months later...
  • 4 weeks later...
  • 5 weeks later...

I find this tool quite valuable for monitoring disk-usage growth so that I can project needed upgrades. However, I'm not finding an easy way to automatically track duplicate files. It would be great if Unraid's file system took care of duplicates on its own, but it can't, so some additional manual effort is required here. Thankfully diskover provides a means of duplicate file detection, but it's a chore to have to run it every time. Is there any way to schedule this so it runs automatically after every scan? I couldn't come up with something on my own to solve this, so I'm asking the collective hive :)
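One hedged way to do this: --finddupes is a diskover.py flag (it comes up later in this thread), so a scheduled job, e.g. via the User Scripts plugin, could chain it onto each crawl. A sketch, assuming diskover.py lives under /app/diskover inside the container and that -i names the index; verify both against your install:

# crawl, then tag duplicate files in the same index
docker exec diskover python /app/diskover/diskover.py -d /data -i diskover-index
docker exec diskover python /app/diskover/diskover.py -i diskover-index --finddupes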

  • 1 month later...

Hey guys, diskover looks great, but on my system the cron doesn't seem to be working. The abc crontab is there and set to the default of 3:00 every day.
USE_CRON is also set to true, but it doesn't work. The --finddupes setting is also not working for me.

 

EDIT: It seems the workers are running, but they don't create a new index...


Do I have to write the "--" in front of finddupes in the Unraid docker config for diskover? I also saw something about "diskover.py not listening"; how do I fix that?

Edited by alabiana
  • 2 weeks later...

So I figured I would give this a whirl, and I'm getting the error "No data in Elasticsearch index! After a crawl starts it can take up to 30 sec (refresh time) for index to be updated... Reload." I followed this guide from raqisasim.

 

Installation right now looks like:

  • Redis version = 6.0.9
  • Elasticsearch version = 5.6.16
  • Diskover version = v1.5.0.9

I saw in the latest diskover GitHub readme that it mentions an auth token; would that be the reason it's not crawling any data? Has anyone who set this up recently with the latest versions got any tips?

 

The UI loads fine and I don't see any errors or warnings in the log; all the services appear to have started successfully.

Edited by Sean M.
  • 3 weeks later...
On 12/16/2019 at 2:35 PM, StarsLight said:

 

Checked the diskover log.

 

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-diskover-config: executing...
Initial run of dispatcher in progress
[cont-init.d] 50-diskover-config: exited 0.
[cont-init.d] 60-diskover-web-config: executing...
ln: failed to create symbolic link '/app/diskover-web/public/customtags.txt': File exists
[cont-init.d] 60-diskover-web-config: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

 

No special error messages. Does "failed to create symbolic link" cause a problem?

Yes "fail to create symbolic link"  does cause problem. 

It tries to store something/file inside the docker I believe, but you removed the config folder in appdata, made it really confused.

But on the other hand, storing data inside the docker image is not a good docker image should do I think. 

 

What you need to do to fix this is remove docker's config folder from appdata, also need to remove the docker. You can keep the docker image. 

This way all the data this docker created will be removed, and you go reinstall this docker. 
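A sketch of that cleanup on the Unraid host; the appdata path follows the walkthrough in the next post, so adjust it to yours:

docker stop diskover && docker rm diskover   # remove the container, keep the image
rm -rf /mnt/cache/appdata/diskover           # remove the stale config folder
# then reinstall the container from Apps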

 


From github https://github.com/shirosaidev/diskover

Requirements:

Elasticsearch 5.6.x (local or AWS ES Service); Elasticsearch 6 is not supported, and ES 7 is supported only in the Enterprise version
Redis 4.x

 

Working steps (if you do anything wrong, remove the docker container and its config folder in appdata, but you can keep the docker image to avoid downloading it again):

 

0. Install Redis from Apps (jj9987's Repository); no config needed.

 

1.

Install the CA User Scripts plugin.
Create a new script named vm.max_map_count.

Navigate to

\flash\config\plugins\user.scripts\scripts\vm.max_map_count

Open the 'description' file and write a readable description of what this script does.

Open the 'script' file; the contents of the script are as follows:

#!/bin/bash
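# Elasticsearch requires vm.max_map_count of at least 262144 to mmap its index files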
sysctl -w vm.max_map_count=262144

Set the script schedule to "At Startup of Array".

Run the script once. 
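To confirm the setting took effect, read it back on the host:

sysctl vm.max_map_count
# expected: vm.max_map_count = 262144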

 

Navigate to "Docker" tab and then the "Docker Repositories" sub-tab in the unRAID webui

Enter the URL https://github.com/OFark/docker-templates in the "Template repositories" field

Click on the "Save" button

Click back to "Docker" tab and then click on the "Add Container" button

Click on the "Template" dropdown menu and select the Elasticsearch5 image

Use the preset config; no changes needed.

2. Go to Apps, find diskover, and click Install.

Put in the IPs of the Redis and Elasticsearch servers, which should be your Unraid IP, not 127.0.0.1 or localhost.

ES_USER : elastic

ES_PASS : changeme

Change the appdata path to /mnt/cache/appdata/diskover/

For the data path I use /mnt/user/, which will index everything under the user shares.

I changed the webgui port to 8081 because I have qBittorrent on 8080.

Add a new variable, DISKOVER_AUTH_TOKEN

The value comes from https://github.com/shirosaidev/diskover/wiki/Auth-token

Click start, and you should be good to go with the diskover webui; select the first index and happy searching. It might take half a minute for the first index to appear.
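For reference, here is the same setup expressed as a single docker run sketch; the variable names ES_HOST, ES_PORT, REDIS_HOST, REDIS_PORT and the container's internal port 80 are my assumptions about the template, so check them before relying on this:

docker run -d --name=diskover \
  -p 8081:80 \
  -e ES_HOST=192.168.1.10 -e ES_PORT=9200 \
  -e ES_USER=elastic -e ES_PASS=changeme \
  -e REDIS_HOST=192.168.1.10 -e REDIS_PORT=6379 \
  -e DISKOVER_AUTH_TOKEN=<token from the wiki page above> \
  -v /mnt/cache/appdata/diskover:/config \
  -v /mnt/user:/data:ro \
  linuxserver/diskover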

 

For the whole process, you do not seem to need to change any folder/file permissions. 

 

One problem I ran into: while the file index was at 94.5% it got stuck there for hours.

So I had to delete the three dockers and do it all again; this time it reached 100% and seems to be OK.

But this also means this setup can occasionally get stuck while indexing.

 

OFark's docker template uses Elasticsearch 5, which might be a bit old for the current version of diskover, or maybe running it from docker caused this.

OFark's docker image is a preconfigured, working one.

If anyone has time, maybe try building a version 6 or 7 docker image to work with the current version of diskover.

 

 

 

Edited by rampage
  • 3 months later...
  • 3 weeks later...
On 1/12/2021 at 8:18 AM, rampage said:

From github https://github.com/shirosaidev/diskover ... (full walkthrough quoted above)

Thanks for this.

Followed the instructions exactly and it worked perfectly!

 

  • 3 months later...

Hi guys,

For some reason my elasticsearch5 container stops by itself pretty regularly... like once a day.
I've found this in the logs:
 

[2021-08-31T22:37:27,197][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[VcsZS9A][refresh][T#1]], exiting

 

This is definitely a recurring issue with my elasticsearch5 docker. Am I the only one who experiences this container stopping regularly with this error in the logs?

Thanks
 

EDIT: I should mention it was working fine before

Edited by cam217
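Not a fix, but a hedged place to start looking: a fatal error in an ES 5.x refresh thread is often an OutOfMemoryError, so it may be worth pulling the lines around the crash and, if heap is the culprit, sizing the JVM via Elasticsearch's standard ES_JAVA_OPTS variable (values illustrative):

docker logs elasticsearch5 | grep -iE -B2 -A10 'fatal error|outofmemory'
# if it is heap, set in the container template, e.g.:
# ES_JAVA_OPTS=-Xms2g -Xmx2g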
  • 2 weeks later...

Very useful tool; thanks to the creator.
After following @rampage's guidance I have a fully working diskover, with one exception: in the diskover log I repeatedly see these errors:

Traceback (most recent call last):
File "/usr/bin/rq-dashboard", line 11, in <module>
load_entry_point('rq-dashboard==0.6.1', 'console_scripts', 'rq-dashboard')()
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 490, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2862, in load_entry_point
return ep.load()
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2462, in load
return self.resolve()
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2468, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3.8/site-packages/rq_dashboard/cli.py", line 112, in <module>
def run(
File "/usr/lib/python3.8/site-packages/click/decorators.py", line 247, in decorator
_param_memo(f, OptionClass(param_decls, **option_attrs))
File "/usr/lib/python3.8/site-packages/click/core.py", line 2482, in __init__
super().__init__(param_decls, type=type, multiple=multiple, **attrs)
File "/usr/lib/python3.8/site-packages/click/core.py", line 2108, in __init__
raise ValueError(
ValueError: 'default' must be a list when 'multiple' is true.

Does anyone have ideas on how to resolve this? I already tried pip install -U pip, with no results. Thanks in advance!
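The traceback points at an incompatibility between rq-dashboard 0.6.1 and click 8 (which added the "'default' must be a list" check) rather than at diskover itself; pinning click back inside the container is one hedged workaround:

docker exec -it diskover bash
pip install 'click<8.0'   # rq-dashboard 0.6.x predates click 8's stricter option validation
# restart the container afterwards so rq-dashboard relaunches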


I have the same error, but I stopped using diskover because of that elasticsearch error I can't fix, and it makes diskover unusable. Elasticsearch worked fine for almost a year and now it stops all the time; it can't run more than 12 hours straight...

 

Diskover is nice software, but I won't spend hours trying to fix elasticsearch, as I have better things to do and it's not a game-changer for me.

 

EDIT: Maybe I should mention that I also have it running on another server that works fine (no elasticsearch issue there yet), and the error is there as well. Sometimes it's better not to worry about errors that don't affect usage.

Edited by cam217
  • 3 weeks later...

I'm getting the same error in the diskover logs as well.

I also see that diskover is constantly using 6-8% CPU.

I've also noticed that the diskover GitHub page is no longer available, and under "Elasticsearch info" and "Elasticsearch health / index sizes" on the diskover admin page, I get this:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "current license is non-compliant for [security]",
        "license.expired.feature" : "security"
      }
    ],
    "type" : "security_exception",
    "reason" : "current license is non-compliant for [security]",
    "license.expired.feature" : "security"
  },
  "status" : 403
}

 

I can log in to elasticsearch without any error.

Edited by gamerkonks
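That 403 is X-Pack's license enforcement kicking in, not a diskover problem; if you don't need auth on a LAN-only ES 5.x instance, the usual way out is disabling security in elasticsearch.yml (the config path inside your container may differ):

echo 'xpack.security.enabled: false' >> /path/to/elasticsearch.yml
docker restart elasticsearch5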