[Support] Linuxserver.io - diskover



4 hours ago, StarsLight said:

I ran into another issue: "No diskover indices found in Elasticsearch. Please run a crawl and come back." when visiting the page. Any suggestions?

Hi! As I mentioned, my ability to help debug is limited, especially if you're 100% certain you followed all the steps exactly.

 

The only thing I can recommend is to check the worker GUI to see whether there are any workers to spin up, and to confirm that you actually requested workers via the docker parameters.

On 12/16/2019 at 8:32 PM, coolspot said:

What's the best way to crawl multiple paths and remote paths, and how do I pass paths into diskover? Should I just run the treewalk client on each remote device to crawl? What about devices that don't support Python?

Mount the remote host with either smb/cifs or nfs and use diskover's -d <path> CLI option to crawl that mount point. You can also use FUSE for S3 and other cloud storage. The treewalk client is just an additional crawl option for diskover; it's not required for remote hosts, you can just use diskover.py. The linuxserver.io container uses /data as the mount point that gets crawled when the container runs. You can always point /data at something else, or shell into the container and run diskover.py.
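As a concrete sketch (the hostname, export, and mount paths below are placeholders, and the location of diskover.py depends on your install):

```shell
# Mount the remote host over NFS (smb/cifs works the same way with -t cifs)
mount -t nfs remotehost:/export/share /mnt/remote

# Crawl that mount point with diskover's -d option
python3 diskover.py -d /mnt/remote
```

With the lsio container you would instead map the mount into the container's /data volume and let the container's normal crawl pick it up.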


I didn't spot an existing report of the problem I'm seeing, so here I go. I believe I meet all the minimum requirements listed in the readme, and I'm getting data on the landing page, but when I try to interact with anything in Analytics or Duplicate Files I get either the error "Unable to open customtags.txt! Check if exists and permissions." or a blank page showing the query that was used in the search bar.

 

I don't see any errors in the logs, but I don't have any additional logging enabled either.  Is there any additional logging I can enable to expose the cause?

On 4/1/2020 at 8:03 PM, Sn3akyP3t3 said:

I'm getting either this error, "Unable to open customtags.txt! Check if exists and permissions.", or a blank page [...]

This has been fixed in the latest lsio release. Alternatively, shell into the diskover container and, in the diskover-web root directory, copy customtags.txt.sample to customtags.txt.
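For example (the container name is whatever you called yours, and the web root path is an assumption based on where the container keeps customtags.txt):

```shell
# Shell into the running diskover container (container name may differ)
docker exec -it diskover /bin/bash

# Inside the container: copy the sample file into place
cd /app/diskover-web/public
cp customtags.txt.sample customtags.txt
```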


I find this tool quite valuable for monitoring disk usage growth so that I can project needed upgrades. However, I'm not finding an easy way to automatically track duplicate files. It would be great if unRAID's file system took care of duplicates on its own, but it can't, so some additional manual effort is required here. Thankfully diskover provides duplicate file detection, but it's a chore to have to run it every time. Is there any way to schedule it so it runs automatically after every scan? I couldn't come up with something on my own to solve this, so I'm asking the collective hive :)
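The nearest I've gotten is sketching a second crontab entry inside the container: diskover v1's CLI has a --finddupes flag, so something like this might work, but it's an untested sketch and the script path and times are assumptions:

```shell
# Example abc crontab entries inside the container (paths assumed, untested):
# nightly crawl at 03:00, then a duplicate-file scan at 04:00
0 3 * * * python3 /app/diskover/diskover.py -d /data
0 4 * * * python3 /app/diskover/diskover.py -d /data --finddupes
```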


Hey guys, diskover looks great, but on my system cron doesn't seem to be working. The abc crontab is there and set by default to 3:00 every day.
USE_CRON is also set to true, but it doesn't work. The --finddupes setting isn't working for me either.

 

Edit: it seems the workers are running, but they don't create a new index...


Do I have to write the "--" in front of finddupes in the unRAID docker config of diskover? I also saw something about "diskover.py not listening"; how do I fix this?


So I figured I would give this a whirl, and I'm getting the error "No data in Elasticsearch index! After a crawl starts it can take up to 30 sec (refresh time) for index to be updated... Reload." I followed the guide from raqisasim.

 

Installation right now looks like:

  • Redis version = 6.0.9
  • Elasticsearch version = 5.6.16
  • Diskover version = v1.5.0.9

I saw that the latest diskover GitHub readme mentions an auth token; could that be the reason it's not crawling any data? Has anyone set this up recently with the latest versions and have any tips?

 

The UI loads fine and I don't see any errors or warnings in the logs; all the services appear to have started successfully.

On 12/16/2019 at 2:35 PM, StarsLight said:

 

I checked the diskover log.

 

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-diskover-config: executing...
Initial run of dispatcher in progress
[cont-init.d] 50-diskover-config: exited 0.
[cont-init.d] 60-diskover-web-config: executing...
ln: failed to create symbolic link '/app/diskover-web/public/customtags.txt': File exists
[cont-init.d] 60-diskover-web-config: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

 

No special error messages. Does the "failed to create symbolic link" cause a problem?

Yes, "failed to create symbolic link" does cause a problem.

I believe it tries to store a file inside the docker container, but you removed the config folder in appdata, which really confused it.

That said, storing data inside the docker image is not something a good docker image should do, in my opinion.

To fix this, remove the docker's config folder from appdata and also remove the container (you can keep the docker image).

That way, all the data this docker created will be removed, and you can reinstall it.

 


From github https://github.com/shirosaidev/diskover

Requirements:

Elasticsearch 5.6.x (local or AWS ES Service); Elasticsearch 6 is not supported, and ES 7 is only supported in the Enterprise version
Redis 4.x

 

Working steps (if you do anything wrong, remove the container and its config folder in appdata; you can keep the docker image to avoid downloading it again):

 

0. Install Redis from Apps (jj9987's Repository); no config needed.

 

1.

Install the CA User Scripts plugin
Create a new script named vm.max_map_count

navigate to

\flash\config\plugins\user.scripts\scripts\vm.max_map_count

Open the 'description' file and write a readable description of what this script does.

Open the 'script' file; the contents of the script are as follows:

#!/bin/bash
sysctl -w vm.max_map_count=262144

Set the script schedule to "At Startup of Array".

Run the script once.
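To confirm the setting took effect (Elasticsearch refuses to start unless vm.max_map_count is high enough), you can read it back:

```shell
# Should print 262144 after the script has run
sysctl -n vm.max_map_count

# Equivalent check via procfs
cat /proc/sys/vm/max_map_count
```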

 

Navigate to "Docker" tab and then the "Docker Repositories" sub-tab in the unRAID webui

Enter the URL https://github.com/OFark/docker-templates in the "Template repositories" field

Click on the "Save" button

Click back to "Docker" tab and then click on the "Add Container" button

Click on the "Template" dropdown menu and select the Elasticsearch5 image

Use the preset config; no changes needed.

2. Go to Apps, find diskover, and click Install

Enter the IPs of the Redis and Elasticsearch servers, which should be your unRAID IP, not 127.0.0.1 or localhost

ES_USER : elastic

ES_PASS : changeme

Change the appdata path to /mnt/cache/appdata/diskover/

For the data path I use /mnt/user/, which will index everything under the user shares

I changed the webgui port to 8081 because I have qBittorrent on 8080

Add a new variable, DISKOVER_AUTH_TOKEN

The value comes from https://github.com/shirosaidev/diskover/wiki/Auth-token

Click Start, and you should be good to go with the diskover webui; select the first index and happy searching. It might take half a minute for the first index to appear.

 

For the whole process, you do not seem to need to change any folder/file permissions. 
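For reference, the template settings above correspond roughly to this docker run command. The IP is a placeholder for your unRAID host, the token comes from the wiki page linked above, and the variable names follow the lsio template of that era, so double-check them against your own template:

```shell
# Sketch of the unRAID template as a docker run command (values are examples)
docker run -d --name=diskover \
  -e REDIS_HOST=192.168.1.10 \
  -e ES_HOST=192.168.1.10 \
  -e ES_USER=elastic \
  -e ES_PASS=changeme \
  -e DISKOVER_AUTH_TOKEN=<your-token> \
  -p 8081:80 \
  -v /mnt/cache/appdata/diskover:/config \
  -v /mnt/user:/data \
  linuxserver/diskover
```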

 

One problem I hit: the file index got to 94.5% and was stuck there for hours.

So I had to delete the three dockers and do it all again; this time it reached 100% and seems to be OK.

But this also means this setup can sometimes get stuck while indexing.

OFark's docker template uses Elasticsearch 5, which might be a bit old for the current version of diskover; or maybe running it from docker caused this.

OFark's docker image is a preconfigured, working one.

If anyone has time, maybe try building a version 6 or 7 docker image to work with the current version of diskover.

 

 

 

On 1/12/2021 at 8:18 AM, rampage said:

From github https://github.com/shirosaidev/diskover [...]

Thanks for this.

Followed the instructions exactly and it worked perfectly!

 

