[Support] Linuxserver.io - diskover



I don't know how to contact the linuxserver guys; I wonder why your ddclient isn't available for unRAID. I've installed it manually via the Docker repo, but I can't access the config file. (I need to chmod 777 it after each restart.)

 

Could you port it to unRAID? Thanks, I'll need this, because the other ddclient containers available for unRAID are too old to update Cloudflare.

 

Also, it doesn't seem to be possible to update Cloudflare DNS. I think I've followed the correct syntax, but it just said it won't try again because it failed last time, and I can't find a log or anything...

 

Thanks in advance.

Edited by nuhll
Link to comment
11 hours ago, nuhll said:

I don't know how to contact the linuxserver guys; I wonder why your ddclient isn't available for unRAID. I've installed it manually via the Docker repo, but I can't access the config file. (I need to chmod 777 it after each restart.)

 

Could you port it to unRAID? Thanks, I'll need this, because the other ddclient containers available for unRAID are too old to update Cloudflare.

 

Also, it doesn't seem to be possible to update Cloudflare DNS. I think I've followed the correct syntax, but it just said it won't try again because it failed last time, and I can't find a log or anything...

 

Thanks in advance.

 

It's not like we are invisible....

Not going to change the permissions on the file. Use a proper way to edit the file and you will have access.

The reason there is no ddclient template is that I always forget to make it, then I remember it when I'm not at home, and when I'm home, it slips my mind again....

 

I think the Cloudflare stuff is fixed, but you might need to modify your config. Check our GitHub: https://github.com/linuxserver/docker-ddclient/pull/13
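
For reference, a Cloudflare block in ddclient.conf generally looks something like the sketch below. This is a generic example in the style of ddclient's sample config, not the exact change from that PR; the zone, login, key, and hostname are all placeholders.

```
# Generic ddclient Cloudflare example (placeholders throughout).
# Older ddclient builds may expect slightly different keys.
protocol=cloudflare, \
zone=example.com, \
login=you@example.com, \
password=your-cloudflare-api-key \
sub.example.com
```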

Link to comment
21 hours ago, nuhll said:

I don't know how to contact the linuxserver guys; I wonder why your ddclient isn't available for unRAID. I've installed it manually via the Docker repo, but I can't access the config file. (I need to chmod 777 it after each restart.)

 

Could you port it to unRAID? Thanks, I'll need this, because the other ddclient containers available for unRAID are too old to update Cloudflare.

 

Also, it doesn't seem to be possible to update Cloudflare DNS. I think I've followed the correct syntax, but it just said it won't try again because it failed last time, and I can't find a log or anything...

 

Thanks in advance.

It's not like we've got a forum / discord server or even IRC..

Link to comment
1 hour ago, j0nnymoe said:

It's not like we've got a forum / discord server or even IRC..

Thanks, I didn't know about the Discord.

 

Quote

It's not like we are invisible....

Not going to change the permissions on the file. Use a proper way to edit the file and you will have access.

The reason there is no ddclient template is that I always forget to make it, then I remember it when I'm not at home, and when I'm home, it slips my mind again....

 

I think the Cloudflare stuff is fixed, but you might need to modify your config. Check our GitHub: https://github.com/linuxserver/docker-ddclient/pull/13
 

 

 

Proper way to edit the file? xD Like what? Anyway, if I chmod 777 it, I can edit it, and the program itself also seems to work just fine with that. But it's just annoying.

 

That link didn't help me; I never had a "server" directive included. I used the template you gave with the config.

 

I'll post in Discord.

 

*reminds you to port it to unRAID*

 

 

Edited by nuhll
Link to comment
  • 2 weeks later...

@shirosai thank you very much for your effort on this! I have found it to be very fast in crawling and searching 👌😊

 

I wanted to ask if anyone knows about crawling using the workers. I don't want it to spin up the drives constantly, so I was thinking of setting up the diskover.cfg to run every day like this:

 

```
[crawlbot]
; continuous scanner
; time to sleep (seconds) between checking for directory changes
sleeptime = 86400
; number of threads for checking directories, setting this to num of cores x2 is a good starting point
threads = 8
```

 

Is this the right way to do it, or do I need to use a cron job? I can see a flag in the template:

 

Run a crawl as a cron job (optional): false

Container Variable: USE_CRON

 

I am unsure whether I can call something in the Docker container through the User Scripts plugin to trigger the workers/crawler, or whether I need to install one of the cron management UIs listed on the GitHub page (crontab-ui or cronkeep). Something like the sketch below is what I have in mind.
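
Roughly this, from the User Scripts plugin; the container name `diskover`, the script path `/app/diskover/diskover.py`, the mount point, and the index name are guesses on my part, not values taken from the image:

```
#!/bin/bash
# Hypothetical User Scripts entry: trigger a one-off diskover crawl
# inside the running container. Adjust the container name, script
# path, mount point, and index name to match your install.
docker exec diskover \
    python /app/diskover/diskover.py \
    -d /data \
    -i diskover-$(date +%Y%m%d) \
    -a -O
```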

 

Cheers

Link to comment
  • 2 weeks later...
On 11/26/2018 at 6:20 AM, OFark said:

So, I deleted all the Elasticsearch data, reinstalled Elasticsearch, and reset the chown.

Deleted and reinstalled Diskover

And it works.

 

It seems that if you don't put in the right details the first time, you need to clean up and start again. It looks like the config files are written to appdata on the first docker run and then not rewritten on config changes. It also seems like it messed up the Elasticsearch data.

 

Still, a hint to anyone working on this (a rough command sketch of the same order follows the list):

  1. Install Redis and Elasticsearch; despite what it says, they are not optional.
  2. chown -R 1000:1000 /mnt/user/appdata/elasticsearch5/
  3. Start Redis and Elasticsearch.
  4. Set the hosts for these two services when you first install Diskover. If you get it wrong, remove it, clear the appdata folder, and reinstall with the correct settings. Check the port numbers; they should be good from the start, but check.
  5. Check diskover on port 9181; it should show you workers doing stuff. If not, you started Diskover before Redis or Elasticsearch.
  6. If diskover asks you for indexes one and two, your Elasticsearch data is corrupt; delete it and restart Elasticsearch and Diskover.
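
A minimal command-line sketch of that order, assuming the linuxserver images and the appdata paths above; the container names, image tags, ports, and environment variable names here are illustrative assumptions, not exact template values:

```
# 1. Redis first
docker run -d --name redis -p 6379:6379 redis

# 2. Elasticsearch 5.6.x, with its data folder owned by UID/GID 1000
chown -R 1000:1000 /mnt/user/appdata/elasticsearch5/
docker run -d --name elasticsearch \
    -p 9200:9200 \
    -v /mnt/user/appdata/elasticsearch5:/usr/share/elasticsearch/data \
    elasticsearch:5.6.16

# 3. Diskover last, pointed at both services (variable names assumed)
docker run -d --name diskover \
    -p 80:80 -p 9181:9181 \
    -e REDIS_HOST=192.168.1.10 -e ES_HOST=192.168.1.10 \
    -v /mnt/user/appdata/diskover:/config \
    -v /mnt/user:/data \
    linuxserver/diskover
```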

Thank you for this. I was able to get it running!! Much appreciated. 

  • Upvote 1
Link to comment
  • 1 month later...

I'm hoping I can get some help here as I've been unable to get any response on the LSIO Discord. Is Diskover capable of deleting files? I used -finddupes and tagged files for deletion but haven't been able to figure out how to delete files other than actually browsing to the target directories and deleting the files individually.

Link to comment

Hi, 

 

1. Is it possible to create multiple indices for compartmentalizing all of my files? Is it as simple as adding additional container variables, and if so, what would the additional variables need to be named for proper use?

 

2. The index prefix name is slightly confusing. It indicates that it's a prefix and is optional; however, in the diskover GUI I was expecting to see some kind of GUID or other alphanumeric after the hyphen, and the index is just named "diskover-". So is that inaccurate, and should it really be treated as the index name rather than simply a prefix?

 

3. Is there a way to add multiple mount points to be crawled and indexed? I have some SMB shares that I want indexed as well as the local content that's on the unRAID array, but I have to set the default mount point to simply /mnt/, which I'm wary of doing since I expect it will index what it finds on the individual disks as well as the user shares.

 

I think this is a pretty neat utility, so I hope these questions can spark further interest and discussion on using it. Thanks.

Edited by dajinn
Link to comment
On 6/4/2019 at 3:06 PM, Ryonez said:

Came back to it after leaving it for a night; still no clue what's happening.

The install is a little weird; I had to do a few uninstalls, cleans, and reinstalls in particular orders when shit was just randomly breaking.

 

Changing the container config in the UI doesn't seem to update the actual config in the files themselves, so this is kind of a problem if you forget to swap out the generic hostnames for the actual IPs or the actual hostname.

 

If I were going to tell someone how to install this, I would say they need a clean slate and to install in this order: Redis, Elasticsearch, then Diskover. Start Redis before Elasticsearch, start Elasticsearch before Diskover, and Diskover at the end. It should 'just work'. When you get the initial screen, I believe you just need to pick your index under index 1 and click "select", and it will crawl your files. The actual crawling does not take that long.

 

One thing I ran into while getting this up and running: on one of my attempts Redis was just randomly not working, so the crawl wasn't doing anything even though there were 'workers'. I ended up uninstalling everything again, clearing all of the directories, and reinstalling in that order. I also believe I used Elasticsearch 5.6.16.

Link to comment
1 hour ago, dajinn said:

If I were going to tell someone how to install this, I would say they need a clean slate and to install in this order: Redis, Elasticsearch, then Diskover. Start Redis before Elasticsearch, start Elasticsearch before Diskover, and Diskover at the end. It should 'just work'. When you get the initial screen, I believe you just need to pick your index under index 1 and click "select", and it will crawl your files. The actual crawling does not take that long.

And still getting the same issue.
 

1 hour ago, dajinn said:

Changing the container config in the UI doesn't seem to update the actual config in the files themselves, so this is kind of a problem if you forget to swap out the generic hostnames for the actual IPs or the actual hostname.

Yup, noticed this very quickly, and it's something not mentioned in the documentation.

I'm done with this. It shouldn't be hard; everything is there. Yet it just spits out `No diskover indices found in Elasticsearch. Please run a crawl and come back.`
There's no indication of anything actually doing anything, and no guide as to what to expect at the start. It just doesn't work, for no obvious reason. In my opinion, right now this needs either to be updated with better docs on dealing with issues and how to find out what's actually going wrong, or to be looked at for removal.
And I'm not saying that lightly. I'm a firm supporter of linuxserver's work for the community, but this thing is just a mess.

  • Like 1
Link to comment

Everything is working for me, and I love this container! But I'm confused about the tags... so it found duplicates and I set the tag as delete. Does it not delete the stuff? What do the tags do? Is there a way to make it automatically delete stuff by tagging?

 

Thanks

Link to comment
Yup, noticed this very quickly, and it's something not mentioned in the documentation.

I'm done with this. It shouldn't be hard; everything is there. Yet it just spits out `No diskover indices found in Elasticsearch. Please run a crawl and come back.`
There's no indication of anything actually doing anything, and no guide as to what to expect at the start. It just doesn't work, for no obvious reason. In my opinion, right now this needs either to be updated with better docs on dealing with issues and how to find out what's actually going wrong, or to be looked at for removal.
And I'm not saying that lightly. I'm a firm supporter of linuxserver's work for the community, but this thing is just a mess.
I cannot replicate any of the behaviour you're describing, so I'm completely at a loss as to what to tell you. It doesn't need removing. It's working as far as I can tell.

Sent from my Mi A1 using Tapatalk

Link to comment
3 hours ago, CHBMB said:

I cannot replicate any of the behaviour you're describing, so I'm completely at a loss as to what to tell you. It doesn't need removing. It's working as far as I can tell.

Fair enough. It just doesn't seem to work on my end, and as mentioned, there's no apparent reason why. There are no errors, and it doesn't seem to actually do anything.
Without it being in the docs, I'm also not sure what to expect, so I can't do anything with this.

For now I'm going to use WizTree64. It's a tad easier; maybe I'll hit this up again sometime to see if I can get it to do something.

Link to comment
2 hours ago, Ryonez said:

Fair enough. It just doesn't seem to work on my end, and as mentioned, there's no apparent reason why. There are no errors, and it doesn't seem to actually do anything.
Without it being in the docs, I'm also not sure what to expect, so I can't do anything with this.

For now I'm going to use WizTree64. It's a tad easier; maybe I'll hit this up again sometime to see if I can get it to do something.

Did you read the Readme linked in the first post about how to set it up?

There are more things to do than just install this container.

Link to comment
1 minute ago, saarg said:

Did you read the Readme linked in the first post about how to set it up?

There are more things to do than just install this container.

I'm aware of that; I followed the instructions on Docker Hub.

Redis and Elasticsearch were set up, but there are no errors other than Redis saying it's not going to have high performance on unRAID.

It gets here:
```
Once running the URL will be http://host-ip/ initial application spinup will take some time so please reload if you get an empty response.
```
There's the response I mentioned above, but otherwise nothing happens: no info, nothing saying it's processing, nada.

If it's failing because of this:
```
If you simply want the application to work you can mount these to folders with 0777 permissions, otherwise you will need to create these users host level and set the folder ownership properly.
```
Then the documentation needs unRAID instructions added. It's not in the template info. And again, it should error on permission issues, which it isn't.
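
For what it's worth, a minimal way to satisfy that README note on unRAID might look like the following; the appdata paths are assumptions about a typical install, not taken from the template:

```
# Hypothetical paths -- point these at wherever the containers'
# volumes actually live on your array.
chmod -R 0777 /mnt/user/appdata/diskover
chmod -R 0777 /mnt/user/appdata/elasticsearch5
```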

Link to comment
9 hours ago, Ryonez said:

I'm aware of that; I followed the instructions on Docker Hub.

Redis and Elasticsearch were set up, but there are no errors other than Redis saying it's not going to have high performance on unRAID.

It gets here:
```
Once running the URL will be http://host-ip/ initial application spinup will take some time so please reload if you get an empty response.
```
There's the response I mentioned above, but otherwise nothing happens: no info, nothing saying it's processing, nada.

If it's failing because of this:
```
If you simply want the application to work you can mount these to folders with 0777 permissions, otherwise you will need to create these users host level and set the folder ownership properly.
```
Then the documentation needs unRAID instructions added. It's not in the template info. And again, it should error on permission issues, which it isn't.

Which information for unRAID are you referring to?

Did you set the vm.max_map_count=262144?
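
(For reference, that's a host-level sysctl. A minimal sketch of applying it, where persisting it via the go file is an assumption about a typical unRAID setup:)

```
# Apply immediately on the host; Elasticsearch needs this to start.
sysctl -w vm.max_map_count=262144

# To survive reboots on unRAID, append the same command to the go file.
echo "sysctl -w vm.max_map_count=262144" >> /boot/config/go
```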

 

Post your docker run command for all three containers.

Link to comment
33 minutes ago, saarg said:

Which information for unRAID are you referring to?

The template information that's filled when using CA.

 

33 minutes ago, saarg said:

Did you set the vm.max_map_count=262144?

Yes, the instructions for this are in two places.

 

34 minutes ago, saarg said:

Post your docker run command for all three containers.

I don't even know what the full command is, and I've since removed the Docker images after trying three full times to get it working from start to finish.

Again, nothing is throwing an error. They all seem to function, but just don't do anything.


Just noticed though: the template I was looking at most was Elasticsearch's. diskover had this:
```
Elasticsearch is needed for this container. Use 5.6.x.
```
I just used what was in the template, which seems to point to 6.6.2.

Did this really silently fail because of this?

Link to comment
4 hours ago, Ryonez said:

The template information that's filled when using CA.

 

Yes, the instructions for this are in two places.

 

I don't even know what the full command is, and I've since removed the Docker images after trying three full times to get it working from start to finish.

Again, nothing is throwing an error. They all seem to function, but just don't do anything.


Just noticed though: the template I was looking at most was Elasticsearch's. diskover had this:
```
Elasticsearch is needed for this container. Use 5.6.x.
```
I just used what was in the template, which seems to point to 6.6.2.

Did this really silently fail because of this?

Yes, it fails because you didn't follow the information in the readme regarding the Elasticsearch version. Diskover only supports that version.
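
A quick way to confirm which version is actually running: the root endpoint of the standard Elasticsearch REST API reports it. The host and port below are placeholders for whatever you mapped:

```
# Look for "number": "5.6.x" in the JSON response.
curl -s http://192.168.1.10:9200 | grep '"number"'
```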

Link to comment

Hi @saarg, please let me know if you can help with setting up the crawlers using the cron job settings.

 

I am trying to get the crawling working on a daily basis. I have changed the diskover.cfg to this:

 

```
[crawlbot]
; continuous scanner
; time to sleep (seconds) between checking for directory changes
sleeptime = 86400
; number of threads for checking directories, setting this to num of cores x2 is a good starting point
threads = 8
```

 

My template has the following settings:

RUN_ON_START=true

USE_CRON=true

 

If I go to the /config/diskover/crontabs folder, there is a file named "abc" with the following:

```
0 3 * * * /app/dispatcher.sh >> /config/log/diskover/dispatcher.log 2>&1
```

 

It seems I have to restart the diskover container to get the crawler to work. Do I have to set up a User Scripts plugin schedule to shell into the container and call the Python script on a set interval? e.g.

```
$ python /path/to/diskover.py -d /rootpath/you/want/to/crawl -i diskover-indexname -a -O
```

Link to comment

@chaz as long as you have `USE_CRON=true` set as an environment variable, running in cron should be automatic. You should have an `abc` user cron file in your config directory that you can change to run on your own schedule (I believe the default is 3 am every day). Any changes to this file, though, will require the container to be restarted.
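
For example, to move the default daily 3 am run to a weekly Sunday crawl, the `abc` file could look like this (the schedule is illustrative; the command is the one already in the file):

```
# /config/diskover/crontabs/abc
# min  hour  day-of-month  month  day-of-week  command
0 3 * * 0 /app/dispatcher.sh >> /config/log/diskover/dispatcher.log 2>&1
```

Then restart the container so the new schedule is picked up, e.g. `docker restart diskover` (assuming the container is named diskover).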

  • Upvote 1
Link to comment
17 hours ago, Ryonez said:

I'm aware of that; I followed the instructions on Docker Hub.

Redis and Elasticsearch were set up, but there are no errors other than Redis saying it's not going to have high performance on unRAID.

It gets here:
```
Once running the URL will be http://host-ip/ initial application spinup will take some time so please reload if you get an empty response.
```
There's the response I mentioned above, but otherwise nothing happens: no info, nothing saying it's processing, nada.

If it's failing because of this:
```
If you simply want the application to work you can mount these to folders with 0777 permissions, otherwise you will need to create these users host level and set the folder ownership properly.
```
Then the documentation needs unRAID instructions added. It's not in the template info. And again, it should error on permission issues, which it isn't.

It looks like this is probably due to Elasticsearch, then. As an aside, it's a good illustration of what info we need in order to help.

Docker is immutable; the container I run is the same as the container you run. The ONLY way we have of testing that is if you provide your docker run command (and logs can be helpful). Anything else is mostly noise.
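
If the original command is long gone, one hedged way to recover the equivalent settings from a still-installed container is docker inspect; this is a general Docker technique rather than the forum's usual method, and the container name here is an assumption:

```
# Dump the env vars, volume mappings, and port bindings the container
# was actually created with -- enough to reconstruct the run command.
docker inspect -f '{{json .Config.Env}}' diskover
docker inspect -f '{{json .HostConfig.Binds}}' diskover
docker inspect -f '{{json .HostConfig.PortBindings}}' diskover
```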

 

This is why I have a Docker run command link in my signature.

 

 

 

Link to comment
