[Support] Linuxserver.io - diskover



7 hours ago, saarg said:

Yes it fails because you didn't follow the information in the Readme regarding the version of elasticsearch. Diskover only supports that version.

I followed the Docker Hub instructions for the most part, which don't mention this limitation. And I must have glossed over the instructions in the template because it looked like a two-line description, not instructions like the ones in elasticsearch's template. After juggling three Docker image setups, it was just gone.
 

4 hours ago, CHBMB said:

It looks like this is probably due to Elasticsearch then.  As an aside, it's a good illustration of what info we need to help. 

Mmm, please definitely add that to the Docker Hub instructions, maybe under `Application Setup`? I wish this didn't fail silently as well, but I feel that's more an issue with how diskover is written than with the image itself.
 

4 hours ago, CHBMB said:

Docker is immutable, the container I run is the same as the container you run.  The ONLY way we have of testing that is if you provide your docker run command (and logs can be helpful).  Anything else is mostly noise.

 

This is why I have a Docker Run command link in my signature.

I'll keep that in mind and add it to future help requests. At the time it was asked of me, I had given up and removed the images completely.

Maybe after I sort out network isolation with my docker containers I'll give this all another go.

Link to comment
On 6/9/2019 at 9:17 AM, cpom1 said:

Everything is working for me and I love this container! But I'm confused about the tags... so it found duplicates and I set the tag as delete. Does it not delete the stuff? What do the tags do? Is there a way to make it automatically delete stuff by tagging?

 

Thanks

So does no one know about the tagging? 

Link to comment
7 hours ago, saarg said:

I don't use diskover, so I don't know how it really works. I just know how to make it run. I'll ping @hackerman and see if he knows. Not sure if he is hackerman here also, though 😁

 

No worries, you have still been helpful by putting me in touch with hackerman! Cheers

 

6 hours ago, hackerman-lsio said:

@chaz as long as you have `USE_CRON=true` set as an environment variable, running in cron should be automatic. You should have an `abc` user cron file in your config directory that you can change to run on your own schedule (I believe the default is 3am every day). Any changes to this file though will require the container to be restarted.
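
(For illustration: the cron job is a single line in that file. The file location below is my assumption, but the job line matches the one posted later in this thread; a daily 3am run would look like:)

# assumed location: the abc user's cron file under /config
# min hour day month weekday  command
0 3 * * * /app/dispatcher.sh >> /config/log/diskover/dispatcher.log 2>&1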

@hackerman-lsio it doesn't seem to be running every day at 3am :( Is there a way to check if the cron is actually scheduled? If the crawlbot setting in diskover.cfg is set, does it mean the cronjob is disabled, or vice versa?

 

Appreciate your help! 

 

Link to comment
1 hour ago, Ryonez said:

I followed the Docker Hub instructions for the most part, which don't mention this limitation. And I must have glossed over the instructions in the template because it looked like a two-line description, not instructions like the ones in elasticsearch's template. After juggling three Docker image setups, it was just gone.
 

Mmm, please definitely add that to the Docker Hub instructions, maybe under `Application Setup`? I wish this didn't fail silently as well, but I feel that's more an issue with how diskover is written than with the image itself.
 

I'll keep that in mind and add it to future help requests. At the time it was asked of me, I had given up and removed the images completely.

Maybe after I sort out network isolation with my docker containers I'll give this all another go.

 

If you look at the template for diskover it says to use elasticsearch 5.6.x

Link to comment
33 minutes ago, saarg said:

If you look at the template for diskover it says to use elasticsearch 5.6.x

As mentioned in the section you quoted:

 

2 hours ago, Ryonez said:

I followed the Docker Hub instructions for the most part, which don't mention this limitation. And I must have glossed over the instructions in the template because it looked like a two-line description, not instructions like the ones in elasticsearch's template. After juggling three Docker image setups, it was just gone.

Gone, as in, I had like 5-6 places I was getting instructions from. Something that's normally used as a description field ended up being glossed over, especially as it didn't look like instructions among all the actual instructions I was reading.

I had already identified that I missed it, why, and how it could've been prevented by adding it to the Docker Hub instructions. Telling me it's in the template again doesn't help, if I'm to be frank.

Link to comment
18 minutes ago, Ryonez said:

As mentioned in the section you quoted:

 

Gone, as in, I had like 5-6 places I was getting instructions from. Something that's normally used as a description field ended up being glossed over, especially as it didn't look like instructions among all the actual instructions I was reading.

I had already identified that I missed it, why, and how it could've been prevented by adding it to the Docker Hub instructions. Telling me it's in the template again doesn't help, if I'm to be frank.

Look, let's just leave this conversation. It's going nowhere. The info was there, you missed it. We can't spoon-feed every bit of information. It's on the diskover site here: https://github.com/shirosaidev/diskover#requirements and it's in our template. If you miss the information, that's on you; there's a limit to how much we can do.

 

I had never used this before either, and I hadn't been involved in the dev of either diskover or the container, but I managed to set it up without any issues based on the documentation that was available.

 

We're not installing Windows .exe files here. We do our best to lower the bar, but by definition you're running a server; some applications are more complex to use than others, and more complicated to install than others. The only place we get this sort of complaint from is the Unraid forums, which, ironically, is the place where installing stuff is the easiest.

 

The problem really boiled down to the piece of info you missed, and you incorrectly assumed that the container was at fault. Whether you choose to install this again or not, I don't much care, but at least take some responsibility for your own error rather than moaning that we didn't make it easy enough for you.

Link to comment
1 hour ago, CHBMB said:

The problem really boiled down to the piece of info you missed, and you incorrectly assumed that the container was at fault. Whether you choose to install this again or not, I don't much care, but at least take some responsibility for your own error rather than moaning that we didn't make it easy enough for you.

Jesus, this really makes it off-putting for people to ask for help.
Is noticing that I missed something, and asking if it was the issue, not taking responsibility? *I noticed a mistake I made, and asked if that was causing the silent failures.*

Also, I outlined how and why I missed it, and offered an improvement that would make it harder to miss. I wasn't blaming anyone else for that in the slightest.

You need to keep in mind that you guys are good at this shit. I'm not dumb by any means, but I don't know everything, and something like this can be overwhelming, especially when you're trying to piece together several Dockers.

I'm happy to stop the conversation here; I just wanted to get that off my chest because of the way things started to turn. "We do our best to lower the bar" definitely felt like it would naturally follow on to "but if you're too dumb, you're outta luck". Logically I know that's not what you mean, so don't worry, but emotionally it does feel like an attack, kinda.

But, we think we have the issue. I'll look into it some other time, and if there's still a problem I'll pop back, provide stuff like the run commands, and see if we can hammer it out. As it stands, I feel like the mistake I made is what's feked it up.

Thank you very much for your help, guys. Even if I got a little upset, I do appreciate the help a lot. I hope you have a good day!

  • Like 1
Link to comment
On 6/11/2019 at 6:40 AM, chaz said:

 

No worries, you have still been helpful by putting me in touch with hackerman! Cheers

 

@hackerman-lsio it doesn't seem to be running every day at 3am :( Is there a way to check if the cron is actually scheduled? If the crawlbot setting in diskover.cfg is set, does it mean the cronjob is disabled, or vice versa?

 

Appreciate your help! 

 

The crawlbot section is separate from the cron scheduling in the container. The crawlbot section isn't used unless you are running diskover.py manually with the --crawlbot CLI arg. To update your index daily, you'll need to schedule it inside the container using cron.
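
If you want to double-check the schedule, something like this should work from the host (assuming your container is named diskover and a crontab utility is present in the image; the dispatcher path matches what's posted later in this thread):

# list the abc user's scheduled jobs inside the container
docker exec -it diskover crontab -u abc -l

# or kick off the dispatcher by hand to confirm the job itself runs
docker exec -it diskover /app/dispatcher.sh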

  • Upvote 1
Link to comment
On 6/11/2019 at 6:09 AM, cpom1 said:

So does no one know about the tagging? 

diskover and diskover-web don't delete files automatically. The tagging is for managing your files; from there you can use the diskover-web REST API, or export file lists from diskover-web, to run cleanup/archive scripts against those tagged files/dirs.
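
As a rough sketch only (the export file name and format here are hypothetical; adjust them to whatever you actually export from diskover-web, and review the list before deleting anything):

#!/bin/sh
# hypothetical cleanup pass: /tmp/tagged_delete.txt is assumed to be a
# list exported from diskover-web of files tagged "delete",
# one absolute path per line
while IFS= read -r path; do
    echo "deleting: $path"
    rm -f -- "$path"
done < /tmp/tagged_delete.txt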

Link to comment

I have deployed this using docker-compose, all containers are running

 


CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                                                                     NAMES
f8ecfd2d3a47        linuxserver/diskover                                  "/init"                  22 minutes ago      Up 22 minutes       443/tcp, 0.0.0.0:9181->9181/tcp, 8000/tcp, 0.0.0.0:9999->9999/tcp, 0.0.0.0:8090->80/tcp   diskover
b93950dc9ab5        docker.elastic.co/elasticsearch/elasticsearch:5.6.9   "/bin/bash bin/es-do…"   22 minutes ago      Up 22 minutes       9200/tcp, 9300/tcp                                                                        elasticsearch
2727c45e4df2        redis:alpine                                          "docker-entrypoint.s…"   22 minutes ago      Up 22 minutes       6379/tcp                                                                                  redis

However I have no indexing as the diskover container keeps throwing:

Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')

I can only find references to qumulo in a very old build of diskover, and then only in the GitHub code. I'm unsure what to do to get past this, as I also receive "No diskover indices found in Elasticsearch. Please run a crawl and come back." when I try to access the webUI, which makes sense if it never truly starts up. Obviously qumulo was an old method of indexing and does not appear supported, so I'm unsure why I get this error.

 

elasticsearch:5.6.9 is supported and running: "[2019-06-18T21:53:07,608][INFO ][o.e.n.Node               ] [6AoRXIq] started"

redis is also running: "1:M 18 Jun 2019 21:52:58.921 * Ready to accept connections"

 

I tried adjusting the abc crontab to run, in the hopes that it would generate an index, but that did not help either.

 

Manually running the indexer returns:

0 18 * * * /app/dispatcher.sh >> /config/log/diskover/dispatcher.log 2>&1
root@fileserver:/websites/diskover_config# docker exec -it diskover /app/dispatcher.sh
killing existing workers...
emptying current redis queues...
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 493, in connect
    sock = self._connect()
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 550, in _connect
    raise err
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 538, in _connect
    sock.connect(socket_address)
OSError: [Errno 99] Address not available

During handling of the above exception, another exception occurred:
...

 

Suggestions?

ERIC

Edited by egandt
added error from indexer
Link to comment
9 minutes ago, trurl said:

Not clear from your post, are you running this on Unraid?

No, deployed on Ubuntu Linux 18.04 using docker-compose: https://hub.docker.com/r/linuxserver/diskover

I had to redirect ports, and created a private network in an attempt to fix the error with accessing redis, but it's a reasonably straightforward deployment.

version: '2'
services:
  diskover:
    image: linuxserver/diskover
    container_name: diskover
    environment:
      - PUID=1012
      - PGID=999
      - TZ=America/New_York
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - ES_HOST=elasticsearch
      - ES_PORT=9200
      - ES_USER=elastic
      - ES_PASS=changeme
      - RUN_ON_START=true
      - USE_CRON=true
    volumes:
      - /websites/diskover_config/config:/config
      - /raid6:/data:ro
    ports:
      - 8090:80
      - 9181:9181
      - 9999:9999
    mem_limit: 4096m
    restart: unless-stopped
    depends_on:
      - elasticsearch
      - redis
    networks:
      - diskovernet
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    volumes:
      - /websites/diskover_config/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - diskovernet
  redis:
    container_name: redis
    image: redis:alpine
    volumes:
      - /websites/diskover_config/redis:/data
    networks:
      - diskovernet
networks:
  diskovernet:
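
If it helps diagnose, these are the kinds of connectivity checks I can run from inside the diskover container (assuming ping and nc exist in the image):

# confirm the diskover container can resolve and reach redis by name
docker exec -it diskover ping -c 1 redis
docker exec -it diskover nc -zv redis 6379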

ERIC

Link to comment
On 6/19/2019 at 7:21 AM, egandt said:

I have deployed this using docker-compose, all containers are running

 


CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                                                                     NAMES
f8ecfd2d3a47        linuxserver/diskover                                  "/init"                  22 minutes ago      Up 22 minutes       443/tcp, 0.0.0.0:9181->9181/tcp, 8000/tcp, 0.0.0.0:9999->9999/tcp, 0.0.0.0:8090->80/tcp   diskover
b93950dc9ab5        docker.elastic.co/elasticsearch/elasticsearch:5.6.9   "/bin/bash bin/es-do…"   22 minutes ago      Up 22 minutes       9200/tcp, 9300/tcp                                                                        elasticsearch
2727c45e4df2        redis:alpine                                          "docker-entrypoint.s…"   22 minutes ago      Up 22 minutes       6379/tcp                                                                                  redis

However I have no indexing as the diskover container keeps throwing:


Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')
Missing section from diskover.cfg, check diskover.cfg.sample and copy over, exiting. (No section: 'qumulo')

I can only find references to qumulo in a very old build of diskover, and then only in the GitHub code. I'm unsure what to do to get past this, as I also receive "No diskover indices found in Elasticsearch. Please run a crawl and come back." when I try to access the webUI, which makes sense if it never truly starts up. Obviously qumulo was an old method of indexing and does not appear supported, so I'm unsure why I get this error.

 

elasticsearch:5.6.9 is supported and running: "[2019-06-18T21:53:07,608][INFO ][o.e.n.Node               ] [6AoRXIq] started"

redis is also running: "1:M 18 Jun 2019 21:52:58.921 * Ready to accept connections"

 

I tried adjusting the abc crontab to run, in the hopes that it would generate an index, but that did not help either.

 

Manually running the indexer returns:


0 18 * * * /app/dispatcher.sh >> /config/log/diskover/dispatcher.log 2>&1
root@fileserver:/websites/diskover_config# docker exec -it diskover /app/dispatcher.sh
killing existing workers...
emptying current redis queues...
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 493, in connect
    sock = self._connect()
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 550, in _connect
    raise err
  File "/usr/lib/python3.6/site-packages/redis/connection.py", line 538, in _connect
    sock.connect(socket_address)
OSError: [Errno 99] Address not available

During handling of the above exception, another exception occurred:
...

 

Suggestions?

ERIC

The missing qumulo reference is an issue with the lsio container right now, since the newest config was released before the latest version of diskover, which removes the qumulo code. I just released v1.5.0-rc30 of diskover; when that gets packaged and released by lsio, this issue will be gone. For now you can roll back to a previous build of the lsio Docker Hub image, wait until the next release (which will include v1.5.0-rc30), or add a section to your diskover.cfg file with a [qumulo] header as a temp fix.
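
A sketch of that temp fix, using the config path from the compose file above (adjust to your own /config mount; if diskover then complains about missing options rather than a missing section, copy the full [qumulo] block from diskover.cfg.sample instead):

# append an empty [qumulo] section header so the config parser finds it
echo "" >> /websites/diskover_config/config/diskover.cfg
echo "[qumulo]" >> /websites/diskover_config/config/diskover.cfg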

Link to comment
  • 5 weeks later...

I've had diskover running more or less successfully for a week.

 

However, the "Time Change" analytics page is blank for me. Any ideas? This is one of the most important pieces of data for me, since I am most interested in my storage trends, which help me plan for future storage.

 

Update: Fixed with diskover-web v1.5.0: https://github.com/shirosaidev/diskover-web/issues/22. Update your Docker container.

Edited by T0rqueWr3nch
Link to comment
  • 2 weeks later...

Trying to get diskover installed and running in unRaid.

Installed the Docker. Installed elasticsearch and redis from Community Apps, but when I try to access the diskover webgui, it's unable to load.

If I go to the elasticsearch gui, all I get is the following output:

 

{
  "name" : "_8AGSmy",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "LMhKmpTERFWLv33avxj9qw",
  "version" : {
    "number" : "6.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "3bd3e59",
    "build_date" : "2019-03-06T15:16:26.864148Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

I have the elasticsearch script added to User Scripts and ran it as well.

I'm not sure what to put in for the REDIS_HOST or ES_HOST in the diskover setup either

 

Please be gentle. I'm pretty new at these things, but willing to learn as I go along. Thanks!

 

Link to comment
  • 4 weeks later...

I'm having a slight issue. I was originally running elasticsearch 6.x. I got through to the webgui on diskover and it told me to perform a crawl. I read through the documentation, which said I need elasticsearch 5.6.x, so I installed 5.6.16: deleted the previous appdata, installed elasticsearch. Now it runs, and I can access the webui of both diskover and elasticsearch. No errors in the elasticsearch logs, no errors in the diskover logs. Warnings in redis about THP and overcommit_memory.

 

Diskover still tells me to run a crawl from the webgui. Not sure where to go from here. Everything looks kosher.

 

EDIT: Well, I got it running, and I was able to get all of 1 database. However, now when I restart the container it doesn't initialize another crawl. I decided to uninstall it, remove everything in the appdata directory, reinstall it and set it all back up again, and now the crawlers end up hanging. Unless somebody can help me out with a way to get this running reliably, and the way the documentation states it should run, I'm done. I've spent all day trying to get this to actually do what it needs to do.

Edited by Surgikill
Link to comment
  • 1 month later...

So, after reviewing this thread, I followed OFark's steps, and finally got it working! (i.e. Diskover GUI loads and workers are running against my disks)

 

For those who are deeply confused, I wanted to expand on that worthy's notes, as there are a couple of points where I got stuck, and I suspect others are in similar situations:

  1. Check if you have the Redis, ElasticSearch, and/or Diskover Docker containers already installed
  2. If you do, note where your appdata folder is hosted for each, then remove the containers, then remove the appdata (removing the appdata is crucial! An older config file really tripped me up at one point). You'll likely need to SSH in or use the UnRAID Terminal in the GUI to cd to the appdata folder location and rm -rf them.
  3. This may not be required, however: to ensure the OS setting for ElasticSearch was set before I installed, I followed the steps at this comment that start at "You must follow these steps if using Elasticsearch 5 or above" (see the sketch after this list). IMPORTANT NOTE: I did NOT install this ElasticSearch container, just used the script instructions. For which container I did install, see Step 5, below.
  4. Install the Redis container via Community Apps. I used the copy of the official Redis Docker in jj9987's Repository (Author and Dockerhub are both "Redis"). It should not need any special config, unless you already have a container running on port 6379.
  5. To install ElasticSearch version 5, go to https://github.com/OFark/docker-templates/ and follow the steps in the README on that page; it'll have you set up an additional Docker Repository first. Note that, so far as I can tell, I did not need to do anything in OFark's Step 6 on that page, around adding values in Advanced View. If that turns out to be a mistake, I'll update this.
  6. At this point, I recommend checking your Redis and ElasticSearch 5 container logs just to ensure there are no weird errors, or the like.
  7. If it all looks good, install the Diskover container via Community Apps. To avoid the issues I ran into, ensure you have all the ports open to use (edit if not), and that you provide the right IP address and ports for both your Redis and ElasticSearch 5 containers. Also make sure you provide RUN_ON_START=true, and set the Elasticsearch user and password; if you don't give the latter, you'll get no Diskover GUI and be confused, like me :)
  8. Once Diskover starts, give it a minute or so, then go to its Workers GUI (usually at port 4040). You should see a set of workers starting to run thru your disks and pull info.
  9. From there, you should be able to go to the Diskover main GUI and see some data, eventually! As I wrapped this up, so did my scan; now to pass thru some parameters to get duplication and tagging working (I hope!)
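
For reference on step 3, here's a minimal sketch of the host-side setting Elasticsearch 5+ generally needs on Linux; the script instructions mentioned in step 3 likely amount to the same thing (run on the host; making it persist across reboots is a separate step):

# raise the mmap count limit that Elasticsearch 5+ requires
sysctl -w vm.max_map_count=262144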

 

I hope this helps -- Good luck, everyone! And thanks to Linuxserver for the Container, and OFark for templates and guidance that helped immensely in setting it up for UnRAID!

Edited by raqisasim
Grammar
Link to comment
  • 2 months later...
On 10/5/2019 at 4:32 AM, raqisasim said:

So, after reviewing this thread, I followed OFark's steps, and finally got it working! (i.e. Diskover GUI loads and workers are running against my disks)

 

For those who are deeply confused, I wanted to expand on that worthy's notes, as there are a couple of points where I got stuck, and I suspect others are in similar situations:

  1. Check if you have the Redis, ElasticSearch, and/or Diskover Docker containers already installed
  2. If you do, note where your appdata folder is hosted for each, then remove the containers, then remove the appdata (removing the appdata is crucial! An older config file really tripped me up at one point). You'll likely need to SSH in or use the UnRAID Terminal in the GUI to cd to the appdata folder location and rm -rf them.
  3. This may not be required, however: to ensure the OS setting for ElasticSearch was set before I installed, I followed the steps at this comment that start at "You must follow these steps if using Elasticsearch 5 or above". IMPORTANT NOTE: I did NOT install this ElasticSearch container, just used the script instructions. For which container I did install, see Step 5, below.
  4. Install the Redis container via Community Apps. I used the copy of the official Redis Docker in jj9987's Repository (Author and Dockerhub are both "Redis"). It should not need any special config, unless you already have a container running on port 6379.
  5. To install ElasticSearch version 5, go to https://github.com/OFark/docker-templates/ and follow the steps in the README on that page; it'll have you set up an additional Docker Repository first. Note that, so far as I can tell, I did not need to do anything in OFark's Step 6 on that page, around adding values in Advanced View. If that turns out to be a mistake, I'll update this.
  6. At this point, I recommend checking your Redis and ElasticSearch 5 container logs just to ensure there are no weird errors, or the like.
  7. If it all looks good, install the Diskover container via Community Apps. To avoid the issues I ran into, ensure you have all the ports open to use (edit if not), and that you provide the right IP address and ports for both your Redis and ElasticSearch 5 containers. Also make sure you provide RUN_ON_START=true, and set the Elasticsearch user and password; if you don't give the latter, you'll get no Diskover GUI and be confused, like me :)
  8. Once Diskover starts, give it a minute or so, then go to its Workers GUI (usually at port 4040). You should see a set of workers starting to run thru your disks and pull info.
  9. From there, you should be able to go to the Diskover main GUI and see some data, eventually! As I wrapped this up, so did my scan; now to pass thru some parameters to get duplication and tagging working (I hope!)

 

I hope this helps -- Good luck, everyone! And thanks to Linuxserver for the Container, and OFark for templates and guidance that helped immensely in setting it up for UnRAID!

 

 

Thanks for your guide.

I went through the steps, but the diskover GUI didn't display. Where should I start troubleshooting?

Link to comment
On 12/13/2019 at 10:30 AM, StarsLight said:

I went through the steps, but the diskover GUI didn't display. Where should I start troubleshooting?

First -- I'm no expert. I'm just a newbie around this, too! I can't make any promises that I can support deep debugging.

 

But always try looking at logs, first. Each Docker container has a log you can look at for troubleshooting.
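
On the command line, that's just, for example:

# follow the diskover container's log output (container name assumed)
docker logs -f diskover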

 

Second, I noted in Step 7 a point where I did fail to see the GUI, so I'd triple-check that Elasticsearch actually is up and functional, as well as carefully re-check the other steps. If that doesn't work, restart from ground zero (none of the relevant Dockers installed, appdata for those Dockers completely deleted as per Step 2) to ensure you've got the right config setup from the jump.

 

In short, this is all a bit complex, and in fact, since I also haven't gotten re-scanning to work (per Surgikill's comment above), I've set it all aside for now myself. But hopefully this'll help you!

Link to comment
On 12/15/2019 at 3:10 AM, raqisasim said:

First -- I'm no expert. I'm just a newbie around this, too! I can't make any promises that I can support deep debugging.

 

But always try looking at logs, first. Each Docker container has a log you can look at for troubleshooting.

 

Second, I noted in Step 7 a point where I did fail to see the GUI, so I'd triple-check that Elasticsearch actually is up and functional, as well as carefully re-check the other steps. If that doesn't work, restart from ground zero (none of the relevant Dockers installed, appdata for those Dockers completely deleted as per Step 2) to ensure you've got the right config setup from the jump.

 

In short, this is all a bit complex, and in fact, since I also haven't gotten re-scanning to work (per Surgikill's comment above), I've set it all aside for now myself. But hopefully this'll help you!

 

I checked the diskover log.

 

[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-diskover-config: executing...
Initial run of dispatcher in progress
[cont-init.d] 50-diskover-config: exited 0.
[cont-init.d] 60-diskover-web-config: executing...
ln: failed to create symbolic link '/app/diskover-web/public/customtags.txt': File exists
[cont-init.d] 60-diskover-web-config: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

 

No special error messages. Does "failed to create symbolic link" cause a problem?

Link to comment
