[Support] FoxxMD - elasticsearch



Application Name: elasticsearch
Application Site: https://www.elastic.co/

Docker Hub: https://hub.docker.com/_/elasticsearch

Template Repo: https://github.com/FoxxMD/unraid-docker-templates

 

Overview

 

Elasticsearch is an open-source, RESTful, distributed search and analytics engine built on Apache Lucene. Since its release in 2010, Elasticsearch has quickly become the most popular search engine, and is commonly used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases.

 

This template defaults to Elasticsearch 6.6.2. This can easily be changed by modifying the Repository field (in Advanced View) on the template to use any available tag from Docker Hub. Note: I have not tested any Alpine variants.
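For example, to run a different release you would set the Repository field to an image:tag string from Docker Hub (the tag here is only an illustration; pick whichever tag you need):

elasticsearch:7.6.0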

 

How To Use

 

You must follow these steps if you are using Elasticsearch 5 or above -- vm.max_map_count must be increased or this container will not work

  1. Install CA User Scripts from Community Apps
  2. Create a new script named vm.max_map_count with these contents:
#!/bin/bash
sysctl -w vm.max_map_count=262144

  3. Set the script schedule to At Startup of Array
  4. Run the script now to apply the change (you can verify it with the command shown below)
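
To verify that the setting took effect, you can read it back from the unRAID terminal (a quick sanity check, not part of the original steps); it should print 262144:

sysctl vm.max_map_count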

 

General Instructions (all versions)

 

Check the default exposed ports and volume mappings in the template to ensure they don't conflict with anything else on your server. Happy searching!
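
If you are unsure whether the default ports are free on your host, one way to check from the unRAID terminal before starting the container (a sketch, assuming netstat is available and you kept the default 9200/9300 mappings):

netstat -tlnp | grep -E '9200|9300'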

 

Acknowledgements

  • 2 weeks later...

Thank you very much for the Docker template. It works nicely.

 

I have a request, if possible. Would you please install the "ingest-attachment" plugin in the Docker image? Nextcloud needs it for indexing. It can be added with:

 

/bin/elasticsearch-plugin install ingest-attachment

  • 3 weeks later...

The correct command inside the Docker container would be /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment ... Is there an easy way in unRAID to run this command after the Docker container has started?

On 5/1/2019 at 2:26 PM, lousek said:

The correct command inside the Docker container would be /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment ... Is there an easy way in unRAID to run this command after the Docker container has started?

I don't know about automatically running it, but you could put a script in CA User Scripts that makes it a one-click operation after the container is running. I haven't tested this but it should be close to working:

 

#!/bin/bash

# Return name of docker container with elasticsearch in it -- assuming only one container for elasticsearch is present
con="$(docker ps --format "{{.Names}}" | grep -i elasticsearch)"

# execute command inside container
docker exec -i "$con" /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment
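
If you want to confirm the plugin actually got installed, you can list the installed plugins the same way afterwards (my addition, reusing the same container-name lookup):

docker exec -i "$con" /usr/share/elasticsearch/bin/elasticsearch-plugin list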

 

On 4/2/2019 at 1:57 PM, FoxxMD said:

Hey, thanks for the tips and directions. I'm running into an issue that I'm not sure how to resolve, since this Docker thing is new to me. After following your directions and creating the script with the command in it, all I get is "/tmp/user.scripts/tmpScripts/vm.max_map_count/script: line 2: $'sysctl\357\273\277': command not found".

 

Any hints?


I have been trying to set up Nextcloud full text search using Elasticsearch on unRAID, but I cannot get the search to return any results. Where should I look, or what could I be missing in the configuration?

 

root@f59188334fbf:/config/www/nextcloud# sudo -u abc ./occ files:scan userid --all
Starting scan for user 1 out of 7 (admin)
Starting scan for user 2 out of 7 (admin2)
Starting scan for user 3 out of 7 (Ben)
Starting scan for user 4 out of 7 (Bryce)
Starting scan for user 5 out of 7 (Fritz)
Starting scan for user 6 out of 7 (Sam)
Starting scan for user 7 out of 7 (Steph)
+---------+-------+--------------+
| Folders | Files | Elapsed time |
+---------+-------+--------------+
| 30      | 127   | 00:00:02     |
+---------+-------+--------------+
 

  • 2 weeks later...

When I go to the web GUI for this, all I get is the output below. Should I be doing something else?

 

{ "name" : "HQdTNvT", "cluster_name" : "docker-cluster", "cluster_uuid" : "0jYRnGHBTimntwyla4MEIg", "version" : { "number" : "6.6.2", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "3bd3e59", "build_date" : "2019-03-06T15:16:26.864148Z", "build_snapshot" : false, "lucene_version" : "7.6.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "You Know, for Search" }


@vw-kombi elasticsearch is just a backend service. It does not provide a UI but rather is used by other applications to query for data. When you are visiting the HTTP port you are making a simple API request and ES sends a response in the same format it would send all other data in -- so the text you got back in your first post is basically a sample of how all data returned from ES will look.
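
For example, any HTTP client can query it directly; hitting the root endpoint or the cluster health endpoint returns the same style of JSON (an illustration -- substitute your server's IP and mapped port):

curl http://YOUR_IP:9200/
curl http://YOUR_IP:9200/_cluster/health?pretty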

 

There are a few web-based frontends for querying and viewing this data:

https://www.elastic.co/guide/en/sense/current/introduction.html

https://github.com/appbaseio/dejavu

 

But from your description of what you want I have a feeling that's not what you are looking for.

 

I did find this tutorial for a simple full-text search, https://github.com/triestpa/Guttenberg-Search, but you would have to write a new Dockerfile for it and publish it to Docker Hub to make it usable on unRAID.

  • 3 months later...

How would I go about getting the rest of the ELK stack on unRAID? I have taken the official Docker images of Kibana and Logstash from Docker Hub and installed them; however, I need to know the file mappings, port mappings, variables and such. I don't really know anything about Docker, so I was hoping that you had made those available on unRAID as well, but unfortunately no, and no one else has done it, it seems.

 

Thanks,

 

/Klaus


@klausagnoletti I'm sorry, but I can't help you with your specific issues; I don't run an ELK stack and don't have experience with Logstash/Kibana. You probably aren't finding what you're looking for because:

 

1. ELK is an orchestration of services. Unraid lets you run individual Docker containers (docker-compose is not available), so an ELK stack can't be easily packaged up and deployed as "one docker" for unRAID.

2. Unraid's apps are basically just templates/configuration presets for Docker images from Docker Hub. There is nothing special about Docker images run on unRAID, other than that the app creator can provide some convenience to other unRAID users. These templates don't provide any extra functionality not already found in the Docker image from Docker Hub.

 

Having said that, you can easily run any Docker image from Docker Hub, as you have discovered. I suspect your issue is unfamiliarity with the ELK stack, not Docker -- Docker is usually pretty transparent. If you can set up an ELK stack in a regular environment you shouldn't have much trouble doing it with Docker containers.

 

Some tips that might help you with template configuration for Docker containers in unRAID, though:

 

variable -- an environment variable provided to the Docker container. Elasticsearch requires the variable discovery.type to be provided, so you would create a variable named discovery.type and set its value to single-node.

 

directory/folder mapping -- maps a local directory (on your array) to a directory of your choosing inside the Docker container. If the ELK stack requires shared files between the services, you could set up a folder like /mnt/user/appdata/elk that is mapped into all three containers.

 

port -- a local port (on the host machine) mapped to a port in the Docker container.

If you need to point Docker container A to Docker container B, you give it the host IP plus the local (mapped) port of container B.

For example, Elasticsearch exposes port 9200, and the port mapping could be 9200 (container) -> 8200 (host). If Kibana needed to access Elasticsearch, you would set a variable such as elastic_endpoint to 192.168.0.1:8200 (or whatever your host IP is). A minimal sketch of how these pieces show up in a docker run command follows below.
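
As an illustration of how those three pieces end up in the docker run command that unRAID generates from a template (example values only, based on the Elasticsearch template discussed in this thread):

docker run -d --name='elasticsearch' \
  -e 'discovery.type'='single-node' \
  -v '/mnt/user/appdata/elasticsearch/data':'/usr/share/elasticsearch/data':'rw' \
  -p '8200:9200/tcp' \
  'elasticsearch:6.6.2'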

  • 4 weeks later...
On 5/8/2019 at 2:08 PM, pepper said:

Did you manage to get this working?

I installed the Nextcloud and Elasticsearch Dockers.

Installed the necessary apps:

Full text search

Full text search - Elasticsearch Platform

Full text search - Files

 

Configured the Nextcloud GUI to use Elasticsearch for full text search

 

Scheduled the script at Docker start:
#!/bin/bash
# Return name of docker container with elasticsearch in it -- assuming only one container for elasticsearch is present
con="$(docker ps --format "{{.Names}}" | grep -i elasticsearch)"

# execute command inside container
docker exec -i "$con" /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment

 

But...

I guess this is not enough. I do not get results. Does anyone have the necessary step by step to complete this task?

 

  • 4 months later...
On 9/24/2019 at 3:12 AM, jsspanjer said:

It appears to be an issue with this version of elasticsearch. I was finally able to get it working after switching to 7.6.0.

  • 4 months later...

Hello. I want to share this, because I was looking for a solution for a long time myself.
I have Nextcloud production, nextcloud repository: production-apache, version 17.

1. I entered the command from this topic in the terminal:

sysctl -w vm.max_map_count=262144

 

2. I installed the 4 full text search plug-ins (the ones with the magnifying glass icon) in Nextcloud, via the Nextcloud web interface.
 

3. I installed the ingest-attachment plugin for Elasticsearch. Entered in the terminal:

 

docker exec -it elasticsearch bash /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment

 

4. Then I made the settings in Nextcloud, as in the screenshots below.
 

5. Then I started indexing. In the terminal:
 

docker exec --user www-data nextcloud php occ fulltextsearch:index
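
You can also sanity-check the full text search configuration the same way (my note, using the test command that appears later in this thread and assuming the same container names):

docker exec --user www-data nextcloud php occ fulltextsearch:test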

______________
I hope this helps someone.

 

(screenshots attached: Page 0.jpg, 1 Page.jpg, page 2.jpg, page 3.jpg)

  • 3 weeks later...
On 7/8/2020 at 11:16 PM, muwahhid said:


Thank you for your help. It worked so far for me, except for the last step: when I want to start the indexing it gives me the following error:


root@Homeserver:~# docker exec --user www-data nextcloud php occ fulltextsearch:index
unable to find user www-data: no matching entries in passwd file

Do you have an idea what I have to do to solve this error?

2 minutes ago, Wingold said:


 

I have the permissions on my "appdata/nextcloud" folder set to the www-data user. You most likely have a different user, hence the error.
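
A quick way to check whether the www-data user even exists inside your Nextcloud container (my suggestion, assuming the container is named nextcloud; some Nextcloud images use a different user such as abc, as seen earlier in this thread):

docker exec nextcloud id www-data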

(screenshot attached: nextcloud.jpg)

  • 1 month later...

For those that are stuck like me on the ingest-attachment plugin issue:

 

You need to stop the elasticsearch Docker and restart it after you have executed the command to install the plugin, so that it gets loaded into Elasticsearch.

 

Here are my steps:

1.) Get the elasticsearch Docker (7.9.1 works) and do a clean install (delete the old elasticsearch folder in /mnt/user/appdata/)

 

2.) Download the full text search packages from the Nextcloud app store (at least the 3 packages)

 

3.) Configure your Nextcloud search platform to "Elasticsearch" and the address of the servlet to: http://YOUR_IP:9200/

It needs to point at the port of the REST API.

 

4.) Install the plugin for Elasticsearch, either by opening a console inside the elasticsearch Docker and typing /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch ingest-attachment

OR

through the User Scripts Unraid plugin as stated above.

 

5.) Restart the elasticsearch container (see the one-liner below these steps)

 

6.) Test everything by opening a new shell in the nextcloud container, then navigate to the occ directory (/var/www/html) and type

./occ fulltextsearch:test

 

If everything is ok, then you can continue with the index: ./occ fulltextsearch:index
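
Incidentally, the restart in step 5 is a one-liner from the unRAID terminal (assuming the container is named elasticsearch):

docker restart elasticsearch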

 

  • 1 month later...

I've been using Elasticsearch 7.9.1. After a Docker image reinstall, I'm failing to get this to work again. I've moved to 7.9.3 in case the problem was a bug that has since been fixed.

 

I have a heap size issue. The pull shows it completed successfully; see the environment variable:

-e 'ES_JAVA_OPTS'='-Xms4g -Xmx4g' which I entered. Later in the run command it shows this as well:

 

-e "ES_JAVA_OPTS"="-Xms512m -Xmx512m"

 

When I do an "echo $ES_JAVA_OPTS" inside the container, it shows that only 512m of memory is being used. It appears as if the 512m is taking precedence. I've also set the minimum and maximum heap size in the jvm.options file. It doesn't seem to make a difference.
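
For anyone else trying to confirm what the container actually received, two checks from the host (a sketch; elasticsearch2 matches the container name in the run command below, and 9200 is the default mapped port):

docker exec elasticsearch2 bash -c 'echo $ES_JAVA_OPTS'
curl -s http://localhost:9200/_nodes/jvm?pretty | grep heap_max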

 

Below is the container pull:

 


root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='elasticsearch2' --net='bridge' -e TZ="Australia/Brisbane" -e HOST_OS="Unraid" -e 'discovery.type'='single-node' -e 'ES_JAVA_OPTS'='-Xms4g -Xmx4g' -p '9200:9200/tcp' -p '9300:9300/tcp' -v '/mnt/user/appdata/elasticsearch/data':'/usr/share/elasticsearch/data':'rw' -e "ES_JAVA_OPTS"="-Xms512m -Xmx512m" --ulimit nofile=262144:262144 'elasticsearch:7.9.3'

82d34c8772e2291a617e86a9c4f21dd227a392c3d02f1522384e7b02332566fa

The command finished successfully!

I've tried this for most of the day. I've not had this problem before, and I urgently need to get this up and running. Can anybody advise?


{"type": "server", "timestamp": "2020-10-24T16:07:56,878+10:00", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "82d34c8772e2", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
{"type": "server", "timestamp": "2020-10-24T16:07:56,879+10:00", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "82d34c8772e2", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms4g, -Xmx4g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-10094349604196076055, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -XX:MaxDirectMemorySize=268435456, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }

 


Okay,

So for those following after me: I'm not very experienced with Docker, but I pulled down the container manually, and in doing so changed the environment variables, setting the heap size to 2g.


docker run -d --name='elasticsearch' --net='bridge' -e TZ="Australia/Brisbane" -e HOST_OS="Unraid" -e 'discovery.type'='single-node' -e 'ES_JAVA_OPTS'='-Xms2g -Xmx2g' -p '9200:9200/tcp' -p '9300:9300/tcp' -v '/mnt/user/appdata/elasticsearch/data':'/usr/share/elasticsearch/data':'rw' -e "ES_JAVA_OPTS"="-Xms2g -Xmx2g" --ulimit nofile=262144:262144 'elasticsearch:7.9.3'
 

I changed the second ES_JAVA_OPTS entry in the command from -Xms512m -Xmx512m to 2g instead. However, I've just noticed that there are two ES_JAVA_OPTS entries; the other one is the value I set myself. This was a copy and paste from the original container install. No matter what I did, I couldn't get the heap size to change from 512m to any other size by setting ES_JAVA_OPTS as a variable in the CA settings for the container or via jvm.options inside the container itself.

 

Can anybody shed light on this? I would like to just pull the container down without all this faff.
