Local elasticsearch

Setup docker

   sudo apt-get install docker.io docker-compose

Add your user to the docker group so you can run docker commands without sudo (log out and back in for the new group to take effect).

   sudo usermod -G docker -a $USER
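
As a quick sanity check (assuming the docker daemon is already running), pick up the new group in your current shell and run a throwaway container:

   newgrp docker
   docker run --rm hello-world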

Elasticsearch

single-node instance

We will be reusing an existing docker image, so no specific Dockerfile is needed.

Create a docker-compose.yml file with the following content:

   version: '3'
   services:
     elasticsearch:
       container_name: elasticsearch-6.2.2
       image: "docker.elastic.co/elasticsearch/elasticsearch:6.2.2"
       ports:
         - "9200:9200"
         - "9300:9300"
       environment:
         - discovery.type=single-node

Note: This is a single-node instance, but the compose file could be extended to simulate a real cluster, as sketched below.
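
For example, a minimal two-node variant might look like the following (an untested sketch based on the zen discovery settings of Elasticsearch 6.x; note that discovery.type=single-node is dropped, and the service and cluster names are arbitrary):

   version: '3'
   services:
     elasticsearch:
       image: "docker.elastic.co/elasticsearch/elasticsearch:6.2.2"
       environment:
         - cluster.name=local-cluster
       ports:
         - "9200:9200"
     elasticsearch2:
       image: "docker.elastic.co/elasticsearch/elasticsearch:6.2.2"
       environment:
         - cluster.name=local-cluster
         # join the first node to form a two-node cluster
         - discovery.zen.ping.unicast.hosts=elasticsearch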

build and run

Then build and run the instance:

   docker-compose up --build

You should now have an instance running at http://localhost:9200.
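
You can check that it responds with curl; the root endpoint returns the cluster name and version, and _cat/health shows the cluster status:

   curl http://localhost:9200/
   curl http://localhost:9200/_cat/health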

Optional tools

cerebro

Cerebro is a web application for introspecting and administering your cluster's state.

Add a cerebro/Dockerfile:

   FROM openjdk:8-jre
   ENV CEREBRO_VERSION 0.7.2
   RUN wget https://github.com/lmenezes/cerebro/releases/download/v${CEREBRO_VERSION}/cerebro-${CEREBRO_VERSION}.tgz
   RUN tar xvf cerebro-${CEREBRO_VERSION}.tgz
   RUN mkdir cerebro-${CEREBRO_VERSION}/logs
   RUN mv cerebro-${CEREBRO_VERSION} /opt/
   EXPOSE 1234
   WORKDIR /opt/cerebro-${CEREBRO_VERSION}
   CMD ["bin/cerebro", "-Dhttp.port=1234"]

Now edit the initial docker-compose.yml and add the following under the services key:

   cerebro:
     container_name: cerebro
     build: cerebro
     ports:
       - "1234:1234"

Now:

   $ docker-compose up

should start both your elasticsearch instance and your cerebro instance.

Check that your local cerebro instance is running at http://localhost:1234/. You can now connect it to your elasticsearch 'cluster' (elasticsearch: http://<fqdn>:9200).
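
Note that cerebro itself runs inside the compose network, where elasticsearch is reachable by its service name rather than localhost; so the address to enter in cerebro's connect box is typically:

   http://elasticsearch:9200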

kibana

Kibana is a FOSS data-visualization frontend for Elasticsearch.

Edit the initial docker-compose.yml and add the following under the services key:

   kibana:
     # match the elasticsearch version above; the bare `kibana` image on
     # Docker Hub is deprecated and too old to talk to elasticsearch 6.x
     image: "docker.elastic.co/kibana/kibana:6.2.2"
     ports:
       - "5601:5601"

Now:

   $ docker-compose up

should start both your elasticsearch instance and your kibana (and cerebro) instance.

Check your local kibana instance running at http://localhost:5601/.
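
To have some data to explore in kibana, you can first index a test document (the index name test-index and the document body are just illustrative; elasticsearch 6.x requires the Content-Type header):

   curl -XPUT 'http://localhost:9200/test-index/doc/1' \
        -H 'Content-Type: application/json' \
        -d '{"message": "hello", "timestamp": "2018-03-29T12:00:00"}'

then create a matching index pattern under kibana's Management tab.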