Hamsa K

How to Install the ELK Stack on CentOS 7

Install Elasticsearch, Logstash, and Kibana on CentOS 7

Elasticsearch: an open-source, distributed full-text search and analytics engine. It stores the incoming logs shipped by Logstash and makes the data searchable.

Logstash: collects your log data, converts it into JSON documents, and stores them in Elasticsearch.

Kibana: provides the web interface used to inspect and analyze the logs.


Install Java

[root@lampblogs ~]# yum install java-1.8.0-openjdk

Check the Java version with the command below:

[root@lampblogs ~]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)

Step 1: Install Elasticsearch

First, we need to import the Elastic GPG key:

[root@lampblogs ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Now we will create the Elasticsearch repository file:

[root@lampblogs ~]# vi /etc/yum.repos.d/elasticsearch.repo

Paste the content below (the standard Elastic 6.x yum repository definition) into the file and save it:

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Elasticsearch with the following yum command:

[root@lampblogs ~]# yum install elasticsearch

Configure the elasticsearch.yml file with the settings below:

[root@lampblogs ~]# vi /etc/elasticsearch/elasticsearch.yml

Enable memory locking for Elasticsearch by uncommenting the line below:

bootstrap.memory_lock: true

Also uncomment the network.host and http.port lines:

network.host: localhost
http.port: 9200

Save the file and exit. Then edit the sysconfig file for Elasticsearch:

[root@lampblogs ~]# vi /etc/sysconfig/elasticsearch

Uncomment the line below and save the file:

MAX_LOCKED_MEMORY=unlimited

Now start and enable the Elasticsearch service with the following commands:

[root@lampblogs ~]# systemctl daemon-reload
[root@lampblogs ~]# systemctl restart elasticsearch
[root@lampblogs ~]# systemctl enable elasticsearch
[root@lampblogs ~]# systemctl status elasticsearch

If a firewall is running on your system, allow port 9200:

[root@lampblogs ~]# firewall-cmd --permanent --add-port=9200/tcp
[root@lampblogs ~]# firewall-cmd --reload

Now we will test with curl whether Elasticsearch is responding to queries.

[root@lampblogs ~]# curl -X GET http://localhost:9200
{
  "name" : "gx1V5VV",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qRd9EjcwT7WRczNCCjSEug",
  "version" : {
    "number" : "6.8.6",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "3d9f765",
    "build_date" : "2019-12-13T17:11:52.013738Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.2",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Elasticsearch is installed and working fine.
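Beyond the root endpoint, the standard cluster health API is another quick check; on a fresh single-node setup the status is typically yellow (no replicas assigned) or green:

```shell
# Optional check: query the Elasticsearch cluster health API
curl -X GET "http://localhost:9200/_cluster/health?pretty"
```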

Step 2: Install Logstash

Install Logstash with the yum command:

[root@lampblogs ~]# yum install logstash

After installing Logstash, we will create an SSL certificate to secure communication between Logstash and Filebeat (the clients). We can use either the FQDN or the IP address.

If you use the Logstash server hostname in the Beats configuration, make sure you have an A record for the Logstash server, and ensure that the client machine can resolve its hostname.
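On a client, getent is a simple way to check resolution; the hostname logstash.example.local below is a placeholder for your own FQDN:

```shell
# Check whether the Logstash hostname resolves on the client machine.
# "logstash.example.local" is a placeholder; substitute your actual FQDN.
getent hosts logstash.example.local \
  || echo "hostname does not resolve; add a DNS A record or an /etc/hosts entry"
```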

Here we are using the IP address to connect to the server, so we will create an SSL certificate for the IP.

Before creating the SSL certificate, we will add our IP to openssl.cnf:

[root@lampblogs ~]# vi /etc/pki/tls/openssl.cnf

Add a new line under the [ v3_ca ] section, like below:

subjectAltName = IP:your_server_ip

Save the file and exit. Replace your_server_ip with your server's IP address.

Now go to the openssl directory and generate the certificate file with the openssl command:

[root@lampblogs ~]# cd /etc/pki/tls
[root@lampblogs tls]# openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
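To confirm the certificate came out as expected, you can inspect its subject and validity period with openssl x509. The sketch below repeats the generation step in a scratch directory (with a placeholder CN, not taken from the setup above) so the inspection can be seen end to end:

```shell
# Sketch: generate a self-signed cert in a scratch directory, then inspect it.
# "logstash.example.local" is a placeholder subject for illustration only.
mkdir -p /tmp/elk-ssl-check && cd /tmp/elk-ssl-check
openssl req -x509 -days 365 -batch -nodes -newkey rsa:2048 \
  -subj "/CN=logstash.example.local" \
  -keyout logstash-forwarder.key -out logstash-forwarder.crt
# Print the certificate's subject and validity window
openssl x509 -in logstash-forwarder.crt -noout -subject -dates
```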

Once the certificate is ready, copy it to all the clients.
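For example, with scp (the client IP 192.168.1.10 below is hypothetical):

```shell
# Hypothetical example: copy the certificate to a Filebeat client machine.
# Replace 192.168.1.10 with your client's actual IP address.
scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.1.10:/etc/pki/tls/certs/
```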

Step 3: Configure Logstash

Now we will create the Logstash configuration file under /etc/logstash/conf.d.

This file is organized into input, filter, and output sections.

The input section makes Logstash listen on port 5044 for incoming logs from Beats.

[root@lampblogs ~]# vi /etc/logstash/conf.d/logstash.conf

Paste the content below into the file. You can remove the SSL-related lines if you are not using SSL.

# input section
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

After the input section, we will configure the filter section. This parses the logs before sending them to Elasticsearch.

# filter section
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Finally, the output section defines where the logs are stored, which here is Elasticsearch.

# output section
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}

Save the file, then start Logstash and enable it to start at boot time.

[root@lampblogs ~]# systemctl daemon-reload
[root@lampblogs ~]# systemctl start logstash
[root@lampblogs ~]# systemctl enable logstash
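If Logstash fails to start, the bundled binary can check the pipeline syntax without starting the service; the path below is the default location for an RPM install:

```shell
# Validate the pipeline configuration, then exit (reports whether it is valid)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```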

If you are using a firewall, allow port 5044:

[root@lampblogs ~]# firewall-cmd --permanent --add-port=5044/tcp
[root@lampblogs ~]# firewall-cmd --reload

Step 4: Install Kibana

Install Kibana with the following yum command:

[root@lampblogs ~]# yum install kibana

Once it is installed, edit the kibana.yml file:

[root@lampblogs ~]# vi /etc/kibana/kibana.yml

Add the server IP so Kibana can be accessed from external machines, and also uncomment the Elasticsearch URL line, like below. Replace your_server_ip with your server's IP address.

server.host: "your_server_ip"
elasticsearch.hosts: ["http://localhost:9200"]

Now start the Kibana service and enable it to start at boot time:

[root@lampblogs ~]# systemctl start kibana
[root@lampblogs ~]# systemctl enable kibana
[root@lampblogs ~]# systemctl status kibana

If the firewall is running, allow port 5601:

[root@lampblogs ~]# firewall-cmd --permanent --add-port=5601/tcp
[root@lampblogs ~]# firewall-cmd --reload
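Once Kibana is up (it can take a minute or two to initialize), a quick curl from the server confirms it is listening on its default port:

```shell
# Confirm Kibana is answering on port 5601
curl -I http://localhost:5601
```

You can then browse to http://your_server_ip:5601 to open the Kibana web interface.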


