How to use ELK stack
Matěj Račinský

In this article, I'll show you how to use the ELK stack for logging. We will run ELK in Docker for an easy setup.
What is ELK Stack
The ELK stack (now known as the Elastic Stack) stands for Elasticsearch, Logstash, Kibana. Together, these three technologies provide powerful log management.
Logstash manages the logs: it collects, parses, and stores them. There are many existing inputs, filters, and outputs.
Elasticsearch is a powerful, distributed NoSQL database with a REST API. In the ELK stack, Elasticsearch serves as the persistent storage for our logs.
And Kibana visualizes the logs from Elasticsearch and lets you create many handy visualizations and dashboards, which allow you to see all important metrics in one place.
How to run ELK Stack: Theory
The simplest way to install ELK is with Docker. Because ELK consists of three services (obviously), you'll want to use docker-compose to manage them with ease. There are many docker-compose projects for ELK on GitHub; I use this repository. Note that the link points to the searchguard branch of the git repository. The readme of the repository describes how to run the ELK stack, so be sure to check it out for further use.
Security Note: Use Searchguard
The ELK stack does not offer authentication out of the box. Searchguard is a plugin for Elasticsearch which adds authentication. You should not run ELK without any kind of authentication!
Note that you need to initialize Searchguard after starting the services with docker-compose up, as described here. Without this step, Searchguard will not work.
There is no need to configure Elasticsearch for this example, since it can infer the schema from the data sent to it, so in this article we will configure only Logstash (apart from the Searchguard configuration).
Logstash keeps its inputs and outputs configuration in logstash.conf. Note that the directory containing logstash.conf is mounted as a volume in docker-compose.yml, so there is no need to rebuild the image when logstash.conf changes.
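For illustration, a minimal logstash.conf could look like the sketch below. The tcp input on port 5000 and the elasticsearch host name are assumptions for this sketch, not necessarily what the repository ships; adjust them to your setup:

```
input {
  tcp {
    # listen for log lines on TCP port 5000 (hypothetical choice)
    port => 5000
  }
}

filter {
  # parse JSON payloads out of the raw message field
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    # inside docker-compose, "elasticsearch" resolves to the service name
    hosts => ["elasticsearch:9200"]
  }
}
```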
Also, by default the Elasticsearch data is not mapped to the host OS, so we will modify the volumes section in docker-compose.yml to map the data directory from Elasticsearch.
That was some theory, and now let's try it in practice.
How to run ELK Stack: Practice
- Switch to the searchguard branch:

```shell
git checkout searchguard
```

- Update the docker-compose.yml file to persist Elasticsearch data: add a mapping for ./elasticsearch/data to the volumes section of the elasticsearch service, so that the volumes section looks like this:

```yaml
volumes:
  - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  - ./elasticsearch/data:/usr/share/elasticsearch/data
```
- Run the stack:

```shell
docker-compose up
```

- Initialize Searchguard:

```shell
docker-compose exec -T elasticsearch bin/init_sg.sh
```
Now Elasticsearch runs on port 9200 and Kibana runs on port 5601.
As you may have noticed, both services ask you to log in. That's because the searchguard branch ships Elasticsearch with the Searchguard plugin installed. To try it out, you can use the username admin and the password admin.
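To check that authentication works, you can also query Elasticsearch from the command line. This is a sketch assuming the default port and credentials from this setup and that curl is installed:

```shell
# HTTP Basic auth sends "Basic " + base64("user:password") in the
# Authorization header; curl -u builds this header for you:
printf 'admin:admin' | base64
# prints: YWRtaW46YWRtaW4=

# Query Elasticsearch with the default credentials (requires the stack
# to be running; "|| true" keeps the sketch going if it is not):
curl -u admin:admin http://localhost:9200 || true
```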
So now we have the ELK stack fully working!
Of course, you don't want to use your ELK stack with these users without changing their passwords, because publicly known passwords do not offer much security.
Searchguard Configuration - Changing Passwords
When you run the ELK stack with Searchguard, it already contains some predefined users with configured privileges. You can see these users in the Searchguard documentation.
Searchguard users are stored in the file elasticsearch/config/sg_internal_users.yml. Its contents are:

```yaml
admin:
  hash: $2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG
  #password is: admin
logstash:
  hash: $2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2
  #password is: logstash
kibanaserver:
  hash: $2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H.
  #password is: kibanaserver
kibanaro:
  hash: $2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC
  #password is: kibanaro
```
Now we'll change these passwords. As we can see, the passwords are not stored in plaintext in this yml file; a bcrypt hash is used instead. We can generate a hash for a new password by running the hashing script in our Docker container.
The hashing script lives in the elasticsearch service, because that is where Searchguard is installed. By default, elasticsearch/config/sg_internal_users.yml is not mapped as a volume; it is used only during the build of the Elasticsearch image. We will mount this file as a volume in docker-compose.yml so we can change passwords without needing to rebuild the image.
So to change passwords:
- Add a mapping for ./elasticsearch/config/sg_internal_users.yml to the volumes section of the elasticsearch service. Now the volumes section looks like this:

```yaml
volumes:
  - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  - ./elasticsearch/data:/usr/share/elasticsearch/data
  - ./elasticsearch/config/sg_internal_users.yml:/usr/share/elasticsearch/config/sg_internal_users.yml
```
- Generate a hash of the new password:

```shell
docker-compose run elasticsearch plugins/search-guard-5/tools/hash.sh -p [some_password]
```

- Replace the default hashes in sg_internal_users.yml with the new ones.
- Restart the stack:

```shell
docker-compose down
docker-compose up
```
Congratulations, now you have your ELK stack running and secure.
Log in using the admin account from Searchguard, because Kibana uses its own credentials to access Elasticsearch; otherwise you could have problems accessing your logs.
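One caveat when changing the kibanaserver password: Kibana itself authenticates to Elasticsearch with that account, so the new password must also be set in Kibana's configuration. A sketch of the relevant kibana.yml keys; whether and where the repository mounts this file is an assumption you should verify against its readme:

```yaml
# kibana.yml – credentials Kibana uses to talk to Elasticsearch
elasticsearch.username: "kibanaserver"
elasticsearch.password: "[your_new_password]"
```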