Filebeat is a lightweight shipper that sends your Docker container application logs to Logstash and Elasticsearch. Configure Filebeat using the pre-defined examples below to start sending and analysing your Docker application logs. To deploy our stack, we'll use a pre-installed Linux Ubuntu 18.04 LTS with Docker CE 17.12.0, Elasticsearch 6.2.4, and Kibana 6.2.4. On Linux, Docker writes each container's log files into the Docker data directory on the host.
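As a starting sketch, a minimal filebeat.yml along these lines reads the per-container JSON log files that the default json-file logging driver writes on Linux and ships them to Logstash. The Logstash host is a placeholder, and note that key names shifted across Filebeat versions (6.2 still used filebeat.prospectors where later releases use filebeat.inputs), so adapt to your version:

```yaml
filebeat.inputs:
  - type: log
    paths:
      # Standard location of Docker's json-file driver output on Linux
      - /var/lib/docker/containers/*/*.log
    # Each line is a JSON object; the actual log line lives under "log"
    json.message_key: log

output.logstash:
  # Placeholder -- point this at your Logstash beats/lumberjack endpoint
  hosts: ["localhost:5044"]
```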
Yesterday, I gave a talk on how I use Docker to deploy applications at Clermont'ech API Hour #12, a French local developer group. I explained how to create a simple yet robust infrastructure to deploy a web application and a few services with zero downtime.
In order to monitor my infrastructure, and especially the HTTP responses, I gave the famous ELK stack a try. ELK stands for Elasticsearch, Logstash, Kibana. As I did not really talk about this part, I am going to explain it in this blog post.
I wrote a Dockerfile to build an ELK image. While you can directly use this image to run a container (mounting a host folder as a volume for the configuration files), you should probably extend it to add your own configuration so that you can get rid of this mapping to a host folder. This is one of the Docker best practices. Last but not least, Elasticsearch's data are located in /data. I recommend that you use a data-only container to persist these data.
In my opinion, such a stack should run on its own server, which is why its logstash configuration should only receive logs from the outside (the production environment, for instance) and send them to Elasticsearch. In other words, we need a tool to collect logs in production and process them elsewhere. Fortunately, that is exactly the goal of the logstash-forwarder (formerly lumberjack) project!
Below is an example of a logstash configuration that processes logs received on port 5043 thanks to the lumberjack input, and persists them into Elasticsearch. You may notice that Hipache logs are filtered (I actually took this configuration from my production server :p).
It is worth mentioning that the lumberjack protocol requires SSL: you need a certificate and key pair on the logstash side, and logstash-forwarder must trust that certificate.
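A self-signed certificate is enough here. Something like the following generates the pair — the CN is a placeholder, and it must match the hostname that logstash-forwarder will connect to:

```shell
# Generate a self-signed certificate + key for the lumberjack transport.
# The CN must match the server name used in logstash-forwarder's config.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt \
  -subj "/CN=logstash.example.com"
```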
Back to the production environment, I wrote a Dockerfile to run logstash-forwarder. You need the same set of SSL files as seen previously in the logstash configuration, and a configuration file for logstash-forwarder. Then again, using this image as a base image is recommended, but for testing purposes, we can mount host folders as volumes:
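For instance, assuming the image is tagged logstash-forwarder and the SSL files and config.json live in host folders (every name and path below is a placeholder to adapt), the run command can look like:

```shell
docker run -d \
    -v /path/to/certs:/etc/ssl \
    -v /path/to/config.json:/etc/logstash-forwarder/config.json \
    logstash-forwarder
```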
The config.json file contains the following content. It tells logstash-forwarder to send the hipache logs (found in
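The shape of that file is roughly as follows — the server hostname, certificate path, and log file path are all placeholders for your own values:

```json
{
  "network": {
    "servers": ["logstash.example.com:5043"],
    "ssl ca": "/etc/ssl/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": ["/var/log/hipache.log"],
      "fields": { "type": "hipache" }
    }
  ]
}
```

The "type" field set here is what the logstash filter can match on to apply the hipache-specific grok.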
Then again, having data-only containers everywhere would be better, even for logs (and you would use --volumes-from datalogs, for instance).
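That pattern boils down to a stopped container that owns the volume, shared by any container that needs it (names here are illustrative):

```shell
# A container whose only job is to own the /var/log volume;
# it runs "true" and exits, but the volume persists.
docker run --name datalogs -v /var/log busybox true

# Any container started with --volumes-from shares that volume.
docker run -d --volumes-from datalogs my-app-image
```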
You are all set! You can now create your own dashboards in Kibana. Here is mine, monitoring the HTTP responses of the load balancer:
Need inspiration? Watch this video if you speak French…
The Twelve-Factor App has a point saying that logs should be sent to stdout/stderr, which does not always seem possible to me, but if you do that, then you will probably be interested in logspout.
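logspout attaches to the Docker daemon and forwards every container's stdout/stderr to a remote endpoint; a typical invocation (the image name and syslog endpoint below are assumptions, not something from my setup) looks like:

```shell
docker run -d --name logspout \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout \
    syslog://logs.example.com:5000
```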
Don’t hesitate to share your point of view :-)
Hi Logstash Ninja Masters,
I recently spun up a Logstash Docker container (v7.4.0) to replace a local-app version of LS on the same Ubuntu box. I'm confused about how to instruct the Docker LS to use the configuration that the local-app LS used.
My host machine is Ubuntu 16.04.4, and my Docker version is 17.09.0-ce. For the moment, I want a simple standalone LS Docker instance. By that, I mean I don't want to worry about pipeline files, and I don't want to use docker-compose. Here's my docker run command:
That second line is my clumsy attempt to tell Docker LS, “Hey, be sure to import and use this local config file, okay?”
The container comes up great:
So… the container comes up, but I don't think it's using my config file.
How do I force the Docker LS to do that? I’ve tried copying the desired file into the container’s
/usr/share/logstash/config/ directory; no effect. Can I do that and somehow restart the service?
Some other questions I have that aren't addressed in the documentation… From running the local version of LS, I learned that you launch the LS service like this:
bin/logstash -r -f config/myConfig.conf
Basically, you execute the logstash app, supplying your config file as input. The config file has a .conf suffix. So why does the LS Docker documentation talk about .yml files as config files? Are they the same?
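From what I can gather (and I may well be wrong), the .yml files are Logstash settings (logstash.yml, pipelines.yml), while the .conf files are pipeline definitions, and the official image loads pipelines from /usr/share/logstash/pipeline/. If that's right, the equivalent of my local command would be a bind mount along these lines (paths are mine, adjust as needed):

```shell
docker run --rm -it \
    -v "$PWD/myConfig.conf:/usr/share/logstash/pipeline/myConfig.conf" \
    docker.elastic.co/logstash/logstash:7.4.0
```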
Final Jeopardy Question: Once my config file is (hopefully) used, how do I verify Docker LS is running it?
Anyway, if you can see the error of my ways, I’d appreciate any help you can offer. Thanks!