Spring Boot logging with Loki, Promtail, and Grafana (Loki stack)

Artur Bartosik
5 min read · Jan 6, 2023


In the previous article, I presented how to set up a monitoring stack using Prometheus Operator and integrate it with a sample Spring Boot app. This post is analogous to the previous one, but covers another important topic: logs.

We will use a Spring Boot application in the demo. However, you will be able to configure any other app by following this article. The only thing you need to ensure is that your app is configured to produce logs in JSON format.

Configure Spring Boot to produce JSON logs

This is a GitHub link to my demo app. It's a simple Spring Boot web app used to debug various stuff. There are many ways to configure JSON logging in Spring Boot. I decided to use Logback because it is easy to configure and one of the most widely used logging libraries in the Java community. To enable JSON logging we need to add the dependencies below.

implementation("ch.qos.logback.contrib:logback-json-classic:0.1.5")
implementation("ch.qos.logback.contrib:logback-jackson:0.1.5")
implementation("org.codehaus.janino:janino:3.1.9")

The Janino dependency additionally adds support for conditional processing in the logback.xml configuration file. Thanks to this, we can make our configuration parameterized: use standard log output when running the application locally, but enable JSON logging only when the application is running in a Kubernetes Pod, by injecting the proper env variable.
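Here is a minimal logback.xml sketch of that idea, assuming the variable is named JSON_LOGS_ENABLED as in the demo (my actual configuration file, linked below, is more elaborate):

<configuration>
  <!-- resolve the env variable into a logback property, defaulting to false -->
  <property name="JSON_LOGS_ENABLED" value="${JSON_LOGS_ENABLED:-false}"/>

  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <!-- Janino makes this conditional block possible -->
    <if condition='property("JSON_LOGS_ENABLED").equals("true")'>
      <then>
        <!-- JSON output via logback-contrib, serialized with Jackson -->
        <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
          <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter"/>
          <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
        </layout>
      </then>
      <else>
        <!-- human-readable output for local development -->
        <encoder>
          <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </else>
    </if>
  </appender>

  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>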

Here you can find my Logback configuration file. This is all you have to configure in the Spring Boot app to enable JSON logging. When you run the application with the env variable JSON_LOGS_ENABLED=true, logs should be printed in JSON format. You can find more information about Logback Layouts, the component responsible for formatting log messages, in the Logback documentation.
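For reference, with this setup each log event is printed as a single JSON line, roughly like this (an illustrative line; the exact fields depend on the layout configuration):

{"timestamp":"2023-01-06T12:34:56.789Z","level":"INFO","thread":"main","logger":"io.github.luafanti.SpringDebugApp","message":"Started application","context":"default"}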

Grafana Loki stack

The Loki stack consists of three main components:

  • Loki — log aggregation system responsible for storing the logs and processing queries.
  • Promtail — lightweight agent responsible for gathering logs and pushing them to Loki. You can compare it to Fluentbit or Filebeat.
  • Grafana — visualization layer responsible for querying and displaying the logs on dashboards.

Loki stack — Promtail & Loki & Grafana

In its Helm repository, Grafana Labs provides a chart that installs the Loki stack, optionally together with other complementary tools like Logstash or Prometheus.
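If you just want to try the chart with plain Helm, the installation looks roughly like this (the release name and namespace are example values; grafana.enabled is set because the chart does not install Grafana by default):

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install loki-stack grafana/loki-stack -n logging --create-namespace --set grafana.enabled=true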

Quick installation with helmfile

If you haven't used helmfile yet, I strongly encourage you to check out this tool. In my previous article about monitoring, I described why helmfile is worth using. Here I'll just leave the Installation Gist for you.

First, clone my repo with the Loki stack helmfile, and check how little configuration is needed to install everything. This is because the Loki stack installation comes with reasonably safe defaults wherever possible, so we only have to overwrite some crucial values. To install it, we need to execute a single command.

helmfile apply -i
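For reference, the heart of such a helmfile.yaml looks roughly like this (a stripped-down sketch: the values file path is illustrative, and the real file also defines the grafana-dashboards and demo releases):

repositories:
  - name: grafana-labs
    url: https://grafana.github.io/helm-charts

releases:
  - name: loki-stack
    namespace: logging
    chart: grafana-labs/loki-stack
    version: 2.8.9
    values:
      - ./values/loki-stack.yaml   # overrides for Promtail scrape config, Grafana, etc.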

After a short while, you should see a message that you have successfully installed three releases.

UPDATED RELEASES:
NAME                 CHART                             VERSION
loki-stack           grafana-labs/loki-stack           2.8.9
grafana-dashboards   local-charts/grafana-dashboards   1.0.0
demo                 luafanti/spring-debug-app         1.0.0

Explore logs in Grafana and understand Promtail scraping

Establish a tunnel to Grafana and check if the preinstalled dashboards with logs show data. If you want to know how my local chart grafana-dashboards adds dashboards to Grafana, I refer you to the corresponding paragraph in my previous article.

# establish tunnel to Grafana Service
kubectl port-forward -n logging svc/loki-stack-grafana 3000:80

# get Grafana Credentials from Secrets
kubectl get secrets -n logging loki-stack-grafana --template='{{index .data "admin-password" | base64decode}}'
kubectl get secrets -n logging loki-stack-grafana --template='{{index .data "admin-user" | base64decode}}'

You should be able to see logs like the ones below in the custom dashboard.

Grafana logging dashboard

Thanks to custom variables that use labels, we can create various filters for the dashboard. You can look up my configuration of variables and extend it in an analogous way for your own needs. At the top, I marked the filter with the detected pods in the selected namespace. In the lower part, you can see a preview of all labels that are associated with a single log line.

Most labels are meta information that Promtail adds while scraping targets. This part of the Promtail configuration provides them. In this section, I also marked a few labels that do not come out of the box, e.g. level, class, thread. We added these labels using the Promtail json stage. You need to know that Promtail processes scraped logs in a pipeline, and a pipeline is comprised of a set of stages. The json stage is a parsing stage that reads the log line as JSON and accepts JMESPath expressions to extract data.

- job_name: custom-config
  pipeline_stages:
    - docker: {}
    - json:
        expressions:
          timestamp: timestamp
          level: level
          thread: thread
          class: logger
          message: message
          context: context
    - labels:
        level:
        class:
        context:
        thread:
    - timestamp:
        format: RFC3339
        source: timestamp
    - output:
        source: message
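With these labels in place, you can also query the logs directly in Grafana's Explore view using LogQL, Loki's query language. For example, a query like the one below (the namespace value comes from this demo setup) returns only error logs from the sandbox namespace that contain the word Exception:

{namespace="sandbox", level="ERROR"} |= "Exception"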

One very important thing❗ As you can see in the config, above the json stage I have added the docker stage. This stage can properly read logs from Kubernetes nodes that use the Docker container runtime. If the nodes in your cluster use a different container runtime, e.g. containerd (quite popular for managed Kubernetes clusters such as EKS or AKS), you have to replace this stage with the cri stage.

- job_name: custom-config
  pipeline_stages:
    - cri: {}
    - json:
        ### rest of config
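The difference comes from the on-disk log format. With the Docker runtime, each container log line is a small JSON document, while with containerd each line follows the CRI format: a timestamp, the stream name, a partial/full flag, and only then the actual log line. An illustrative containerd log line (not captured from a real cluster) looks like this:

2023-01-06T12:00:00.000000000Z stdout F {"timestamp":"...","level":"INFO","message":"..."}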

BTW, the Docker container runtime has already been deprecated in Kubernetes. Its successor is containerd, which was also designed by Docker. Containerd is simplified compared to its predecessor: it offers a minimal set of functionality for managing images and executing containers on a node.

Dealing with multi-line stack traces of Java

If you have ever worked with Java application logs, you know that stack traces are painful. Not only because they are often misunderstood, but also because they are multi-line. When a multi-line event is written to a log output, Promtail, like many other scrapers, will take each row as its own entry and send them separately to Loki. I have good news for you. The prepared configuration solves this problem thanks to the Spring Boot JSON log output and the appropriate Promtail configuration. You can verify it easily by following these steps.

# establish tunnel to Spring Boot Service
kubectl port-forward -n sandbox svc/demo-spring-debug-app 8080:8080

# call Spring Boot debug endpoint to produce exception log
curl http://localhost:8080/logs/exception

Go back to Grafana and verify that stack traces are printed correctly.
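If everything works, the whole exception arrives in Loki as a single entry, because Logback escaped it into one JSON line before Promtail ever saw it. A shortened, illustrative event (exact field names depend on the layout configuration) looks like this:

{"timestamp":"2023-01-06T12:00:00.000Z","level":"ERROR","thread":"http-nio-8080-exec-1","logger":"io.github.luafanti.DebugController","message":"Unexpected error","exception":"java.lang.RuntimeException: boom\n\tat ..."}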

Closing words

In this and the previous article, we covered two very important aspects of application observability: monitoring and logging. The only thing missing for the full trinity is tracing. Tracing is especially important in microservice or nanoservice (serverless) architectures. In this area, Grafana Labs also has an interesting solution that is worth checking out: Tempo. In the next part of this series, I will try to introduce this tool in a similar way.


Artur Bartosik

DevOps & Serverless Enthusiast. AWS, GCP, and K8S certified. Home page: https://abartosik.dev