Spring Boot Logs in Docker and Kubernetes: Centralized Logging Guide
Effective logging is a critical component of modern application operations, especially when dealing with containerized environments like Docker and orchestrators like Kubernetes. Logs serve as the primary source of real-time insights for debugging, monitoring, and improving performance. However, the ephemeral nature of containers complicates the logging process, necessitating centralized solutions that can handle distributed systems efficiently.
This guide outlines key steps and best practices for managing Spring Boot logs in Docker and Kubernetes, exploring various tools and configurations to streamline centralized logging.
Where Spring Boot Logs Go in Docker
When running a Spring Boot application in Docker, logs are written to the standard output (stdout) and standard error (stderr) streams by default. These logs can be accessed with the Docker CLI.
Logging in Docker
By default, Spring Boot directs all logging output to the console, which maps to the container's stdout stream when the application runs inside Docker.
Example Log Output (Plain Text):
2025-06-21 10:00:00 INFO  [main] com.example.MyService - Application started successfully
2025-06-21 10:01:00 DEBUG [main] com.example.Logger - Fetching records from database
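For reference, here is a minimal sketch of the kind of code that emits the first line above, assuming SLF4J (Spring Boot's default logging facade); MyService is illustrative:
package com.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical service; anything written through SLF4J reaches the container's stdout by default.
public class MyService {

    private static final Logger log = LoggerFactory.getLogger(MyService.class);

    public void start() {
        log.info("Application started successfully");
    }
}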
Access Logs Using Command Line:
docker logs <container-id>
Here, <container-id> refers to the ID or name of the container running your Spring Boot application.
Logging Formats (JSON vs Plain Text)
Structured logging improves log analysis by enabling automatic parsing and querying of log data. Two common formats for log output in Spring Boot are plain text and JSON.
Plain Text Logging
- Default format in Spring Boot.
- Easy to read manually but difficult to process programmatically.
Example Plain-Text Log Entry:
2025-06-21 11:00:00 INFO com.example.MyService - Service started
JSON Logging
JSON is ideal for centralized logging tools like Elasticsearch or Loki, as it provides key-value pairs that are easy to parse.
Example JSON Log Entry:
{ "timestamp": "2025-06-21T11:00:00", "level": "INFO", "logger": "com.example.MyService", "message": "Service started" }
Configuring JSON Logging in Spring Boot
To enable JSON logging, add the logstash-logback-encoder dependency to your build and configure Logback in logback-spring.xml:
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>
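With the LogstashEncoder active, individual log events can also carry structured key-value fields. A minimal sketch, assuming logstash-logback-encoder is on the classpath; OrderService and its parameters are illustrative:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static net.logstash.logback.argument.StructuredArguments.kv;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void process(String orderId, long elapsedMs) {
        // Each kv(...) pair is emitted as its own JSON field in the encoded event.
        log.info("Order processed", kv("orderId", orderId), kv("elapsedMs", elapsedMs));
    }
}
Because each pair becomes a top-level JSON field, values like orderId are directly queryable in Elasticsearch or Loki rather than buried in the message text.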
Viewing Logs via docker logs and kubectl logs
Accessing logs in Docker and Kubernetes environments requires different commands.
Access Logs in Docker
docker logs <container-id> --follow
The --follow (or -f) option streams logs in real time.
Access Logs in Kubernetes
To view logs of a specific pod:
kubectl logs <pod-name>
For multi-container pods, specify the container:
kubectl logs <pod-name> -c <container-name>
To inspect output from a crashed container's previous run, add the --previous flag.
Aggregating Logs Using Fluentd, Elasticsearch, and Kibana (EFK)
The EFK stack centralizes logs and allows advanced querying and visualization.
Step 1. Install Fluentd
Fluentd collects logs from containers and streams them to Elasticsearch.
To start Fluentd, use a pre-configured Helm chart:
helm repo add fluent https://fluent.github.io/helm-charts
helm install fluentd fluent/fluentd
Step 2. Configure Elasticsearch
Elasticsearch serves as the log storage backend. Install Elasticsearch using Helm:
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch
Step 3. Visualize Logs in Kibana
Kibana connects to Elasticsearch and provides dashboards for querying logs.
Deploy Kibana via Helm:
helm install kibana elastic/kibana
Access Kibana through its Kubernetes service (for example, with kubectl port-forward) and configure index patterns for log exploration.
Using Loki and Grafana as a Lightweight Alternative
The Loki + Grafana stack is a lightweight logging solution that integrates seamlessly with Kubernetes.
Step 1. Install Loki and Promtail
Loki aggregates logs, while Promtail acts as an agent to forward logs from containers.
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack
Step 2. Set Up Grafana
Configure Grafana to connect to Loki for log visualization:
helm install grafana grafana/grafana
Configuring Spring Boot Logback for Containerized Environments
Spring Boot applications inside containers benefit from tailored configurations. For example:
Logback for JSON Logging
<encoder class="net.logstash.logback.encoder.LogstashEncoder" />
Rolling File Configuration
If logs are also written to files (for example, on a mounted volume), log rotation keeps file growth bounded:
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/application.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>logs/application-%d{yyyy-MM-dd}.log</fileNamePattern>
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
Using MDC for Tracing Logs (traceId, spanId)
Mapped Diagnostic Context (MDC) provides traceability in distributed systems by adding metadata.
Example Code:
MDC.put("traceId", requestId); MDC.put("spanId", spanId); log.info("Processing request"); MDC.clear();
Enhance logback-spring.xml to include MDC fields in the log pattern:
<pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] [%X{traceId}] %logger{36} - %msg%n</pattern>
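In a Spring Boot service, these MDC fields are typically populated once per request. A minimal sketch using a servlet filter (Spring Boot 3 / jakarta.servlet), assuming no tracing library such as Micrometer Tracing is already managing trace IDs for you; TraceIdFilter is a hypothetical class:
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.UUID;

// Hypothetical filter: tags every log line written during a request with a generated traceId.
@Component
public class TraceIdFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        MDC.put("traceId", UUID.randomUUID().toString());
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("traceId"); // avoid leaking the value to the next request on this thread
        }
    }
}
Combined with the pattern above, every log line written while the request is being handled then carries the same traceId.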
Injecting Pod Metadata into Logs
Include Kubernetes pod metadata like podName and namespace for better traceability.
Promtail Configuration Example for Kubernetes Metadata Enrichment:
Promtail discovers pods via Kubernetes service discovery and attaches metadata through relabeling. A minimal sketch of the relevant scrape_configs section (the target label names pod and namespace are illustrative):
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
Redacting Sensitive Logs (e.g., Auth Headers)
Prevent sensitive data from being logged by masking or filtering logs:
log.info("User token validated successfully for userId {}", userId.replaceAll(".(?=.{4})", "*"));
Alerting on Error Logs
Set up PromQL or LogQL queries in Grafana to detect error patterns and trigger alerts.
Example Query in Loki (assumes a level label is attached to the log streams):
{level="error"} |= "NullPointerException"
Configure notifications via Slack, email, or PagerDuty for real-time alerts.
FAQ
How do I collect logs from Kubernetes pods?
Use kubectl logs <pod-name>. For multi-container pods, specify the container name with the -c flag.
Why use JSON logging over plain text?
JSON logs are structured and easier to query in centralized logging systems.
What is the advantage of the EFK stack?
EFK offers powerful querying and scalability for centralized logging in distributed environments.
How can I avoid sensitive data being logged?
Mask sensitive fields at the application level and configure filtering in Fluentd or Promtail.
How does MDC enable traceability?
MDC allows consistent trace IDs and span IDs across microservices for easier debugging.
By leveraging these approaches, you can achieve robust centralized logging for Spring Boot applications in Docker and Kubernetes, improving both observability and operational efficiency.