# ELK Log Analysis System

## Overview

The ELK Stack is an open-source log analysis platform built from three main components:

- Elasticsearch: a distributed search engine that stores and indexes log data
- Logstash: a log collection and processing pipeline
- Kibana: a web UI for visualizing and exploring the data
## Architecture

```text
┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐
│ Spring Boot │   │  Logstash   │   │Elasticsearch│   │   Kibana    │
│ application │──▶│ (collect &  │──▶│ (store and  │◀──│ (visualize) │
│    logs     │   │  process)   │   │   index)    │   │             │
└─────────────┘   └─────────────┘   └─────────────┘   └─────────────┘
```
## Installation

### 1. Requirements

- Java 8 or later
- At least 4 GB of RAM
- Disk space: 50 GB or more recommended
### 2. Download and Install

#### Elasticsearch

```bash
# Download Elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.0-linux-x86_64.tar.gz

# Extract
tar -xzf elasticsearch-8.11.0-linux-x86_64.tar.gz
cd elasticsearch-8.11.0

# Start Elasticsearch
./bin/elasticsearch
```
#### Logstash

```bash
# Download Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.11.0-linux-x86_64.tar.gz

# Extract
tar -xzf logstash-8.11.0-linux-x86_64.tar.gz
cd logstash-8.11.0
```
#### Kibana

```bash
# Download Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.11.0-linux-x86_64.tar.gz

# Extract
tar -xzf kibana-8.11.0-linux-x86_64.tar.gz
cd kibana-8.11.0

# Start Kibana
./bin/kibana
```
### 3. Quick Install with Docker

```yaml
# docker-compose.yml
version: "3.8"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/config:/usr/share/logstash/config
      - ./logstash/pipeline:/usr/share/logstash/pipeline

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200

volumes:
  elasticsearch-data:
```
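With this compose file in place, the stack can be brought up and smoke-tested roughly like this (assuming a recent Docker with the `compose` plugin):

```shell
# Start all three services in the background
docker compose up -d

# Watch Elasticsearch while it starts
docker compose logs -f elasticsearch

# Once it is up, this should return cluster info as JSON
curl http://localhost:9200
```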
## Configuration

### Elasticsearch

```yaml
# config/elasticsearch.yml
cluster.name: elk-cluster
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
xpack.security.enabled: false
```
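Recent Elasticsearch versions size the heap automatically, but on a machine at the 4 GB minimum it is often worth pinning it explicitly (rule of thumb: at most half of physical RAM). A sketch, placed e.g. in `config/jvm.options.d/heap.options`:

```text
-Xms2g
-Xmx2g
```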
### Logstash

```yaml
# config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
```

```conf
# pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  # Filebeat below sets fields_under_root: true, so "service" is a top-level field
  if [service] == "spring-boot" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}" }
      # Replace the raw line with the extracted message instead of appending to it
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "spring-boot-logs-%{+YYYY.MM.dd}"
  }
}
```
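To sanity-check what that grok pattern captures, here is a rough Java translation of `%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}` as a plain regex (a simplified sketch; the real grok patterns are more permissive than this):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrokSketch {
    // Simplified equivalents of TIMESTAMP_ISO8601, LOGLEVEL and GREEDYDATA
    static final Pattern LOG_LINE = Pattern.compile(
            "(?<timestamp>\\d{4}-\\d{2}-\\d{2}[ T]\\d{2}:\\d{2}:\\d{2}\\.\\d{3})\\s+"
            + "(?<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\\s+"
            + "(?<message>.*)");

    /** Returns {timestamp, level, message}, or null when the line does not match. */
    static String[] parse(String line) {
        Matcher m = LOG_LINE.matcher(line);
        if (!m.matches()) {
            return null;
        }
        return new String[] { m.group("timestamp"), m.group("level"), m.group("message") };
    }

    public static void main(String[] args) {
        String line = "2024-01-15 10:23:45.123 INFO Started ElkDemoApplication in 2.5 seconds";
        String[] fields = parse(line);
        System.out.println(fields[0]); // 2024-01-15 10:23:45.123
        System.out.println(fields[1]); // INFO
        System.out.println(fields[2]); // Started ElkDemoApplication in 2.5 seconds
    }
}
```

Lines that do not start with a timestamp (stack trace continuations, for example) fail the match, which in the real pipeline shows up as a `_grokparsefailure` tag on the event.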
### Kibana

```yaml
# config/kibana.yml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
```
## Spring Boot Integration

### 1. Maven Dependencies

```xml
<dependencies>
    <!-- Spring Boot web starter -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Logback (already provided transitively by the starter; listed for clarity) -->
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
    </dependency>
    <!-- Logstash Logback encoder for JSON-formatted logs -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>7.4</version>
    </dependency>
</dependencies>
```
### 2. Logback Configuration

```xml
<!-- src/main/resources/logback-spring.xml -->
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeMdc>true</includeMdc>
            <includeContext>false</includeContext>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/application.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/application.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <!-- Plain-text pattern matching the grok filter in logstash.conf;
             a JSON encoder here would make the grok filter fail to parse the file -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %level %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </root>
</configuration>
```
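For reference, a single event written by `LogstashEncoder` on the console appender looks roughly like this (an illustrative example; the exact field set depends on the encoder version and configuration):

```json
{
  "@timestamp": "2024-01-15T10:23:45.123+08:00",
  "@version": "1",
  "message": "Received test log request",
  "logger_name": "com.example.elk.controller.LogController",
  "thread_name": "http-nio-8080-exec-1",
  "level": "INFO",
  "level_value": 20000
}
```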
### 3. Application Configuration

```yaml
# src/main/resources/application.yml
spring:
  application:
    name: elk-demo

logging:
  level:
    root: INFO
    com.example: DEBUG
  # Note: these patterns only apply to Spring Boot's default Logback setup;
  # a custom logback-spring.xml defines its own encoders and ignores them.
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
    file: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
```
### 4. Example Controller

```java
// src/main/java/com/example/elk/controller/LogController.java
package com.example.elk.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.*;

import java.util.HashMap;
import java.util.Map;

@RestController
@RequestMapping("/api/logs")
public class LogController {

    private static final Logger logger = LoggerFactory.getLogger(LogController.class);

    @GetMapping("/test")
    public Map<String, Object> testLog() {
        logger.info("Received test log request");
        logger.debug("Debug: a user hit the test endpoint");
        logger.warn("Warning: this is a test warning");
        logger.error("Error: this is a test error");

        Map<String, Object> result = new HashMap<>();
        result.put("message", "log test succeeded");
        result.put("timestamp", System.currentTimeMillis());
        return result;
    }

    @PostMapping("/custom")
    public Map<String, Object> customLog(@RequestBody Map<String, String> request) {
        String message = request.get("message");
        // Default to "info" when no level is supplied, so toLowerCase() cannot NPE
        String level = request.getOrDefault("level", "info");

        switch (level.toLowerCase()) {
            case "debug":
                logger.debug("Custom debug log: {}", message);
                break;
            case "warn":
                logger.warn("Custom warn log: {}", message);
                break;
            case "error":
                logger.error("Custom error log: {}", message);
                break;
            default:
                logger.info("Custom info log: {}", message);
        }

        Map<String, Object> result = new HashMap<>();
        result.put("status", "success");
        result.put("logged_message", message);
        result.put("level", level);
        return result;
    }
}
```
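With the application running on its default port 8080, the two endpoints can be exercised like this:

```shell
# Emit one log line at each level
curl http://localhost:8080/api/logs/test

# Log a custom message at WARN level
curl -X POST http://localhost:8080/api/logs/custom \
  -H "Content-Type: application/json" \
  -d '{"message": "disk usage above 80%", "level": "warn"}'
```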
### 5. Main Application Class

```java
// src/main/java/com/example/elk/ElkDemoApplication.java
package com.example.elk;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ElkDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(ElkDemoApplication.class, args);
    }
}
```
## Collecting Logs with Filebeat

### 1. Install Filebeat

```bash
# Download Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.11.0-linux-x86_64.tar.gz

# Extract
tar -xzf filebeat-8.11.0-linux-x86_64.tar.gz
cd filebeat-8.11.0
```
### 2. Filebeat Configuration

```yaml
# filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/spring-boot/logs/*.log
    fields:
      service: spring-boot
    # Promote "service" to a top-level field, matching the Logstash filter condition
    fields_under_root: true

output.logstash:
  hosts: ["localhost:5044"]

logging.level: info
```
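Java stack traces span multiple lines, and by default each line becomes a separate event. A multiline setting on the input like the following folds continuation lines into the preceding event (a sketch; adjust the pattern to your actual log format):

```yaml
    # Lines that do NOT start with a timestamp are appended to the previous event
    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
    multiline.negate: true
    multiline.match: after
```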
### 3. Start Filebeat

```bash
./filebeat -e -c filebeat.yml
```
## Kibana Visualization

### 1. Create an Index Pattern

- Open Kibana (http://localhost:5601)
- Go to Stack Management > Index Patterns
- Create an index pattern: `spring-boot-logs-*`
- Set the time field: `@timestamp`
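The pattern `spring-boot-logs-*` matches the daily indices that the Logstash output writes. As a sketch of the naming scheme (this mirrors the `%{+YYYY.MM.dd}` sprintf format for typical dates):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexNames {
    // Mirrors the Logstash index setting "spring-boot-logs-%{+YYYY.MM.dd}"
    static final DateTimeFormatter DAILY = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    static String indexFor(LocalDate date) {
        return "spring-boot-logs-" + date.format(DAILY);
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2024, 1, 5))); // spring-boot-logs-2024.01.05
        System.out.println(indexFor(LocalDate.of(2024, 1, 6))); // spring-boot-logs-2024.01.06
    }
}
```

One index per day is what makes the "clean up old logs" practice below cheap: deleting a whole day is a single index deletion rather than a document-by-document purge.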
### 2. Create Dashboards

#### Log Level Distribution

```json
{
  "aggs": [
    {
      "id": "1",
      "enabled": true,
      "type": "count",
      "schema": "metric",
      "params": {}
    },
    {
      "id": "2",
      "enabled": true,
      "type": "terms",
      "schema": "segment",
      "params": {
        "field": "level.keyword",
        "size": 5,
        "order": "desc",
        "orderBy": "1"
      }
    }
  ],
  "type": "pie"
}
```
#### Time Series Chart

```json
{
  "aggs": [
    {
      "id": "1",
      "enabled": true,
      "type": "count",
      "schema": "metric",
      "params": {}
    },
    {
      "id": "2",
      "enabled": true,
      "type": "date_histogram",
      "schema": "segment",
      "params": {
        "field": "@timestamp",
        "timeRange": {
          "from": "now-1h",
          "to": "now"
        },
        "useNormalizedEsInterval": true,
        "scaleMetricValues": false,
        "interval": "auto",
        "drop_partials": false,
        "min_doc_count": 1,
        "extended_bounds": {}
      }
    }
  ],
  "type": "line"
}
```
## Monitoring and Alerting

### 1. Alert Rule

```json
{
  "name": "Error Log Alert",
  "type": "threshold",
  "query": {
    "language": "kuery",
    "query": "level: ERROR"
  },
  "threshold": {
    "field": "count",
    "value": 10,
    "comparator": ">"
  },
  "actions": [
    {
      "type": "email",
      "params": {
        "to": ["admin@example.com"],
        "subject": "Error log alert",
        "body": "A high volume of error logs was detected; please investigate."
      }
    }
  ]
}
```
### 2. Performance Monitoring

```json
{
  "name": "Response Time Alert",
  "type": "threshold",
  "query": {
    "language": "kuery",
    "query": "message: \"response time\" AND response_time > 1000"
  },
  "threshold": {
    "field": "count",
    "value": 5,
    "comparator": ">"
  }
}
```
## Best Practices

### 1. Log Format

- Use a structured (e.g. JSON) log format
- Include the necessary context (request IDs, user IDs, and so on)
- Never log sensitive information (credentials, tokens, personal data)
- Choose log levels deliberately
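To illustrate the "never log sensitive information" point, here is a hypothetical masking helper (the names and rules are placeholders; real projects usually do this centrally, e.g. with a Logback pattern converter or a dedicated masking library):

```java
import java.util.regex.Pattern;

public class LogMasking {
    // Keep the first character of the local part, mask the rest: alice@x.com -> a***@x.com
    static final Pattern EMAIL = Pattern.compile("([A-Za-z0-9._%+-])[A-Za-z0-9._%+-]*@");
    // Mask all but the last four digits of long digit runs (card/account numbers)
    static final Pattern CARD = Pattern.compile("\\b\\d{12,15}(\\d{4})\\b");

    static String mask(String message) {
        String masked = EMAIL.matcher(message).replaceAll("$1***@");
        return CARD.matcher(masked).replaceAll("************$1");
    }

    public static void main(String[] args) {
        System.out.println(mask("payment by alice@example.com with card 4111111111111111"));
        // payment by a***@example.com with card ************1111
    }
}
```

Masking before the message reaches the appender matters here: once a value is indexed into Elasticsearch it is searchable by anyone with Kibana access.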
### 2. Performance

- Size index shards sensibly
- Clean up old indices regularly
- Use index lifecycle management (ILM)
- Monitor cluster resource usage
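Index lifecycle management can automate the cleanup of old indices. An illustrative policy, applied via `PUT _ilm/policy/spring-boot-logs-policy` (the policy name and thresholds are placeholders; tune them to your retention requirements):

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```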
### 3. Security

- Enable Elasticsearch security features (disabled above for simplicity)
- Configure user authentication and authorization
- Use HTTPS for transport
- Keep all components up to date
## Troubleshooting

### Common Problems

#### Elasticsearch fails to start

- Check the memory/heap configuration
- Check for port conflicts
- Inspect the error logs

#### Logstash cannot connect

- Check network connectivity
- Validate the configuration file syntax
- Confirm Elasticsearch is healthy

#### Kibana is unreachable

- Check the port configuration
- Verify the Elasticsearch connection
- Look for errors in the browser console
### Debugging Commands

```bash
# Check Elasticsearch cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"

# List indices
curl -X GET "localhost:9200/_cat/indices?v"

# Validate the Logstash pipeline configuration without starting it
./bin/logstash -f pipeline/logstash.conf --config.test_and_exit
```
## Summary

The ELK Stack gives Spring Boot applications powerful log analysis capabilities. With the configuration above you get:

- Centralized log management
- Real-time log monitoring
- Alerting on error conditions
- Visual data analysis
- Performance diagnostics

Tune the configuration to your actual workload, and maintain and optimize the system regularly.
