Quick Tip: Building an ELK Log Collection System for a Docker Cluster

Once a Docker cluster is up and running, the next problem is how to collect its logs, and ELK provides a complete solution. This article walks through building an ELK stack with Docker and using it to collect logs from a Docker cluster.

An Introduction to ELK

ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.

Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful API, multiple data sources, and automatic search load balancing.

Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.

Kibana is also a free, open-source tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

Building the ELK Platform with Docker

First, let's write the Logstash configuration file, logstash.conf:

input {
  udp {
    port => 5000
    type => "json"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200" # send Logstash output to Elasticsearch; change this to your own host
  }
}
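As a quick sanity check (a minimal sketch, assuming the UDP input is published on host port 5001 as in the docker-compose.yml further below), you can push a JSON test event into Logstash with netcat; the json filter will parse the payload out of the message field:

# Send a one-off JSON event to the Logstash UDP input
echo '{"app":"test","msg":"hello elk"}' | nc -u -w1 localhost 5001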

Next, we also need to adjust how Kibana starts.

Write a startup script that waits for Elasticsearch to come up before launching Kibana:

#!/usr/bin/env bash
# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Stalling for Elasticsearch"
while true; do
    nc -q 1 elasticsearch 9200 2>/dev/null && break
    sleep 1 # avoid busy-waiting while Elasticsearch boots
done
echo "Starting Kibana"
exec kibana

Then write a Dockerfile to build a custom Kibana image:

FROM kibana:latest
RUN apt-get update && apt-get install -y netcat
COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh
RUN kibana plugin --install elastic/sense
CMD ["/tmp/entrypoint.sh"]
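If you want to build the image by hand rather than letting docker-compose do it (a hedged sketch; the kibana/ directory layout here is an assumption matching the build context used in the docker-compose.yml below):

# Build the custom Kibana image; run from the project root,
# with the Dockerfile and entrypoint.sh inside kibana/
docker build -t my-kibana kibana/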

You can also edit Kibana's configuration file to choose the plugins you need:

# Kibana is served by a back end server. This controls which port to use.
port: 5601
# The host to bind the server to.
host: "0.0.0.0"
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass
# If your Elasticsearch requires a client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem
# The default application to load.
default_app_id: "discover"
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true
# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt
# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
log_file: ./kibana.log
# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
  - plugins/dashboard/index
  - plugins/discover/index
  - plugins/doc/index
  - plugins/kibana/index
  - plugins/markdown_vis/index
  - plugins/metric_vis/index
  - plugins/settings/index
  - plugins/table_vis/index
  - plugins/vis_types/index
  - plugins/visualize/index

Now let's write a docker-compose.yml so the whole stack is easy to bring up.

Adjust the ports to suit your needs and change the config-file paths to match your directory layout. The stack as a whole is fairly resource-hungry, so pick a reasonably well-provisioned machine.

elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
# With everything in place, a single command starts the whole ELK stack
docker-compose up -d
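To check that the services came up (a quick sanity check, assuming the default port mappings above):

# List the three services and their state
docker-compose ps
# Elasticsearch should answer with its version banner on port 9200
curl http://localhost:9200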

Then visit port 5601, which we configured for Kibana earlier, to confirm it started successfully.

Collecting Docker Logs with logspout

Next we'll use logspout to collect the Docker logs, customizing the logspout image to fit our needs.

Write the module file, modules.go:

package main

// The blank imports register the logstash adapter and the UDP transport
// with logspout at build time.
import (
    _ "github.com/looplab/logspout-logstash"
    _ "github.com/gliderlabs/logspout/transports/udp"
)

Then write the Dockerfile:

FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
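Rebuild the image so logspout is compiled with the logstash adapter (the tag here is an assumption, chosen to match the image name used in the run command below):

# Rebuild logspout with the logstash adapter baked in
docker build -t jayqqaa12/logspout .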

After rebuilding the image, run it on every node:

docker run -d --name="logspout" --volume=/var/run/docker.sock:/var/run/docker.sock \
jayqqaa12/logspout logstash://<your-logstash-address>

Now open Kibana and you will see the collected Docker logs.

Note that containers must write their logs to the console (stdout/stderr), or logspout will not be able to capture them.
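For example, a throwaway test container (hypothetical, just to generate stdout output for logspout to forward) could look like this; emitting JSON lines lets the json filter in logstash.conf parse them into fields:

# Writes a JSON log line to stdout every 5 seconds; logspout forwards it to Logstash
docker run -d --name test-logger alpine \
  sh -c 'while true; do echo "{\"app\":\"test\",\"msg\":\"hello elk\"}"; sleep 5; done'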


And with that, our ELK log collection system for the Docker cluster is deployed.

For a large deployment you would also want to cluster Logstash and Elasticsearch themselves; we'll cover that another time.

That's all for this article. I hope it helps with your learning.
