
Using Docker to Build an ELK Log Collection System for a Docker Cluster



Once a Docker cluster is up and running, the next problem to solve is log collection, and ELK provides a complete solution. This article walks through using Docker to deploy an ELK stack that collects logs from a Docker cluster.


ELK Overview

ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.


Elasticsearch is an open-source distributed search engine. Its key features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful API, multiple data sources, and automatic search load balancing.

Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.

Kibana is also a free, open-source tool. It provides a friendly web interface for analyzing the logs that Logstash and Elasticsearch handle, helping you aggregate, analyze, and search important log data.


Setting Up the ELK Platform with Docker

First, edit the Logstash configuration file, logstash.conf:

input {
  udp {
    port => 5000
    type => json
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"  # Send Logstash output to Elasticsearch; change this to your own host
  }
}
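To sanity-check this input before wiring everything together, you can fire a one-off JSON event at Logstash over UDP. This is a minimal sketch, assuming you run it on the Docker host and keep the 5001:5000/udp port mapping from the docker-compose.yml shown later:

# Send a raw JSON datagram to Logstash's UDP input (host port 5001
# maps to container port 5000 in the compose file below).
echo '{"app":"smoke-test","msg":"hello"}' | nc -u -w1 localhost 5001

# The json filter parses the datagram into fields; confirm Elasticsearch
# indexed it (logstash-* is Logstash's default index pattern).
curl 'http://localhost:9200/logstash-*/_search?q=smoke-test&pretty'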

Next, we also need to adjust how Kibana starts.

Write a startup script that waits until Elasticsearch is up before launching Kibana:

#!/usr/bin/env bash
# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Stalling for Elasticsearch"
while true; do
    nc -q 1 elasticsearch 9200 2>/dev/null && break
done
echo "Starting Kibana"
exec kibana
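nc only proves the port is open, not that the cluster can actually serve queries yet. If you want a stricter readiness check, a curl-based variant against the _cluster/health endpoint is one option; this is a sketch, assuming curl is installed in the image (add it next to netcat in the Dockerfile below):

#!/usr/bin/env bash
# Alternative: block until Elasticsearch reports at least "yellow" health.
until curl -s 'http://elasticsearch:9200/_cluster/health?wait_for_status=yellow&timeout=5s' >/dev/null; do
    echo "Waiting for Elasticsearch cluster health..."
    sleep 2
done
echo "Starting Kibana"
exec kibana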

Modify the Dockerfile to produce a custom Kibana image:

FROM kibana:latest
RUN apt-get update && apt-get install -y netcat
COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh
RUN kibana plugin --install elastic/sense
CMD ["/tmp/entrypoint.sh"]
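You can build and smoke-test this image by hand before wiring it into Compose; the my-kibana tag below is just an illustration, not something the compose file depends on:

# Build the custom Kibana image from the kibana/ directory.
docker build -t my-kibana ./kibana

# Standalone test run; the wait loop in entrypoint.sh only succeeds if a
# container named "elasticsearch" is reachable, hence the --link.
docker run --rm -p 5601:5601 --link elasticsearch:elasticsearch my-kibana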

You can also edit Kibana's configuration file to select the plugins you need:

# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, this is the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index

Now let's write a docker-compose.yml to make building and running everything easier.

Ports and similar settings can be changed to suit your needs, and the configuration file paths should be adjusted to match your directory layout. The stack as a whole is fairly demanding on resources, so pick a reasonably well-provisioned machine.

elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"

logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5001:5000/udp"
  links:
    - elasticsearch

kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
With that in place, a single command brings the whole ELK stack up:

docker-compose up -d
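Before moving on, it is worth checking that all three services actually came up; a quick sketch, assuming the default port mappings above and a local Docker host:

# All three services should show State "Up".
docker-compose ps

# Elasticsearch should answer with its JSON banner.
curl http://localhost:9200

# If the Kibana UI does not load, inspect its startup log.
docker-compose logs kibana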

Visit port 5601, which we configured for Kibana earlier, to see whether everything started successfully.

Collecting Docker Logs with logspout

Next, we'll use logspout to collect the Docker logs, customizing the logspout image to suit our needs.

Write the module file, modules.go:

package main

// Blank imports register the logstash adapter and the UDP transport
// with logspout at build time.
import (
    _ "github.com/looplab/logspout-logstash"
    _ "github.com/gliderlabs/logspout/transports/udp"
)

Then write the Dockerfile:

FROM gliderlabs/logspout:latest
COPY ./modules.go /src/modules.go
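Rebuild the image so the logstash adapter is compiled in, and push it somewhere the other nodes can pull from. The jayqqaa12/logspout name matches the run command below; substitute your own registry path:

# Build the custom logspout image (run in the directory holding the
# Dockerfile and modules.go).
docker build -t jayqqaa12/logspout .

# Optional: push to a registry so every node in the cluster can pull it.
docker push jayqqaa12/logspout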

Once the image is rebuilt, run it on every node:

docker run -d --name="logspout" \
    --volume=/var/run/docker.sock:/var/run/docker.sock \
    jayqqaa12/logspout \
    logstash://your-logstash-address

Now open Kibana and you'll see the collected Docker logs.

Note that your Docker containers must write their logs to the console (stdout/stderr); otherwise logspout cannot pick them up.
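To verify the pipeline end to end, you can start a throwaway container that writes to stdout and watch it show up in Kibana; a minimal sketch (the container name and message are arbitrary):

# A container that prints one line to stdout every five seconds.
docker run -d --name log-demo busybox \
    sh -c 'while true; do echo "hello from log-demo"; sleep 5; done'

# logspout tails it through the Docker socket and forwards it to Logstash;
# search for "log-demo" in Kibana's Discover view to confirm arrival.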

And with that, the ELK log collection system for our Docker cluster is fully deployed.

For a large cluster you would also need to add dedicated Logstash and Elasticsearch clusters, but that's a topic for next time.
