Centralized Container Logging with Horizontal Scale-Out


Architecture

Frontend display --> Index & search <-- Log extraction & filtering --> Log cache <-- Log collection
Kibana           --> Elasticsearch  <-- Logstash                    --> Redis     <-- Filebeat

OS: CentOS 7.4
Related software: filebeat-6.3.0-linux-x86_64.tar.gz, docker 18.03.1-ce,
redis 4.0.10, docker-compose 1.18.0

Log file names and contents:

/iba/ibaboss/java/bossmobile-tomcat-8.0.26/logs/catalina.out
# Excerpt:
22-Jun-2018 17:45:22.397 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.0.26
22-Jun-2018 17:45:22.399 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Aug 18 2015 11:38:37 UTC
22-Jun-2018 17:45:22.399 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.0.26.0

/iba/ibaboss/java/bossmobile-tomcat-8.0.26/logs/ibalife.log
# Excerpt:
[ERROR] [2018-06-30 17:41:56][com.iba.boss.pubsub.listener.core.ListenerTemplate]ErpCustomerRegEventListener onListen Done
[ERROR] [2018-06-30 17:41:56][com.iba.boss.pubsub.listener.user.BmcLevelDescEventListener]bmcLevelDescEventListener -> Waiting for set levelDesc
[ERROR] [2018-06-30 17:41:56][com.iba.boss.pubsub.listener.core.ListenerTemplate]BmcLevelDescEventListener onListen Done

Centralizing Docker Container Logs with ELK

Environment: CentOS 7 or later

<small>Keywords: filebeat logstash rancher scale-out</small>

Install Docker

For details, see:
https://www.cnblogs.com/klvchen/p/8468855.html
https://www.cnblogs.com/klvchen/p/9098745.html

ELK on the ovr overlay network

Prerequisites: the Docker service installed and running, plus the docker-compose orchestration tool.

PS: It has been a while since the last article. This one is about filebeat; after several rounds of improvement, this filebeat solution has been settled.

Install docker-compose

For details, see https://www.cnblogs.com/klvchen/p/9242774.html
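
A quick sanity check that both prerequisites match the versions listed at the top of this article:

# Verify the installed versions
docker --version           # expect 18.03.1-ce
docker-compose --version   # expect 1.18.0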


1. Install ELK

Goals of this article


Install Redis (using Docker here)

docker pull redis 

mkdir /home/ibaboss/compose/config -p 
cd  /home/ibaboss/compose/config

# Redis configuration; the password is ibalife
vi redis.conf 

#daemonize yes
pidfile /data/redis.pid
port 6379
tcp-backlog 30000
timeout 0
tcp-keepalive 10
loglevel notice
logfile /data/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass ibalife
maxclients 30000
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events KEA
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 1000
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

# Write the docker-compose YAML file for Redis
cd /home/ibaboss/compose

vi docker-compose-redis.yml 
version: '3'
services:
  elk_redis:
    image: redis:latest
    container_name: elk_redis
    ports:
      - "192.168.0.223:6379:6379"     # 为提升安全,redis只对内网开放
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
    networks:
      - logs_elk  # use the dedicated logs_elk network
    entrypoint:
      - redis-server
      - /usr/local/etc/redis/redis.conf

networks:
  logs_elk:
    external:    # use a pre-existing external network
      name: logs_elk

# Create the dedicated network for the ELK stack
docker network create  --attachable logs_elk

# Start Redis
docker-compose -f docker-compose-redis.yml up -d 

# Check container status
docker ps -a

# View the startup log via the CONTAINER ID obtained in the previous step
docker logs -f 4841efd2e1ef
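
As a further sanity check (using the container name elk_redis from the compose file), you can ping Redis with the configured password:

# Should print PONG if Redis is up and the password matches
docker exec -it elk_redis redis-cli -a ibalife ping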

docker-compose.yaml


Background

The previous article, "One-click startup of filebeat 5.1.1 integrated with
logstash", mainly covered direct installation, generating the filebeat config
file, and then starting the filebeat service with a single docker-compose.yml
command. Before we begin, let's review the earlier pain points; you may have run into them yourself.

Install filebeat

mkdir /home/tools -p

cd /home/tools

# Upload the install package to /home/tools
tar zxvf filebeat-6.3.0-linux-x86_64.tar.gz -C /usr/local
cd /usr/local
ln -s /usr/local/filebeat-6.3.0-linux-x86_64 /usr/local/filebeat

The docker-compose.yaml for the ovr-network ELK stack (per the heading above):

version: '2'
networks:
  network-test:
    external:
      name: ovr0
services:
  elasticsearch:
    image: elasticsearch
    networks:
      - network-test
    hostname: elasticsearch
    container_name: elasticsearch
    restart: always
    volumes:
      - /opt/elasticsearch/data:/usr/share/elasticsearch/data

  kibana:
    image: kibana
    networks:
      - network-test
    hostname: kibana
    container_name: kibana
    restart: always
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200/
    ports:
      - 5601:5601

  logstash:
    image: logstash
    networks:
      - network-test
    hostname: logstash
    container_name: logstash
    restart: always
    volumes:
      - /opt/logstash/conf:/opt/logstash/conf
    command: logstash -f /opt/logstash/conf/

  filebeat:
    image: prima/filebeat
    networks:
      - network-test
    hostname: filebeat
    container_name: filebeat
    restart: always
    volumes:
      - /opt/filebeat/conf/filebeat.yml:/filebeat.yml
      - /opt/upload:/data/logs
      - /opt/filebeat/registry:/etc/registry

cd ELK

Pain points

  1. Using .env for environment variables depends too heavily on local folders and is hard to scale out
  2. docker-compose.yml depends on the .env file, which makes fast deployment with rancher awkward
  3. filebeat.yml is mounted into the container from the local host, which is hard to manage centrally and too rigid for several projects/applications to share one filebeat

Write the filebeat configuration file

cd /usr/local/filebeat

cat filebeat4bossmobile.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /iba/ibaboss/java/bossmobile-tomcat-8.0.26/logs/catalina.out
  multiline.pattern: '^[[:word:]]|^java' # regex marking the start of a new multiline event
  multiline.negate: true # whether lines matching the pattern are negated, i.e. whether non-matching lines are merged into the previous event; see the multiline link in the references at the end
  multiline.match: after # where to attach the combined lines: after or before the matching line
  fields:      # add a custom field `service` with value bossmobile_catalina to tell the two log types apart
    service: bossmobile_catalina

- type: log
  enabled: true
  paths:
    - /iba/ibaboss/java/bossmobile-tomcat-8.0.26/logs/ibalife.*
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  fields:      # add a custom field `service` with value bossmobile_ibalife to tell the two log types apart
    service: bossmobile_ibalife

output.redis:
  hosts: ["192.168.0.223"]               # 这里是 redis 的内网地址
  password: "ibalife"
  key: "bossmobile"                      # 存入到 redis 中的 bossmobile key 中
  db: 0
  timeout: 5
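
To illustrate the multiline settings, consider a hypothetical stack trace appended to ibalife.log (the trace lines below are invented for illustration):

# [ERROR] [2018-06-30 17:41:56][com.iba.boss.pubsub.listener.core.ListenerTemplate]Something failed
# java.lang.NullPointerException
#     at com.iba.boss.SomeClass.someMethod(SomeClass.java:42)
#
# With pattern '^\[', negate true and match after, the two trailing lines do
# not start with '[' and are therefore appended to the first line, so all
# three ship to Redis as a single event.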

Notes on filebeat:
filebeat.yml is mounted as filebeat's configuration file
logs is the directory where container logs are mounted
registry records how far each log has been read, so that if the filebeat container dies it does not have to re-read all logs
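
For reference, with the tarball install used in this section, filebeat 6.x records those offsets in a JSON registry file; a sketch of one entry (the offset, inode, and timestamp values are illustrative):

# cat /usr/local/filebeat/data/registry
[{"source":"/iba/ibaboss/java/bossmobile-tomcat-8.0.26/logs/catalina.out","offset":73824,"timestamp":"2018-06-30T17:42:01.000+08:00","ttl":-1,"type":"log","FileStateOS":{"inode":1054321,"device":2049}}]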

Install elasticsearch

Improvements in filebeat-scale-out

  1. Drop .env and inject variables directly into docker-compose.yml, so it can be started standalone with docker-compose or deployed via rancher
  2. Use configuredata (a data volume) to host filebeat.yml, freeing it from host-local folders and laying the groundwork for horizontal scale-out
  3. Support distinct filebeat.yml files for multiple projects and applications

Start filebeat

# Create the folder for filebeat's own logs
mkdir /iba/ibaboss/filebeat_logs

nohup ./filebeat -e -c filebeat4bossmobile.yml >/iba/ibaboss/filebeat_logs/filebeat4bossmobile.log 2>&1 & 

# To re-read all logs from the beginning, stop filebeat, delete the registry, then restart
ps -ef|grep filebeat

kill -9 PID

rm /usr/local/filebeat/data/registry
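
Instead of nohup, a systemd unit keeps filebeat alive across reboots; a minimal sketch (the unit name is hypothetical, the paths follow this install):

# /etc/systemd/system/filebeat-bossmobile.service
[Unit]
Description=Filebeat shipping bossmobile logs
After=network.target

[Service]
WorkingDirectory=/usr/local/filebeat
ExecStart=/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat4bossmobile.yml
Restart=always

[Install]
WantedBy=multi-user.target

# Enable it with: systemctl daemon-reload && systemctl enable --now filebeat-bossmobile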

The logstash configuration file is shown below:

    docker-compose -f docker-compose-elasticsearch.yml up -d

Getting started

Install and configure ELK

cd /home/ibaboss/compose

cat docker-compose-elk.yml 
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: logs_elasticsearch           # name the container
    restart: always
    environment:
      - discovery.type=single-node
      - cluster.name=docker-cluster
      - network.host=0.0.0.0
      - discovery.zen.minimum_master_nodes=1
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    networks:
      logs_elk:     # use the specified network
        aliases:
          - elasticsearch     # alias; other containers on the logs_elk network can reach this one as "elasticsearch"

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: logs_kibana
    ports:
      - "5601:5601"
    restart: always
    networks:
      logs_elk:
        aliases:
          - kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - SERVER_NAME=kibana
    depends_on:
      - elasticsearch

  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    container_name: logs_logstash
    restart: always
    environment:
      - LS_JAVA_OPTS=-Xmx256m -Xms256m
    volumes:
      - ./config/logstash.conf:/etc/logstash.conf
    networks:
      logs_elk:
        aliases:
          - logstash
    depends_on:
      - elasticsearch
    entrypoint:
      - logstash
      - -f
      - /etc/logstash.conf

networks:
  logs_elk:
    external:
      name: logs_elk

cd /home/ibaboss/compose/config

cat logstash.conf

input {
        redis {
                port => "6379"                                    
                host => "elk_redis"             # redis 主机是 logs_elk 网络中的 elk_redis 主机
                data_type => "list"
                key  =>  "bossmobile"           # read data from the bossmobile key in Redis
                password => "ibalife"
        }

}

filter {        
     mutate { # fields to drop
     remove_field => ["_id","@version","_index","_score","_type","beat.hostname","beat.name","beat.version","fields.service","input.type","offset","prospector.type","source"]
    }

  if [fields][service] == "bossmobile_catalina" {
    grok {   # extract the timestamp from the message field into a custom customer_time field
        match => [ "message" , "(?<customer_time>%{MONTHDAY}-%{MONTH}-%{YEAR} %{HOUR}:%{MINUTE}:%{SECOND})" ]
    }
  }

  if [fields][service] == "bossmobile_ibalife" {
    grok {
        match => [ "message" , "(?<customer_time>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND})" ]
    }
  }

    date {
       match => [ "customer_time", "dd-MMM-yyyy HH:mm:ss.SSS","yyyy-MM-dd HH:mm:ss" ]     # 格式化 customer_time 中的时间类型从string 变成 date,例如 22-Jun-2018 17:45:22.397,对应为 dd-MMM-yyyy HH:mm:ss.SSS
        locale => "en"
        target => [ "@timestamp" ]  # overwrite @timestamp, which Kibana uses for sorting
        timezone => "Asia/Shanghai"
    }

}

output {  # create separate elasticsearch indices based on the service field set by filebeat
  if [fields][service] == "bossmobile_catalina" {         
        elasticsearch {
                hosts => ["elasticsearch:9200"]
                index   => "bossmobile_catalina-%{+YYYY.MM.dd}"
        }
  }

  if [fields][service] == "bossmobile_ibalife" {
        elasticsearch {
                hosts => ["elasticsearch:9200"]
                index   => "bossmobile_ibalife-%{+YYYY.MM.dd}"
        }
  }

}

# Start the containers
cd /home/ibaboss/compose
docker-compose -f docker-compose-elk.yml  up -d

docker ps -a
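
Once the containers are up, you can confirm that Logstash is creating the indices (this assumes curl is available inside the elasticsearch image, as used later in this article):

# List indices; bossmobile_catalina-* and bossmobile_ibalife-* should appear
# once filebeat has shipped some log lines
docker exec -it logs_elasticsearch curl -s 'http://localhost:9200/_cat/indices?v'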

Option 1: use patterns.

Install logstash

Environment requirements

OS : Centos 7.x
Docker engine > 1.12.x
Docker-compose > 1.11.x
rancher : > v1.1.2

Visit ip:5601 where Kibana runs and create the Index Patterns bossmobile_catalina-* and bossmobile_ibalife-*.


logstash.conf (two input methods configured: beats and syslog):

    docker-compose -f docker-compose-logstash.yml up -d

Clone the git folder

git clone git@github.com:easonlau02/filebeat-scale-out.git

[user@lab filebeat-scale-out]$ LC_ALL=C tree .
.
|-- Dockerfile                                       # filebeat image Dockerfile
|-- Dockerfile.data-volumes                          # configuredata (filebeat-data-volume) image
|-- LICENSE
|-- README.md
|-- build_filebeat_data_volume.sh                    # script to build the configuredata (filebeat-data-volume) image
|-- build_filebeat_image.sh                          # script to build the filebeat image
|-- config                                           # folder managing filebeat.yml configs
|   `-- default
|       |-- filebeat.yml
|       `-- filebeat.yml.sample
|-- docker-compose.yml                               # docker-compose.yml file to start the filebeat service, v2
|-- docker-compose.yml.v1                            # docker-compose.yml file to start the filebeat service, v1
|-- docker-entrypoint.sh                             # filebeat startup entrypoint
`-- migrate_registry_from_forwarder_to_filebeat.sh   # script to migrate a .logstash-forwarder registry to filebeat, easing upgrades

2 directories, 12 files

References

# grok online debugger
http://grok.qiexun.net/

# official multiline documentation
https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html

# Kibana user guide (Chinese)
https://www.elastic.co/guide/cn/kibana/current/index.html

https://elasticsearch.cn/question/2651
http://www.importnew.com/27705.html
https://www.elastic.co/guide/en/logstash/current/index.html
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html
input {
    beats {
        port => 5044
        type => beats
    }

    tcp {
        port => 5000
        type => syslog
    }

}

filter {
  if[type] == "tomcat-log" {
    multiline {
      patterns_dir => "/opt/logstash/conf/patterns"
      pattern => "(^%{TOMCAT_DATESTAMP})|(^%{CATALINA_DATESTAMP})"
      negate => true
      what => "previous"
    }

    if "ERROR" in [message] {    #如果消息里有ERROR字符则将type改为自定义的标记
        mutate { replace => { type => "tomcat_catalina_error" } }
    }

    else if "WARN" in [message] {
        mutate { replace => { type => "tomcat_catalina_warn" } }
    }

    else if "DEBUG" in [message] {
        mutate { replace => { type => "tomcat_catalina_debug" } }
    }

    else {
        mutate { replace => { type => "tomcat_catalina_info" } }
    }

    grok{
      patterns_dir => "/opt/logstash/conf/patterns"
      match => [ "message", "%{TOMCATLOG}", "message", "%{CATALINALOG}" ]
      remove_field => ["message"]    #这表示匹配成功后是否删除原始信息,这个看个人情况,如果为了节省空间可以考虑删除
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS Z", "MMM dd, yyyy HH:mm:ss a" ]
    }
  }    

  if[type] == "nginx-log" {

    if '"status":"404"' in [message] {
        mutate { replace => { type => "nginx_error_404" } }
    }

    else if '"status":"500"' in [message] {
        mutate { replace => { type => "nginx_error_500" } }
    }

    else if '"status":"502"' in [message] {
        mutate { replace => { type => "nginx_error_502" } }
    }

    else if '"status":"403"' in [message] {
        mutate { replace => { type => "nginx_error_403" } }
    }

    else if '"status":"504"' in [message] {
        mutate { replace => { type => "nginx_error_504" } }
    }

    else if '"status":"200"' in [message] {
        mutate { replace => { type => "nginx_200" } }
    }
    grok {
      remove_field => ["message"]    #这表示匹配成功后是否删除原始信息,这个看个人情况,如果为了节省空间可以考虑删除
    }
  }
}

output {
    elasticsearch { 
        hosts => ["elasticsearch:9200"]
    }

    #stdout { codec => rubydebug }    # print events to stdout for debugging
}

Install kibana

Prepare the configuredata

  1. It is recommended to store logs following a pattern like /home/user/logs/*/*.log: the first asterisk stands for the application, the second matches every log file ending in .log. This example configures filebeat.yml that way; if your pattern differs, edit the default filebeat.yml in the git folder, ~/filebeat-scale-out/config/default/filebeat.yml.
     The relevant prospectors are:

     - input_type: log
       paths:
         - /var/log/nginx/*.log
       fields_under_root: true
       document_type: nginx-logs
       ignore_older: 24h

     - input_type: log
       paths:
         - ${applog_folder}/*/*.log
       fields_under_root: true
       document_type: app-logs
       ignore_older: 24h

* `${applog_folder}`: this parameter is explained later in the docker-compose file section
* `/var/log/nginx/*.log`: the default nginx log path; unlike application logs, it needs no parameterized path configuration
* If different projects or applications need different filebeat configs, create one or more named folders under `~/filebeat-scale-out/config/`, each holding a filebeat.yml like default's, rewritten for that project. Remember the folder name; it is needed later (see the sketch below).
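
For example (a sketch; `myapp` is a hypothetical project name), a per-project config can be prepared like this before building the data volume:

cd ~/filebeat-scale-out
mkdir -p config/myapp                                     # hypothetical project folder
cp config/default/filebeat.yml config/myapp/filebeat.yml
# edit config/myapp/filebeat.yml for the project, then set config=myapp
# in docker-compose.yml so this copy is the one loaded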

2. Next, use the script to build your own configuredata (filebeat-data-volume):

 ```
[user@lab ~]$ cd ~/filebeat-scale-out/
[user@lab filebeat-scale-out]$ ./build_filebeat_data_volume.sh
RelatePath : .
Date: Sat Jul 15 12:49:53 EDT 2017
Starting to build data config volume for filebeat....
Docker file : ./Dockerfile.data-volumes
Docker build context directory :.
=================================================
Found docker file : ./Dockerfile.data-volumes
Sending build context to Docker daemon   127 kB
Step 1 : FROM eason02/busybox:latest
---> c75bebcdd211
Step 2 : MAINTAINER Eason Lau <eason.lau02@hotmail.com>
---> Using cache
---> 32d466ef8024
Step 3 : RUN mkdir -p /etc/filebeat
---> Using cache
---> aeae50577003
Step 4 : COPY ./config/ /etc/filebeat/
---> Using cache
---> 0d752f07a240
Step 5 : RUN ls -R /etc/filebeat/
---> Using cache
---> 25ea99aaabd4
Successfully built 25ea99aaabd4
=================================================
Date: Sat Jul 15 12:49:53 EDT 2017
[user@lab filebeat-scale-out]$ 

 ```

#### Configure docker-compose.yml and start the filebeat service
1. Here we use [docker-compose.yml.v1](https://github.com/easonlau02/filebeat-scale-out/blob/master/docker-compose.yml.v1) as the example; if your docker-compose or rancher supports v2, use [docker-compose.yml](https://github.com/easonlau02/filebeat-scale-out/blob/master/docker-compose.yml) directly.

Grok pattern files are stored under /opt/logstash/conf/patterns.

    docker-compose -f docker-compose-kibana.yml up -d

~/filebeat-scale-out/docker-compose.yml.v1

configuredata:
  labels:
    io.rancher.container.pull_image: always
  image: eason02/filebeat-data-volume:latest
  volumes:
    - /etc/filebeat
  command:
    - tail
    - -f
    - /etc/filebeat/default/filebeat.yml

filebeat:
  image: eason02/filebeat:5.3.1
  container_name: filebeat-5.3.1
  restart: always
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.sidekicks: configuredata
    io.rancher.container.pull_image: always
  environment:
    - env=QA
    - logstash=localhost
    - config=default
    - applog_folder=/home/user/logs
  net: host
  volumes_from:
    - configuredata
  volumes:
    - /var/lib/filebeat/:/etc/filebeat/data
    - /var/log/nginx/:/var/log/nginx/
    - /home/user/logs:/home/user/logs
  log_opt:
    max-file: '5'
    max-size: 20m

    docker-compose.yml configuration notes:

    * configuredata:

          volumes:
            - /etc/filebeat          # provides the filebeat config to the filebeat container

          labels:
            io.rancher.container.pull_image: always    # pull the latest image on upgrade or first start (rancher-specific)

    * filebeat:

          environment:
            - env=QA                   # environment tag, handy for telling deployments apart
            - logstash=localhost       # logstash host, no port needed, e.g. host-logstash
            - config=default           # which filebeat.yml to load; the custom folder name
                                       # created above must be given here so the matching
                                       # filebeat.yml is picked up -- this is how one filebeat
                                       # design serves multiple projects and applications
            - applog_folder=/home/user/logs      # the default filebeat.yml needs the log path;
                                                 # by the convention agreed above, set it to /home/user/logs
                                                 # so the logs are mapped into the container and filebeat
                                                 # can scan them per filebeat.yml

          volumes:          # mount folders according to your needs
            - /var/lib/filebeat/:/etc/filebeat/data     # file of read offsets; must be mounted out, or a
                                                        # filebeat restart loses it and resends old logs
            - /home/user/logs:/home/user/logs           # mount the log folder into the filebeat container; must match applog_folder
            - /var/log/nginx/:/var/log/nginx/           # nginx log folder

          labels:
            io.rancher.scheduler.global: 'true'               # deploy automatically when a new host joins rancher
            io.rancher.sidekicks: configuredata               # make configuredata a sidekick service of the filebeat container
            io.rancher.container.pull_image: always           # keep the image fresh on upgrade or first start

          volumes_from:
            - configuredata        # mount the service hosting filebeat.yml into the filebeat container,
                                   # attaching configuredata's exported volume directly
    
2. Deploy automatically to all hosts with rancher
   - If you only need the default filebeat.yml, simply adjust the settings above to your needs and drop the modified docker-compose file into rancher to start it.
   - If you need your own customization, or the multi-project mode, you may need to tag the images below as your own docker images and push them to your own repository:

eason02/filebeat:5.3.1
eason02/filebeat-data-volume:latest

Then change the images in docker-compose to your own and start filebeat in rancher again.
3. Without rancher: start directly on the VM; adjust the relevant settings and bring it up with docker-compose.

4. Here is the filebeat stack I deployed with rancher:

  ![filebeat-scale-out.png](http://upload-images.jianshu.io/upload_images/5342565-3d0cd1d0c0318117.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
Honestly, without rancher, having to run `docker-compose up -d` on every single machine is something I refuse to do.
Alright, it's two o'clock, time for bed. Leave any questions in the comments below and I'll help answer them; paid customization welcome.

<small>\** Every step in this article has been tested in practice and works; if you have problems, please comment below.</small>
<h6 align = "center">——END——</h6>
<small>Author: `Eason`, focused on all kinds of tech, platforms, and integration; never satisfied with the status quo, loves to tinker<br>For article or technical collaboration, scan boldly; if you're shy, send an email</small>
<small>Email : <eason.lau02@hotmail.com></small>
<small>GitHub : https://github.com/easonlau02</small>
![](http://upload-images.jianshu.io/upload_images/5342565-0bd5e7b085071e37.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

grok-patterns:

At this point the ELK stack is fully set up.

USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b

POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}

# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT (?:%{IPORHOST=~/\./}:%{POSINT})

# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?

# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])

# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)

# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}

# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}

# Shortcuts
QS %{QUOTEDSTRING}

# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

# Log Levels
LOGLEVEL ([A-a]lert|ALERT|[T|t]race|TRACE|[D|d]ebug|DEBUG|[N|n]otice|NOTICE|[I|i]nfo|INFO|[W|w]arn?(?:ing)?|WARN?(?:ING)?|[E|e]rr?(?:or)?|ERR?(?:OR)?|[C|c]rit?(?:ical)?|CRIT?(?:ICAL)?|[F|f]atal|FATAL|[S|s]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)

# Java Logs
JAVATHREAD (?:[A-Z]{2}-Processor[\d]+)
JAVACLASS (?:[a-zA-Z0-9-]+\.)+[A-Za-z0-9$]+
JAVAFILE (?:[A-Za-z0-9_.-]+)
JAVASTACKTRACEPART at %{JAVACLASS:class}\.%{WORD:method}\(%{JAVAFILE:file}:%{NUMBER:line}\)
JAVALOGMESSAGE (.*)
# MMM dd, yyyy HH:mm:ss eg: Jan 9, 2014 7:13:13 AM
CATALINA_DATESTAMP %{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM)
# yyyy-MM-dd HH:mm:ss,SSS ZZZ eg: 2014-01-09 17:32:25,527 -0800
TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) %{ISO8601_TIMEZONE}
CATALINALOG %{CATALINA_DATESTAMP:timestamp} %{JAVACLASS:class} %{JAVALOGMESSAGE:logmessage}
# 2014-01-09 20:03:28,269 -0800 | ERROR | com.example.service.ExampleService - something compeletely unexpected happened...
TOMCATLOG %{TOMCAT_DATESTAMP:timestamp} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}

2. Verification

Next we configure container log output:

    curl

In Docker, the standard logging pattern is stdout; to ship a container's
standard output, just specify the syslog driver.

    curl

For Docker logs written to stdout, collecting them with logstash is enough.

    If the requests above return version and similar info normally, the setup succeeded.

Configure the following in docker-compose:

3. Collect logs and ship them to logstash

      logging:
        driver: syslog
        options:
          syslog-address: 'tcp://logstash:5000'
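
In context, the logging section hangs off a service definition; a minimal sketch (the service name myapp and its image are hypothetical):

  myapp:
    image: tomcat:8.0
    logging:
      driver: syslog
      options:
        syslog-address: 'tcp://logstash:5000'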

    cd Filebeat

But generally our logs are text files, so we can use filebeat directly.

    tar -xvf filebeat-5.2.2-linux-x86_64.tar.gz

For filebeat we use the prima/filebeat image from Docker Hub.

   
Edit filebeat.yml; only the paths need changing. For example, to collect
nginx access logs, set the path to /var/log/nginx/.

For this image we need to provide a filebeat.yml file; the image's documentation offers two approaches:

    Run Filebeat

The first is mounting it with -v: -v /path/filebeat.yml:/filebeat.yml

    ./filebeat-5.2.2-linux-x86_64/filebeat -e -c filebeat.yml

The second is baking it in at Dockerfile build time:

The steps above complete the ELK setup and begin collecting log data; the figure below shows Kibana after the setup.

FROM prima/filebeat

COPY filebeat.yml /filebeat.yml
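
Then build and tag the image (the tag my/filebeat is an assumption):

# Build a custom filebeat image containing the config
docker build -t my/filebeat .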

(screenshot: Kibana after setup)

Write a filebeat.yml file.

    After configuration and building the graphical models, you can achieve the following:

filebeat.yml supports a prospector with a single path, multiple prospectors, or several paths per prospector.

(screenshot: Kibana dashboard)

paths supports multi-level globs, e.g.: /var/log/messages*, /var/log/*,
/opt/nginx/*/*.log

    

Example:

filebeat:
  prospectors:
    -
      paths:
          - "/data/logs/catalina.*.out"
      input_type: log
      document_type: tomcat-log
    -  
      paths:
          - "/data/logs/nginx*/logs/*.log"
      input_type: log
      document_type: nginx-log

  registry_file: /etc/registry/mark

output:
  logstash:
    hosts: ["logstash:5044"]

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

filebeat must run as a container on every machine whose logs are to be collected.

Run docker-compose up -d and check the running containers.

Load the filebeat template

Enter the elasticsearch container (docker exec -it elasticsearch bash):

curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

curl -XPUT 'http://elasticsearch:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
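
To confirm the template was registered, query the standard template API (still from inside the container):

curl 'http://elasticsearch:9200/_template/filebeat?pretty'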


filebeat-index-template.json

{
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": {
          "enabled": false
        }
      },
      "dynamic_templates": [
        {
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "{dynamic_type}"
            },
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "message": {
          "type": "string",
          "index": "analyzed"
        },
        "offset": {
          "type": "long",
          "doc_values": "true"
        },
        "geoip": {
          "type": "object",
          "dynamic": true,
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "filebeat-*"
}

Visit Kibana: it is already up, but there is no data yet.

Start an nginx container

docker-compose

  nginx:
    image: alpine-nginx
    networks:
      network-test:
    hostname: nginx
    container_name: nginx
    restart: always
    ports:
      - 80:80
    volumes:
      - /opt/upload/nginx/conf/vhost:/etc/nginx/vhost
      - /opt/upload/nginx/logs:/opt/nginx/logs

The local directory /opt/upload/nginx must be mounted into the filebeat
container so that filebeat can collect from it.
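
Given the volumes defined earlier (/opt/upload on the host is mounted at /data/logs inside the filebeat container), a quick check that the nginx logs are visible to filebeat:

# /opt/upload/nginx/logs on the host maps to /data/logs/nginx/logs in the container
docker exec -it filebeat ls /data/logs/nginx/logs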

(screenshot: data appearing in Kibana)

能够见见 kibana 已经有多少出来了

More Docker-related tutorials:

Installing and using Docker (CentOS 6.5_x64)
http://www.linuxidc.com/Linux/2014-07/104595.htm

Installing Docker on Ubuntu 14.04
http://www.linuxidc.com/linux/2014-08/105656.htm

Running a Docker-based desktop system on Ubuntu via VNC
http://www.linuxidc.com/Linux/2015-08/121170.htm

Installing Docker on an Alibaba Cloud CentOS 6.5 template
http://www.linuxidc.com/Linux/2014-11/109107.htm

Installing Docker on Ubuntu 15.04
http://www.linuxidc.com/Linux/2015-07/120444.htm

Installing Docker on Ubuntu Trusty 14.04 (LTS) (64-bit)
http://www.linuxidc.com/Linux/2014-10/108184.htm

How to install Docker on Ubuntu 15.04, with basic usage
http://www.linuxidc.com/Linux/2015-09/122885.htm

Notes on using Docker on Ubuntu 16.04
http://www.linuxidc.com/Linux/2016-12/138490.htm


Permanent link to this article: http://www.linuxidc.com/Linux/2017-02/140530.htm

