ELK
- ELK basics
- ELK components
- ELK processing flow
- Elasticsearch core concepts
- Deploying the ELK log analysis system
- Configuring the ES environment
- Deploying the Apache server and installing Logstash
- Deploying Kibana
ELK basics
Overview: ELK is an open-source real-time log analysis stack. By analyzing logs it helps uncover problems and troubleshoot system faults. It consists of Elasticsearch (ES), Logstash, and Kibana.
ELK components
- ES: indexes and stores the data that Logstash has formatted
- Logstash: collects, filters, and formats logs and ships them to ES (under high concurrency, the lighter-weight Filebeat can be used for log collection instead)
- Kibana: presents the results of the log analysis
ELK processing flow
- Logstash collects the logs, formats them, and ships them to ES for storage
- ES indexes and stores the formatted data
- Kibana visualizes the data for users to examine
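These three stages map directly onto the input, filter, and output blocks of a Logstash pipeline. The following is only an illustrative sketch, not part of the deployment below: the log path and grok pattern are assumptions, while the ES address matches the lab environment used later.
input {
    file {
        path => "/var/log/httpd/access_log"    #collect: read an Apache access log (assumed path)
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }    #format: parse each raw line into structured fields
    }
}
output {
    elasticsearch {
        hosts => ["192.168.118.11:9200"]    #ship: index the structured events into ES
        index => "apache-%{+YYYY.MM.dd}"
    }
}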
Elasticsearch core concepts
- Near real-time: data is indexed and becomes searchable with only a slight delay (typically about 1 second)
- Cluster: ES has a clustering mechanism; nodes join a cluster by its cluster name, and each node in the cluster carries its own unique identifier
- Index: roughly the counterpart of a database in a relational system; it is a collection of documents that share somewhat similar characteristics, and index names must be entirely lowercase
- Document: a document is the basic unit of information that can be indexed
- Type: one or more types can be defined within an index; a type is a logical category or partition of the index, similar to a table in a relational database
- Shards and replicas: sharding prevents an index from growing too large to store on a single node and slowing down searches; it improves performance and throughput and allows storage to scale out horizontally. Replicas guard against data loss when a single shard or node fails, adding redundancy and high availability. By default, ES gives each index 5 primary shards with 1 replica per shard
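To make the shard and replica settings concrete, an index can be created with explicit values through the REST API. This is only a sketch (the index name shard-demo is a placeholder; run it against any node once the cluster below is up), and it simply reproduces the 5-primary/1-replica default:
curl -XPUT 'localhost:9200/shard-demo?pretty' -H 'Content-Type: application/json' -d '
{
    "settings": {
        "number_of_shards": 5,
        "number_of_replicas": 1
    }
}'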
Deploying the ELK log analysis system
Hosts used:
- ES server: node1, software: Elasticsearch, Kibana, 192.168.118.11
- ES server: node2, software: Elasticsearch, 192.168.118.22
- Logstash server: node3, software: Logstash, Apache, 192.168.118.33
Configuring the ES environment
- Setting up the JDK environment
node1:
[root@node1 opt]# vim /etc/hosts
192.168.118.11 node1
192.168.118.22 node2
[root@node1 opt]# tar zxvf jdk-8u91-linux-x64.tar.gz -C /usr/local
[root@node1 opt]# mv /usr/local/jdk1.8.0_91 /usr/local/jdk    #rename the extracted directory so it matches JAVA_HOME below
[root@node1 opt]# vim /etc/profile    #edit the profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
[root@node1 ~]# source /etc/profile
node2:
[root@node2 opt]# vim /etc/hosts
192.168.118.11 node1
192.168.118.22 node2
[root@node2 opt]# tar zxvf jdk-8u91-linux-x64.tar.gz -C /usr/local
[root@node2 opt]# mv /usr/local/jdk1.8.0_91 /usr/local/jdk    #rename the extracted directory so it matches JAVA_HOME below
[root@node2 opt]# vim /etc/profile    #edit the profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
[root@node2 opt]# source /etc/profile
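On both nodes, a quick check confirms the shell now resolves the JDK referenced by JAVA_HOME (this assumes the extracted jdk1.8.0_91 directory was renamed to /usr/local/jdk as above):
java -version    #should report version "1.8.0_91"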
- Deploying Elasticsearch (node1 and node2 are set up identically; on node2 only the node name changes)
node1:
[root@node1 opt]# rpm -ivh elasticsearch-5.5.0.rpm
[root@node1 opt]# systemctl daemon-reload    #reload the systemd unit files
[root@node1 opt]# systemctl enable elasticsearch.service    #enable elasticsearch to start at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node1 opt]# cd /etc/elasticsearch/
[root@node1 elasticsearch]# cp elasticsearch.yml elasticsearch.yml.bak
[root@node1 elasticsearch]# vim elasticsearch.yml    #edit the configuration file
[root@node1 elasticsearch]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml    #confirm the changes took effect
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node1 elasticsearch]# mkdir -p /data/elk_data    #create the data directory
[root@node1 elasticsearch]# chown elasticsearch:elasticsearch /data/elk_data/    #give ownership to the elasticsearch user and group
[root@node1 elasticsearch]# ll -d /data/elk_data/
drwxr-xr-x. 2 elasticsearch elasticsearch 6 Aug 14 10:55 /data/elk_data/
[root@node1 elasticsearch]# systemctl start elasticsearch.service
[root@node1 elasticsearch]# netstat -antp | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      81615/java
- View the node information in a browser
- Check the cluster health: both nodes are running normally
- Check the cluster state information
- Install supporting tools so the cluster can be viewed conveniently in a browser
node1 and node2 are configured the same way:
##Install the node dependency packages
[root@node1 opt]# tar xzvf node-v8.2.1.tar.gz
[root@node1 opt]# yum -y install gcc gcc-c++ make
[root@node1 opt]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make -j3
[root@node1 node-v8.2.1]# make install
##Install phantomjs (the headless browser required by elasticsearch-head)
[root@node1 node-v8.2.1]# cd /usr/local/src/    #download the package to this directory, then unpack it
[root@node1 src]# tar xjvf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 src]# cd phantomjs-2.1.1-linux-x86_64/bin
[root@node1 bin]# cp phantomjs /usr/local/bin
##Install elasticsearch-head
[root@node1 bin]# cd /usr/local/src
[root@node1 src]# tar xzvf elasticsearch-head.tar.gz
[root@node1 src]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install
##Edit the main configuration file
[root@node1 elasticsearch-head]# vim /etc/elasticsearch/elasticsearch.yml
#append at the end of the file
http.cors.enabled: true    #enable cross-origin access support (defaults to false)
http.cors.allow-origin: "*"    #domains allowed for cross-origin access
[root@node1 elasticsearch-head]# systemctl restart elasticsearch.service    #restart the service
[root@node1 elasticsearch-head]# netstat -antp | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      128499/java
##Start the elasticsearch-head server
[root@node1 elasticsearch-head]# npm run start &    #run head in the background
[1] 128588
[root@node1 elasticsearch-head]# netstat -antp | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      128598/grunt
- View elasticsearch-head in a browser
- Create an index on node1
[root@node1 elasticsearch-head]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
"_index" : "index-demo",
"_type" : "test",
"_id" : "1",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 2,
"failed" : 0
},
"created" : true
}
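As a quick verification from the command line, the document just created can be read back with a GET against the same index, type, and id:
curl -XGET 'localhost:9200/index-demo/test/1?pretty'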
- Refresh the browser to see the new index
- View the index information
Deploying the Apache server and installing Logstash
[root@appche ~]# yum install -y httpd
[root@appche ~]# systemctl start httpd
##Set up the Java environment
[root@appche opt]# tar zxvf jdk-8u91-linux-x64.tar.gz -C /usr/local
[root@appche opt]# cd /usr/local
[root@appche local]# ls
bin  etc  games  include  jdk1.8.0_91  lib  lib64  libexec  sbin  share  src
[root@appche local]# mv jdk1.8.0_91/ jdk
[root@appche local]# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
[root@appche local]# source /etc/profile
[root@appche local]# cd /opt
[root@appche opt]# rpm -ivh logstash-5.5.1.rpm
[root@appche opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
[root@appche opt]# systemctl start logstash.service
[root@appche opt]# systemctl enable logstash.service
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
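As a sanity check that the symlinked binary is on the PATH and matches the installed package, the version can be printed (the output should mention 5.5.1):
logstash --version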
##Test standard input and output
[root@appche ~]# logstash -e 'input { stdin{} } output { stdout{} }'
…………………………
www.baidu.com    #type a test URL
2021-08-14T12:21:46.662Z appche www.baidu.com
##Use the rubydebug codec to show detailed output
[root@appche ~]# logstash -e 'input { stdin{} } output { stdout{ codec=>rubydebug} }'
…………………………
www.baidu.com    #type the same test URL
{
"@timestamp" => 2021-08-14T12:24:31.885Z,
"@version" => "1",
"host" => "appche",
"message" => "www.baidu.com"
}
##Use Logstash to write events into the ES cluster
[root@appche ~]# logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.118.11:9200"] } }'
………………………………
20:28:12.680 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
www.baidu.com    #test input
www.google.com.cn    #test input
- Refresh the browser and the new logstash data will appear
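The same check can be done from the command line with the cat-indices API; a logstash-YYYY.MM.dd index should appear in the listing:
curl 'http://192.168.118.11:9200/_cat/indices?v'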
- On the Apache server, configure Logstash to collect the system log
[root@appche ~]# chmod o+r /var/log/messages
[root@appche ~]# cd /etc/logstash/conf.d/
[root@appche conf.d]# vim system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.118.11:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@appche conf.d]# systemctl restart logstash.service    #restart logstash
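Optionally, the pipeline file can be syntax-checked before relying on it; a sketch, assuming the stock RPM layout with settings under /etc/logstash:
logstash -f /etc/logstash/conf.d/system.conf --path.settings /etc/logstash --config.test_and_exit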
- View the result in the browser
Deploying Kibana
- Here Kibana is installed directly on the ES server (node1)
[root@node1 opt]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 opt]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak
[root@node1 kibana]# vim kibana.yml    #edit the configuration file
[root@node1 kibana]# systemctl start kibana.service    #start Kibana
[root@node1 kibana]# systemctl enable kibana.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
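The kibana.yml changes themselves are not shown above; for a setup like this one they typically amount to settings along the following lines (the values are assumptions matched to this lab's addresses, not a copy of the original file):
server.port: 5601    #port Kibana listens on
server.host: "0.0.0.0"    #listen on all interfaces
elasticsearch.url: "http://192.168.118.11:9200"    #the ES instance Kibana queries
kibana.index: ".kibana"    #index Kibana uses to store its own state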
- View Kibana in a browser
- Enter the index pattern in Kibana to browse the logs