
Big Data - Flume Collection Case: Agent Cascading

2.2. Collection Cases

2.2.5. Agent Cascading


Analysis

  1. The first agent collects data from a local file and sends it over the network to the second agent.
  2. The second agent receives the data sent by the first agent and saves it to HDFS (see the sketch after this list).
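
In other words, events take the following path; the components match the configurations developed in Steps 2 and 4:

node02: exec source (tail -F access_log) -> memory channel -> avro sink
                                                                 |
                                                      (network, port 4141)
                                                                 |
node03: avro source -> memory channel -> hdfs sink -> HDFS (node01:8020)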

Step 1: Install Flume on Node02

Copy the extracted flume directory from the node03 machine to node02:

cd /export/servers 
scp -r apache-flume-1.8.0-bin/ node02:$PWD
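
To confirm the copy succeeded, you can run Flume's built-in version command on node02:

cd /export/servers/apache-flume-1.8.0-bin
bin/flume-ng version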
           

Step 2: Configure Flume on Node02

Configure flume on the node02 machine:

cd /export/servers/apache-flume-1.8.0-bin/conf
vim tail-avro-avro-logger.conf
           
##################
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /export/servers/taillogs/access_log
# Describe the sink
## The avro sink is a data sender: it forwards events to the avro source on node03
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.174.120
a1.sinks.k1.port = 4141
a1.sinks.k1.batch-size = 10
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
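
The hostname 192.168.174.120 must match the address that node03's avro source binds to in Step 4. After the node03 agent is started, you can check from node02 that the port is reachable, for example with nc (assuming nc is installed on node02):

nc -z 192.168.174.120 4141 && echo "avro port 4141 reachable"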
           

Step 3: Develop a Script to Write Data into the File

Simply copy the script and data from node03 to node02 by running the following command on the node03 machine:

cd /export/servers 
scp -r shells/ taillogs/ node02:$PWD
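
The contents of tail-file.sh are not shown in this section; a minimal sketch of what such a data generator might look like (the 0.5-second interval and the use of date output as sample data are assumptions) is:

#!/bin/bash
# Append a timestamped line to the log file every 0.5 seconds,
# so the exec source (tail -F) on node02 has data to pick up.
while true; do
  date >> /export/servers/taillogs/access_log
  sleep 0.5
done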
           

Step 4: Flume Configuration File on Node03

Develop the flume configuration file on the node03 machine:

cd /export/servers/apache-flume-1.8.0-bin/conf 
vim avro-hdfs.conf
           
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
## The avro source is a receiver service: it listens for events sent by node02
a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.174.120
a1.sources.r1.port = 4141
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://node01:8020/av/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 20
a1.sinks.k1.hdfs.rollCount = 5
a1.sinks.k1.hdfs.batchSize = 1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# File type of the generated files; the default is SequenceFile, use DataStream for plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
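
Once both agents are running and data is flowing, you can check that files are landing in HDFS; the /av path and the %y-%m-%d/%H%M pattern come from the configuration above, so the exact subdirectory names depend on when the agent runs:

hdfs dfs -ls -R /av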
           

Step 5: Start in Order

Start the receiving agent on node03 first, so its avro source is listening before node02 connects:

cd /export/servers/apache-flume-1.8.0-bin
bin/flume-ng agent -c conf -f conf/avro-hdfs.conf -n a1
           
Then start the sending agent on node02:

cd /export/servers/apache-flume-1.8.0-bin
bin/flume-ng agent -c conf -f conf/tail-avro-avro-logger.conf -n a1
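
Either start command can optionally include Flume's standard console-logging option to make troubleshooting easier, for example:

bin/flume-ng agent -c conf -f conf/tail-avro-avro-logger.conf -n a1 -Dflume.root.logger=INFO,console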
           
Finally, run the data-writing script on node02, where it was copied in Step 3:

cd /export/servers/shells
sh tail-file.sh