Contents
What are serialization and deserialization?
Common data serialization types in Hadoop
Implementing the serialization interface (Writable) in a custom bean
Serialization case study
Custom class: FlowBean
Mapper class
Reducer class
Driver class
What are serialization and deserialization?
Serialization: converting objects in memory into a byte sequence, so that they can be persisted to disk or transmitted over the network.
Deserialization: converting a received byte sequence, or data persisted on disk, back into objects in memory.
Hadoop runs on clusters, and large amounts of data must be transferred between cluster nodes, so Hadoop needs an answer to the question: how do we move data from the memory of machine A to machine B? Java's built-in serialization framework (Serializable) could be used, but Java serialization attaches a lot of extra information (such as class metadata), which makes it heavyweight for network transfer. Hadoop therefore has its own serialization mechanism: Writable.
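To make the mechanism concrete, here is a minimal sketch (not part of the original example) of a Writable round trip: write() serializes the object to a DataOutput, and readFields() restores it from a DataInput.

import org.apache.hadoop.io.LongWritable;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WritableRoundTrip {
    public static void main(String[] args) throws IOException {
        // Serialize: write() produces a compact byte sequence
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new LongWritable(2070L).write(new DataOutputStream(bytes));

        // A LongWritable is exactly 8 bytes -- no class metadata, unlike Java's Serializable
        System.out.println("serialized size: " + bytes.size());

        // Deserialize: readFields() fills an existing, reusable object
        LongWritable restored = new LongWritable();
        restored.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println("restored value: " + restored.get());
    }
}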
Common data serialization types in Hadoop
Table 4-1. Common data types and their corresponding Hadoop serialization types

Java type | Hadoop Writable type
--------- | --------------------
boolean | BooleanWritable
byte | ByteWritable
int | IntWritable
float | FloatWritable
long | LongWritable
double | DoubleWritable
String | Text
Map | MapWritable
Array | ArrayWritable
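These Writable types are mutable wrappers around the corresponding Java values. A minimal standalone sketch of how they are used (illustrative, outside of any job):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WritableTypesDemo {
    public static void main(String[] args) {
        // Writable wrappers are mutable, so one instance can be reused across many records
        IntWritable count = new IntWritable();
        count.set(42);              // wrap a Java int
        int n = count.get();        // unwrap it again

        Text phone = new Text("13560436666");  // Text is the Writable counterpart of String
        System.out.println(phone.toString() + " -> " + n);
    }
}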
Implementing the serialization interface (Writable) in a custom bean
In enterprise development, the basic serialization types often cannot satisfy every requirement. For example, to pass a bean object between stages inside the Hadoop framework, that object must implement the serialization interface. Making a bean serializable takes the following seven steps.
(1) The class must implement the Writable interface.
(2) Deserialization reflectively invokes the empty constructor, so the class must provide a no-argument constructor:
public FlowBean() {
super();
}
(3) Override the serialization method:
@Override
public void write(DataOutput out) throws IOException {
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
(4) Override the deserialization method:
@Override
public void readFields(DataInput in) throws IOException {
upFlow = in.readLong();
downFlow = in.readLong();
sumFlow = in.readLong();
}
(5) Note that the deserialization order must be exactly the same as the serialization order.
(6) To make the results readable in the output file, override toString(); separating fields with "\t" makes downstream processing easier.
(7) If the custom bean is to be transmitted as a key, it must also be comparable, because the Shuffle phase of the MapReduce framework requires that keys can be sorted; in practice this means implementing WritableComparable, as in the sketch below.
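A minimal sketch of step (7), using a hypothetical key class FlowBeanKey (illustrative only, not part of the case study below) that sorts descending by total traffic:

import org.apache.hadoop.io.WritableComparable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical example: a bean used as a MapReduce key must be sortable,
// so it implements WritableComparable instead of plain Writable.
public class FlowBeanKey implements WritableComparable<FlowBeanKey> {
    private long sumFlow;

    public FlowBeanKey() {
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        sumFlow = in.readLong();
    }

    @Override
    public int compareTo(FlowBeanKey o) {
        // Descending by total traffic, so the heaviest consumer sorts first
        return Long.compare(o.sumFlow, this.sumFlow);
    }
}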
Serialization case study
Requirement: compute the total upstream traffic, total downstream traffic, and overall total traffic consumed by each phone number. Each input record contains: a record id, a phone number, an IP address, a visited domain (may be absent), upstream traffic, downstream traffic, and a network status code. The data is as follows:
1 13736230513 192.196.100.1 www.isea.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.isea.com 1527 2106 200
6 84188413 192.168.100.3 www.isea.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.isea.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200
The expected output is one line per phone number, in the format (phone number, upstream traffic, downstream traffic, total traffic), for example:
13560436666 1116 954 2070
Design approach: the Mapper parses each line, emits the phone number as the key and a FlowBean wrapping the upstream and downstream traffic as the value; the Reducer then sums the traffic for each phone number and writes out the totals.
Code implementation:
Custom class: FlowBean
package com.isea.flow;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class FlowBean implements Writable {
private long upFlow;
private long downFlow;
private long sumFlow;
public FlowBean(long upFlow, long downFlow) {
this.upFlow = upFlow;
this.downFlow = downFlow;
this.sumFlow = upFlow + downFlow;
}
// No-argument constructor, required for deserialization (invoked via reflection)
public FlowBean() {
}
public long getUpFlow() {
return upFlow;
}
public void setUpFlow(long upFlow) {
this.upFlow = upFlow;
}
public long getDownFlow() {
return downFlow;
}
public void setDownFlow(long downFlow) {
this.downFlow = downFlow;
}
public long getSumFlow() {
return sumFlow;
}
public void setSumFlow(long sumFlow) {
this.sumFlow = sumFlow;
}
public void set(long upFlow, long downFlow) {
this.upFlow = upFlow;
this.downFlow = downFlow;
sumFlow = upFlow + downFlow;
}
@Override
public String toString() {
return upFlow + "\t" + downFlow + "\t" + sumFlow ;
}
// Serialization method: fields are written in a fixed order
@Override
public void write(DataOutput out) throws IOException {
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
// Deserialization method: fields are read in exactly the order they were written
@Override
public void readFields(DataInput in) throws IOException {
this.upFlow = in.readLong();
this.downFlow = in.readLong();
this.sumFlow = in.readLong();
}
}
Mapper class
package com.isea.flow;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
public class FlowMapper extends Mapper<LongWritable, Text,Text,FlowBean> {
// Output key/value objects, created once and reused for every record
private Text phone = new Text();
private FlowBean flowBean = new FlowBean();
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
// 1. Read one line
String line = value.toString();
// 2. Split the line on "\t"
String[] field = line.split("\t");
// 3. Populate the output key and value
phone.set(field[1]);
long upFlow = Long.parseLong(field[field.length - 3]);
long downFlow = Long.parseLong(field[field.length - 2]);
flowBean.set(upFlow, downFlow); // use set() so sumFlow also stays consistent in the map output
// 4. Emit (phone, flowBean) to the Reducer
context.write(phone,flowBean);
}
}
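A note on the design choice of reusing phone and flowBean above: Hadoop allows the same output key and value objects to be reused for every record because context.write() serializes their current contents immediately. The framework does the same on the Reducer side, recycling a single FlowBean instance while iterating over values, so a value must be copied if it needs to be kept beyond one loop iteration.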
Reducer class
package com.isea.flow;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class FlowReducer extends Reducer<Text,FlowBean, Text,FlowBean> {
private FlowBean resultFlowBean = new FlowBean();
@Override
protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
long sumUp = 0;
long sumDown = 0;
// 1. Accumulate the upstream and downstream traffic for this phone number
for (FlowBean value : values) {
sumDown += value.getDownFlow();
sumUp += value.getUpFlow();
}
resultFlowBean.set(sumUp,sumDown);
// 2. Emit (phone, totals)
context.write(key,resultFlowBean);
}
}
Driver class
package com.isea.flow;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class FlowSumDriver {
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
args = new String[]{"G:/input","G:/output2"}; // hard-coded local paths for testing
// 1. Get a Job instance
Configuration configuration = new Configuration();
Job job = Job.getInstance(configuration);
// 2. Set the jar by locating the driver class
job.setJarByClass(FlowSumDriver.class);
// 3. Wire up the Mapper and Reducer classes
job.setMapperClass(FlowMapper.class);
job.setReducerClass(FlowReducer.class);
// 4. Set the Mapper's output key and value types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(FlowBean.class);
// 5. Set the final output key and value types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(FlowBean.class);
// 6. Set the input and output paths
FileInputFormat.setInputPaths(job,new Path(args[0]));
FileOutputFormat.setOutputPath(job,new Path(args[1]));
// 7. Submit the job and wait for it to finish
boolean result = job.waitForCompletion(true);
System.exit(result ? 0 : 1);
}
}
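Note: the paths are hard-coded above for a local run; on a real cluster they would normally be taken from the command-line arguments instead. Also, FileOutputFormat requires that the output directory (G:/output2 here) does not already exist when the job starts; otherwise the job fails with a FileAlreadyExistsException.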
The results after running the job are as follows:
13470253144 180 180 360
13509468723 7335 110349 117684
13560439638 918 4938 5856
13568436656 3597 25635 29232
13590439668 1116 954 2070
13630577991 6960 690 7650
13682846555 1938 2910 4848
13729199489 240 0 240
13736230513 2481 24681 27162
13768778790 120 120 240
13846544121 264 0 264
13956435636 132 1512 1644
13966251146 240 0 240
13975057813 11058 48243 59301
13992314666 3008 3720 6728
15043685818 3659 3538 7197
15910133277 3156 2936 6092
15959002129 1938 180 2118
18271575951 1527 2106 3633
18390173782 9531 2412 11943
84188413 4116 1432 5548