In many applications we inevitably have to interact with a database, and the same goes for Hadoop: sometimes we want to store analysis results in a database, or read records out of a database and write them to HDFS. That said, driving a database directly from MapReduce is fairly rare in real-world development; we usually use Sqoop to import and export the data and Hive to analyze the datasets. In most cases, accessing a relational database directly from Hadoop puts heavy load on it, especially when the database runs on a single machine, where things can get quite bad; with a clustered database the pressure is somewhat lighter.
So today Sanxian will look at how to read from and write to a database directly with Hadoop 1.2.0 MapReduce. The Hadoop API provides two classes for this, DBOutputFormat and DBInputFormat. Besides those, we also need a JavaBean-like entity class that maps to one row of the table; it implements the Writable and DBWritable interfaces and overrides their four methods so that each field of a record can be read and written.
First, let's see how to use MR to read data from the database and write it to HDFS.
The source table, person, has three columns: id, name, and age. (Table screenshot omitted.)
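Since the screenshot is gone, here is a rough sketch of what the table presumably looks like, expressed as plain JDBC setup code; the column types are my own guesses inferred from the entity class below, not taken from the original post.
<pre name="code" class="java">package com.qin.operadb;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/** Hedged sketch: recreate the assumed person table over JDBC.
 *  The column types are guesses based on the entity class, not
 *  taken from the original screenshot. */
public class CreatePersonTable {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://192.168.211.36:3306/test", "root", "qin");
        Statement st = con.createStatement();
        st.execute("CREATE TABLE IF NOT EXISTS person ("
                + "id INT PRIMARY KEY, "
                + "name VARCHAR(50), "
                + "age INT)");
        st.close();
        con.close();
    }
}
</pre>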
The entity class:
<pre name="code" class="java">package com.qin.operadb;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
/***
 * Entity class wrapping one record
 * of the person table.
 * **/
public class PersonRecoder implements Writable, DBWritable {

    public int id;      // maps to the id column
    public String name; // maps to the name column
    public int age;     // maps to the age column

    @Override
    public void readFields(ResultSet result) throws SQLException {
        // read one row from the JDBC ResultSet
        this.id = result.getInt(1);
        this.name = result.getString(2);
        this.age = result.getInt(3);
    }

    @Override
    public void write(PreparedStatement stmt) throws SQLException {
        // bind the fields to the generated INSERT statement
        stmt.setInt(1, id);
        stmt.setString(2, name);
        stmt.setInt(3, age);
    }

    @Override
    public void readFields(DataInput arg0) throws IOException {
        // deserialize the fields from Hadoop's wire format
        this.id = arg0.readInt();
        this.name = Text.readString(arg0);
        this.age = arg0.readInt();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // serialize the fields to Hadoop's wire format
        out.writeInt(id);
        Text.writeString(out, this.name);
        out.writeInt(this.age);
    }

    @Override
    public String toString() {
        return "id: " + id + " age: " + age + " name: " + name;
    }
}
</pre>
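As a quick sanity check of the Writable half of the contract, here is a small test sketch of mine (not from the original post; the sample values are arbitrary): a record serialized with write(DataOutput) should come back unchanged through readFields(DataInput).
<pre name="code" class="java">package com.qin.operadb;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

/** Hedged sketch: round-trip a PersonRecoder through the Writable
 *  interface to verify write() and readFields() stay in sync. */
public class PersonRecoderRoundTrip {
    public static void main(String[] args) throws Exception {
        PersonRecoder in = new PersonRecoder();
        in.id = 1;        // sample values, not from the post
        in.name = "qin";
        in.age = 25;

        // serialize to an in-memory buffer
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        in.write(new DataOutputStream(bytes));

        // deserialize into a fresh instance
        PersonRecoder out = new PersonRecoder();
        out.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(out); // id: 1 age: 25 name: qin
    }
}
</pre>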
The MR driver class; note that this is a map-only job:
<pre name="code" class="java">package com.qin.operadb;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class ReadMapDB {

    /**
     * Mapper that reads each database record.
     * **/
    private static class DBMap extends Mapper<LongWritable, PersonRecoder, LongWritable, Text> {

        @Override
        protected void map(LongWritable key, PersonRecoder value, Context context)
                throws IOException, InterruptedException {
            context.write(new LongWritable(value.id), new Text(value.toString()));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(ReadMapDB.class);
        // Configuration conf=new Configuration();
        // conf.set("mapred.job.tracker","192.168.75.130:9001");
        // conf.setJar("tt.jar");
        // Note: this call must come first, to initialize the connection
        // info before the Job is created; otherwise the job fails.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://192.168.211.36:3306/test", "root", "qin");
        /** the columns to read from the person table **/
        String fields[] = new String[]{"id", "name", "age"};
        /** the Job **/
        Job job = new Job(conf, "readDB");
        System.out.println("mode: " + conf.get("mapred.job.tracker"));
        /** configure the DB input: entity class, table name,
         *  WHERE conditions (none here), ORDER BY column, field names **/
        DBInputFormat.setInput(job, PersonRecoder.class, "person", null, "id", fields);
        /** set the input format **/
        job.setInputFormatClass(DBInputFormat.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(DBMap.class);
        String path = "hdfs://192.168.75.130:9000/root/outputdb";
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path(path);
        if (fs.exists(p)) {
            fs.delete(p, true);
            System.out.println("Output path exists; deleted it!");
        }
        FileOutputFormat.setOutputPath(job, p);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
</pre>
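For reference, the third and fourth arguments of setInput are an optional WHERE clause and an ORDER BY column. A hedged variant (the age condition is an invented example, not in the original post) that reads only part of the table would be a drop-in replacement for the setInput call in main above:
<pre name="code" class="java">// drop-in replacement for the setInput call in main above; reads only
// rows with age >= 18, ordered by id (the condition is an invented example)
DBInputFormat.setInput(job, PersonRecoder.class, "person",
        "age >= 18", "id", fields);
</pre>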
The dataset as written to the HDFS output directory. (Output screenshot omitted.)
Reading is fairly straightforward. Two things deserve attention. First, a copy of the JDBC driver jar must be available on every node; one way to ship it with the job is sketched below. Second, in the main method the DBConfiguration call must be written before the Job is created, and the connection information must be complete. Those two points are the key to connecting to the database and reading it correctly.
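Instead of copying the driver jar to every node by hand, one common alternative is to ship it with the job through the distributed cache, after first uploading it to HDFS with something like hadoop fs -put. This is a sketch under my own assumptions; the HDFS path and jar version are placeholders.
<pre name="code" class="java">package com.qin.operadb;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

/** Hedged sketch: add the JDBC driver jar (path is a placeholder)
 *  to the task classpath via the distributed cache. Call this on
 *  the JobConf before the Job is created. */
public class ShipDriverJar {
    public static void addDriver(Configuration conf) throws Exception {
        DistributedCache.addFileToClassPath(
                new Path("/lib/mysql-connector-java-5.1.26.jar"), conf);
    }
}
</pre>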
Now let's look at the other direction: using MR to write the results of an analysis into the database. The test Sanxian runs in this post is a simple WordCount. The results go into a wordresult table with two columns, word and count. (Table screenshot omitted.)
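As with the person table, here is a hedged guess at its DDL; the column types are assumptions, and st is the java.sql.Statement from the CreatePersonTable sketch above.
<pre name="code" class="java">// Hedged sketch: the assumed wordresult table; column types are
// guesses, st is the java.sql.Statement from the earlier sketch.
st.execute("CREATE TABLE IF NOT EXISTS wordresult ("
        + "word VARCHAR(100), "
        + "count INT)");
</pre>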
The entity class:
<pre name="code" class="java">package com.qin.operadb;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
public class WordRecoder implements Writable, DBWritable {

    public String word; // maps to the word column
    public int count;   // maps to the count column

    @Override
    public void readFields(ResultSet rs) throws SQLException {
        this.word = rs.getString(1);
        this.count = rs.getInt(2);
    }

    @Override
    public void write(PreparedStatement ps) throws SQLException {
        ps.setString(1, this.word);
        ps.setInt(2, this.count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.word = Text.readString(in);
        this.count = in.readInt();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        Text.writeString(out, this.word);
        out.writeInt(count);
    }
}
</pre>
The two input files whose words are counted. (Contents screenshot omitted.)
The core MR class:
<pre name="code" class="java">package com.qin.operadb;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
public class WriteMapDB {

    private static class WMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        /***
         * The Mapper's four type parameters are, in order:
         * the input key type, the input value type,
         * the output key type, and the output value type.
         * The last two correspond to the two arguments
         * of context.write(word, one).
         * **/
        private Text word = new Text();
        private IntWritable one = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            // strip the UTF-8 BOM that Windows Notepad prepends
            if (line.getBytes().length > 0) {
                if ((int) line.charAt(0) == 65279) { // 0xFEFF, the BOM character
                    line = line.substring(1);
                }
            }
            StringTokenizer st = new StringTokenizer(line);
            while (st.hasMoreTokens()) {
                word.set(st.nextToken()); // set the word
                context.write(word, one);
            }
        }
    }

    /***
     * Because the reduce step writes to the database,
     * the reducer cannot double as the combiner;
     * a separate combiner class is needed.
     * ***/
    private static class WCombine extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text text, Iterable<IntWritable> value, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iw : value) {
                sum += iw.get();
            }
            context.write(text, new IntWritable(sum));
        }
    }

    /**
     * The Reduce class.
     * **/
    private static class WReduce extends Reducer<Text, IntWritable, WordRecoder, Text> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable s : values) {
                sum += s.get();
            }
            WordRecoder wr = new WordRecoder();
            wr.word = key.toString();
            wr.count = sum;
            // write the record out to the database
            context.write(wr, new Text());
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WriteMapDB.class);
        // Configuration conf=new Configuration();
        // conf.set("mapred.job.tracker","192.168.75.130:9001");
        // conf.setJar("tt.jar");
        // conf.setNumReduceTasks(1);
        // Note: this call must come first, to initialize the connection
        // info before the Job is created; otherwise the job fails.
        /** configure the database connection **/
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://192.168.211.36:3306/test?characterEncoding=utf-8", "root", "qin");
        String fields[] = new String[]{"word", "count"};
        Job job = new Job(conf, "writeDB");
        System.out.println("run mode: " + conf.get("mapred.job.tracker"));
        /** configure the output: the job, the table name, and the field names **/
        DBOutputFormat.setOutput(job, "wordresult", fields);
        /** the input comes from text files on HDFS **/
        job.setInputFormatClass(TextInputFormat.class);
        /** the output goes to the database **/
        job.setOutputFormatClass(DBOutputFormat.class);
        /***
         * Setting the number of reducers to 1 gives a single global count,
         * but note that in a distributed environment it is usually best
         * not to pin it at 1: the number of reducers is exactly where
         * Hadoop's parallelism shows.
         * **/
        // job.setNumReduceTasks(1);
        /** output key class **/
        job.setOutputKeyClass(Text.class);
        /** output value class **/
        job.setOutputValueClass(IntWritable.class);
        /** the Mapper **/
        job.setMapperClass(WMap.class);
        /** the Combiner **/
        job.setCombinerClass(WCombine.class);
        /** the Reducer **/
        job.setReducerClass(WReduce.class);
        /** the input path on HDFS **/
        FileInputFormat.setInputPaths(job, new Path("hdfs://192.168.75.130:9000/root/input"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
</pre>
運作狀态如下所示:
<pre name="code" class="java">run mode: 192.168.75.130:9001
14/03/26 20:26:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/03/26 20:27:01 INFO input.FileInputFormat: Total input paths to process : 2
14/03/26 20:27:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/03/26 20:27:01 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/26 20:27:01 INFO mapred.JobClient: Running job: job_201403262328_0006
14/03/26 20:27:02 INFO mapred.JobClient: map 0% reduce 0%
14/03/26 20:27:10 INFO mapred.JobClient: map 50% reduce 0%
14/03/26 20:27:11 INFO mapred.JobClient: map 100% reduce 0%
14/03/26 20:27:18 INFO mapred.JobClient: map 100% reduce 33%
14/03/26 20:27:19 INFO mapred.JobClient: map 100% reduce 100%
14/03/26 20:27:20 INFO mapred.JobClient: Job complete: job_201403262328_0006
14/03/26 20:27:20 INFO mapred.JobClient: Counters: 28
14/03/26 20:27:20 INFO mapred.JobClient: Job Counters
14/03/26 20:27:20 INFO mapred.JobClient: Launched reduce tasks=1
14/03/26 20:27:20 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=10345
14/03/26 20:27:20 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/26 20:27:20 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/03/26 20:27:20 INFO mapred.JobClient: Launched map tasks=2
14/03/26 20:27:20 INFO mapred.JobClient: Data-local map tasks=2
14/03/26 20:27:20 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=8911
14/03/26 20:27:20 INFO mapred.JobClient: File Output Format Counters
14/03/26 20:27:20 INFO mapred.JobClient: Bytes Written=0
14/03/26 20:27:20 INFO mapred.JobClient: FileSystemCounters
14/03/26 20:27:20 INFO mapred.JobClient: FILE_BYTES_READ=158
14/03/26 20:27:20 INFO mapred.JobClient: HDFS_BYTES_READ=325
14/03/26 20:27:20 INFO mapred.JobClient: FILE_BYTES_WRITTEN=182065
14/03/26 20:27:20 INFO mapred.JobClient: File Input Format Counters
14/03/26 20:27:20 INFO mapred.JobClient: Bytes Read=107
14/03/26 20:27:20 INFO mapred.JobClient: Map-Reduce Framework
14/03/26 20:27:20 INFO mapred.JobClient: Map output materialized bytes=164
14/03/26 20:27:20 INFO mapred.JobClient: Map input records=6
14/03/26 20:27:20 INFO mapred.JobClient: Reduce shuffle bytes=164
14/03/26 20:27:20 INFO mapred.JobClient: Spilled Records=24
14/03/26 20:27:20 INFO mapred.JobClient: Map output bytes=185
14/03/26 20:27:20 INFO mapred.JobClient: Total committed heap usage (bytes)=336338944
14/03/26 20:27:20 INFO mapred.JobClient: CPU time spent (ms)=2850
14/03/26 20:27:20 INFO mapred.JobClient: Combine input records=20
14/03/26 20:27:20 INFO mapred.JobClient: SPLIT_RAW_BYTES=218
14/03/26 20:27:20 INFO mapred.JobClient: Reduce input records=12
14/03/26 20:27:20 INFO mapred.JobClient: Reduce input groups=8
14/03/26 20:27:20 INFO mapred.JobClient: Combine output records=12
14/03/26 20:27:20 INFO mapred.JobClient: Physical memory (bytes) snapshot=464982016
14/03/26 20:27:20 INFO mapred.JobClient: Reduce output records=8
14/03/26 20:27:20 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2182836224
14/03/26 20:27:20 INFO mapred.JobClient: Map output records=20
</pre>
Finally, we can look in the database and check the statistics. (Result screenshot omitted.)
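Since that screenshot is also lost, here is a hedged sketch of mine (not from the original post) for dumping the table over JDBC to verify the counts yourself:
<pre name="code" class="java">package com.qin.operadb;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Hedged sketch: print every row of the wordresult table. */
public class DumpWordResult {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://192.168.211.36:3306/test?characterEncoding=utf-8",
                "root", "qin");
        Statement st = con.createStatement();
        // backticks keep MySQL from reading count as the COUNT() function
        ResultSet rs = st.executeQuery("SELECT word, `count` FROM wordresult");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getInt(2));
        }
        rs.close();
        st.close();
        con.close();
    }
}
</pre>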