
Installing hadoop-2.3.0-cdh5.1.2

These are my notes on the installation steps; the write-ups online are a bit scattered.

The installation itself is simple; the main thing is to get the preparatory steps right.

Reference: http://blog.csdn.net/qyf_5445/article/details/42679857

Package download links

jdk

http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html  jdk1.7.0_79

hadoop

http://archive.cloudera.com/cdh5/cdh/5/  hadoop-2.3.0-cdh5.1.2

Hostname    IP              Roles
master      192.168.5.30    NameNode    ResourceManager
slave1      192.168.5.31    DataNode    NodeManager
slave2      192.168.5.32    DataNode    NodeManager

The following steps must be performed on all three nodes.

1. cat /etc/hosts should show:

192.168.5.30 master

192.168.5.31 slave1

192.168.5.32 slave2
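A quick sanity check that the hostnames resolve from each node (not in the original notes, just a convenience):

ping -c 1 slave1
ping -c 1 slave2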

2. Create the hadoop user:

useradd hadoop  

passwd hadoop 

3. Set up SSH trust; su hadoop first (this step only needs to be done on master):

ssh-keygen -t rsa

ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave2
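start-dfs.sh also logs in to master itself over SSH, so it helps to copy the key to master as well and confirm each login is now password-less (an extra check beyond the original steps):

ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master
ssh slave1 hostname    # should print "slave1" with no password prompt
ssh slave2 hostname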

4. Create the installation directory:

mkdir -p /data/hadoop

chown -R hadoop:hadoop /data/hadoop/

5. Java environment

cat /etc/profile

export JAVA_HOME=/usr/java/jdk1.7.0_79
export JAVA_BIN=$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/data/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
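Reload the profile and confirm the JDK on PATH is the one installed above (a quick check, assuming the JDK was unpacked to /usr/java/jdk1.7.0_79):

source /etc/profile
java -version    # should report java version "1.7.0_79"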

------------------------------------------------------

The following steps are performed on master.

6. Set JAVA_HOME in /data/hadoop/etc/hadoop/hadoop-env.sh (the file exists after the extraction in step 7). Use an absolute path:

export JAVA_HOME=$JAVA_HOME                  // wrong: do not set it this way
export JAVA_HOME=/usr/java/jdk1.7.0_79       // correct: set it like this

7. Extract the hadoop tarball:

tar xvf hadoop-2.3.0-cdh5.1.2.tar.gz -C /data/hadoop 

Confirm the directory layout is /data/hadoop/etc/hadoop.
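The tarball unpacks into a versioned subdirectory, so its contents need to be moved up one level to get that layout (a sketch, assuming the default archive layout):

mv /data/hadoop/hadoop-2.3.0-cdh5.1.2/* /data/hadoop/
rmdir /data/hadoop/hadoop-2.3.0-cdh5.1.2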

8. On master, in /data/hadoop/etc/hadoop:

vi slaves

slave1

slave2

vi masters

master

9. Edit the following files, adding the corresponding properties inside the <configuration> block.

vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/data/hadoop/tmp</value>
  </property>
</configuration>

vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
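The name, data, and tmp directories referenced in these configs can be created ahead of time so their ownership is correct (a precaution, not one of the original steps; run on the relevant nodes):

mkdir -p /data/hadoop/dfs/name /data/hadoop/dfs/data /data/hadoop/tmp
chown -R hadoop:hadoop /data/hadoop/dfs /data/hadoop/tmp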

vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

vi mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
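Note that the jobhistory ports above are only served once the history server is running; start-yarn.sh does not start it, so it has to be launched separately (the standard Hadoop 2 daemon script):

mr-jobhistory-daemon.sh start historyserver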

10. Copy hadoop to slave1 and slave2

su hadoop

scp -r /data/hadoop/* hadoop@slave1:/data/hadoop/

scp -r /data/hadoop/* hadoop@slave2:/data/hadoop/

11. As the hadoop user (su hadoop), run hadoop namenode -format. Formatting only needs to be done once.

12. As the hadoop user (su hadoop), run start-dfs.sh and then start-yarn.sh.

Each daemon reports where it logs, e.g.: logging to /data/hadoop/logs/hadoop-hadoop-namenode-master.out

13. Verify the installation

jps

http://192.168.5.30:8088/cluster
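If everything came up, jps on master should list NameNode, SecondaryNameNode (with the defaults), and ResourceManager, while each slave shows DataNode and NodeManager. For example, on master (PIDs are illustrative):

$ jps
3120 NameNode
3315 SecondaryNameNode
3480 ResourceManager
3770 Jps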

14. Errors encountered

hadoop-2.3.0 installation error

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform — caused by a mismatch between the bundled native library and the system.

hadoop-2.2.0 installation error

When starting with ./sbin/start-dfs.sh or ./sbin/start-all.sh, warnings like the following appear:

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
....
Java: ssh: Could not resolve hostname Java: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known

This error occurs on 64-bit operating systems: the native library files bundled with the official hadoop download (e.g. lib/native/libhadoop.so.1.0.0) are compiled for 32-bit, so running them on a 64-bit system produces the errors above.
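A quick way to confirm which build of the native library you have is the standard file utility (a diagnostic suggestion, not from the original post):

file /data/hadoop/lib/native/libhadoop.so.1.0.0
# "ELF 32-bit LSB shared object" is the problematic 32-bit build;
# an "ELF 64-bit" build loads without the warning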

One fix is to recompile hadoop on the 64-bit system; the other is to add the following two lines to hadoop-env.sh and yarn-env.sh:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native  

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

15. Hadoop port reference
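Summarizing the ports configured above (50070 is the stock NameNode web UI default in this version, not set explicitly in the configs):

9000     fs.defaultFS                                    NameNode RPC
8030     yarn.resourcemanager.scheduler.address          ResourceManager scheduler
8031     yarn.resourcemanager.resource-tracker.address   ResourceManager tracker
8032     yarn.resourcemanager.address                    ResourceManager RPC
8033     yarn.resourcemanager.admin.address              ResourceManager admin
8088     yarn.resourcemanager.webapp.address             YARN web UI
10020    mapreduce.jobhistory.address                    JobHistory RPC
19888    mapreduce.jobhistory.webapp.address             JobHistory web UI
50070    dfs.namenode.http-address                       NameNode web UI (default)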

16. Use ss -an to check which ports are actually listening.
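For example, to confirm the NameNode RPC and YARN web UI ports are up on master (illustrative):

ss -an | grep -E '9000|8088'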

Reproduced from liqius's 51CTO blog. Original: http://blog.51cto.com/szgb17/1691814. For reprinting, please contact the original author.