
Hadoop Cluster Setup (Part 2)

The basic environment setup was completed in the previous post; see: Hadoop Cluster Setup (Part 1).

Next, install Hadoop.

1. Extract the archive

tar -zxvf hadoop-2.6.0-5.7.0.tar.gz

cd hadoop-2.6.0-5.7.0/etc/hadoop

ls -l
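The post does not show where the archive is extracted, but HADOOP_HOME is later set to /home/hadoop/app/hadoop-2.6.0-5.7.0, so extracting into ~/app keeps the two consistent; a sketch under that assumption:

tar -zxvf hadoop-2.6.0-5.7.0.tar.gz -C ~/app/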


2. Configuration

First, configure core-site.xml:

<configuration>
    <!-- default filesystem: the HDFS NameNode address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/Users/fengzhongdedacong/data/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop000:2181,hadoop001:2181</value>
    </property>
</configuration>
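It does no harm to create the directory behind hadoop.tmp.dir up front; a minimal sketch, assuming the path from the config above (note that it differs from the /home/hadoop/... paths used in the rest of this post, so adjust it to your own layout):

mkdir -p /Users/fengzhongdedacong/data/hadoop/tmp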
           

Configure hadoop-env.sh:

export JAVA_HOME=/home/hadoop/app/jdk1.8.0_91
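A quick sanity check that the JDK path exists before going further, assuming the jdk1.8.0_91 location above:

/home/hadoop/app/jdk1.8.0_91/bin/java -version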
           

Configure yarn-env.sh:

export JAVA_HOME=/home/hadoop/app/jdk1.8.0_91
           

Configure hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/data/hdfs/name</value>
    <final>true</final>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/data/hdfs/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop001:9001</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
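Creating the NameNode and DataNode directories up front avoids permission surprises later; a minimal sketch using the paths from the config above:

mkdir -p /home/hadoop/data/hdfs/name /home/hadoop/data/hdfs/data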
           

Configure mapred-site.xml:

<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
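In Hadoop 2.x tarballs this file usually only ships as mapred-site.xml.template, so it may need to be created first (skip this if mapred-site.xml already exists):

# run inside hadoop-2.6.0-5.7.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml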
           

Configure yarn-site.xml:

<property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop001:18040</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop001:18030</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop001:18088</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop001:18025</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop001:18141</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
           

Configure the slaves file:

hadoop000
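The slaves file lists the worker hosts, one per line; the start scripts launch a DataNode and a NodeManager on each of them. If hadoop001 were also meant to act as a worker (an assumption, not something this post states), the file would simply get a second line:

hadoop000
hadoop001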
           

Configure environment variables: vim /etc/profile

export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-5.7.0
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
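Reload the profile and confirm the hadoop command resolves:

source /etc/profile
hadoop version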
           

Distribute Hadoop to the other machines:

scp -r hadoop-2.6.0-5.7.0 [email protected]:~/app/
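The environment variables from /etc/profile also need to be set on every node that receives the distribution; a minimal sketch, assuming the remote login is hadoop@hadoop000 (the exact target is not shown above):

ssh hadoop@hadoop000
vim /etc/profile      # add the same HADOOP_HOME / PATH / HADOOP_CONF_DIR exports
source /etc/profile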
           

3. Initialize Hadoop

hdfs namenode -format
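Formatting is a one-time step, run on the NameNode host (hadoop001 in this setup). If it succeeds, the metadata directory configured in dfs.namenode.name.dir gains a current/ subdirectory with a VERSION file:

ls /home/hadoop/data/hdfs/name/current/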
           

4. Start Hadoop

Option 1:
  start-all.sh
Option 2:
  start-dfs.sh
  start-yarn.sh
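The matching shutdown scripts, for when the cluster needs to be stopped:

stop-all.sh
# or, equivalently:
stop-yarn.sh
stop-dfs.sh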
           

5. Verify

jps
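With this layout, jps on hadoop001 should show NameNode, SecondaryNameNode and ResourceManager, and each host listed in slaves should show DataNode and NodeManager. Two further checks that query the running daemons directly:

hdfs dfsadmin -report   # live DataNodes and capacity
yarn node -list         # registered NodeManagers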


Open the web UI:

http://hadoop001:50070
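Port 50070 is the NameNode web UI; the ResourceManager web UI is on the port set by yarn.resourcemanager.webapp.address above, i.e. http://hadoop001:18088. A quick command-line check, assuming curl is available:

curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:50070
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:18088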

