
Spark from getting started to giving up -- Hadoop cluster installation

1. Prepare the machines

192.168.10.149 hadoop-master
192.168.10.150 hadoop-salve1
192.168.10.151 hadoop-salve2
           

2. Change the hostnames

Log in to each of the three VMs and change its hostname:

vi /etc/hostname

hadoop-master
           

Configure the hosts file

vi /etc/hosts

192.168.10.149 hadoop-master
192.168.10.150 hadoop-salve1
192.168.10.151 hadoop-salve2
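
Once /etc/hosts is in place on every machine, a quick sanity check confirms that all three names resolve. This is a sketch; `check_hosts` is a hypothetical helper, not part of Hadoop:

```shell
# Sketch: verify that the three cluster hostnames from /etc/hosts resolve.
# check_hosts is a hypothetical helper name, not part of Hadoop.
check_hosts() {
  local h
  for h in hadoop-master hadoop-salve1 hadoop-salve2; do
    if getent hosts "$h" > /dev/null; then
      echo "OK:   $h"
    else
      echo "FAIL: $h (add it to /etc/hosts)"
    fi
  done
}
```

Run `check_hosts` on each node before moving on to the SSH setup.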
           

3. Passwordless SSH login

Generate the public/private key pair

  ssh-keygen -t rsa

Append the public key to the authorized keys file. Run the first command on the master, use scp to copy the key over to each slave, then run the final cat on the slave to append it:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
scp ~/.ssh/id_rsa.pub [email protected]:/home/xxx/id_rsa.pub  
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys 
           

Fix the permissions

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys  
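
The copy-and-append steps above can be sketched as a single loop. `ssh-copy-id` (shipped with OpenSSH) handles the append and the permission fixing on the remote side; the function name and user below are illustrative:

```shell
# Sketch: push the master's public key to every slave in one pass.
# distribute_key is a hypothetical helper; ssh-copy-id appends the key
# to the remote ~/.ssh/authorized_keys and sets sane permissions.
distribute_key() {
  local user="$1"; shift
  local host
  for host in "$@"; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "${user}@${host}"
  done
}
# Example (run on hadoop-master after ssh-keygen):
# distribute_key root hadoop-salve1 hadoop-salve2
```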
           

4. Disable SELinux and the firewall

The firewall causes all kinds of communication failures between Hadoop components, so it must be disabled.

Log in as root and run:

1. Disable the firewall:

service iptables stop

Verify: service iptables status

service iptables stop (temporary: only until the next reboot)

chkconfig iptables off (persistent: takes effect across reboots)

2. Disable SELinux

Edit /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled
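
Both sub-steps can be combined into one root-run script. This is a sketch for a CentOS 6-style init system, matching the service/chkconfig commands above; the function name is mine:

```shell
# Sketch: disable the firewall and SELinux on a CentOS 6-style host.
# disable_security is a hypothetical helper; run it as root on every node.
disable_security() {
  service iptables stop              # stop the firewall immediately
  chkconfig iptables off             # keep it off after reboots
  setenforce 0 2>/dev/null || true   # SELinux permissive for the current boot
  # Persist the change so SELinux stays disabled after reboot:
  sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
}
```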

5. Edit the Hadoop configuration

5.1. Edit core-site.xml

<configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://hadoop-master:9000</value>
    </property>
</configuration>
           

5.2. hdfs-site.xml

<configuration>
    <property>
      <name>dfs.name.dir</name>
      <value>/mnt/data/namenode</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/mnt/data/datanode</value>
    </property>
    <property>
      <name>dfs.tmp.dir</name>
      <value>/mnt/data/tmp</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>
    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
</configuration>
           

5.3. mapred-site.xml

<configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
</configuration>
           

5.4. yarn-site.xml

<configuration>
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>hadoop-master</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
</configuration>
           

The other nodes use the same configuration.
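
The simplest way to keep the other nodes identical is to copy the edited files from the master. A sketch using scp; the HADOOP_CONF path is an assumption, adjust it to your actual install location:

```shell
# Sketch: copy the four edited config files from the master to both slaves.
# HADOOP_CONF is an assumed path; point it at your real Hadoop config dir.
HADOOP_CONF="${HADOOP_CONF:-/usr/local/hadoop/etc/hadoop}"
sync_conf() {
  local host f
  for host in hadoop-salve1 hadoop-salve2; do
    for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
      scp "$HADOOP_CONF/$f" "root@${host}:$HADOOP_CONF/$f"
    done
  done
}
# Example (run on hadoop-master):
# sync_conf
```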

6. Format the NameNode

hdfs namenode -format
           

7. Start the Hadoop cluster

Start HDFS first, then YARN (the ResourceManager and NodeManager processes in the jps output below come from start-yarn.sh):

sh start-dfs.sh
sh start-yarn.sh
           

8. Check process status on the master with jps

23425 SecondaryNameNode
22743 Bootstrap
10056 QuorumPeerMain
7562 jar
23850 Jps
23580 ResourceManager
23245 NameNode
           

9. Check process status on the slaves with jps

25315 QuorumPeerMain
31763 Jps
31528 DataNode
31626 NodeManager
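
Beyond jps, a small write/read round trip confirms HDFS is actually serving data. A sketch; `hdfs_smoke_test` is a hypothetical helper and the /smoke path is arbitrary:

```shell
# Sketch: HDFS smoke test -- write a file, read it back.
# hdfs_smoke_test is a hypothetical helper; the /smoke path is arbitrary.
hdfs_smoke_test() {
  hdfs dfsadmin -report | head -n 6        # capacity and live DataNode count
  echo "hello hdfs" > /tmp/hello.txt
  hdfs dfs -mkdir -p /smoke
  hdfs dfs -put -f /tmp/hello.txt /smoke/
  hdfs dfs -cat /smoke/hello.txt           # should print: hello hdfs
}
```

Run it on the master once all daemons are up; a successful cat proves both DataNodes accepted the replicated blocks.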
           
