Building a Fully Distributed Hadoop Cluster on Three Alibaba Cloud Servers
1. Cluster Plan
Role assignment:
![](https://img.laitimes.com/img/__Qf2AjLwojIjJCLyojI0JCLicmbw5yY1MjN3YDNklzN4IzY3cTYjJmY0EmN2UTOmVjZiZGZ58CX5d2bs92Yl1iclB3bsVmdlR2LcNWaw9CXt92Yu4GZjlGbh5yYjV3Lc9CX6MHc0RHaiojIsJye.png)
2. Environment Preparation
Alibaba Cloud environment: CentOS, Hadoop 3.2.2, JDK 1.8
Tools: Xshell and Xftp
Open Xshell:
ssh 477.xx.xx
(public IP)
Log in with the username root and your password.
Click the Xftp file-transfer icon in the Xshell toolbar.
Double-click to go up one directory level.
Enter /usr.
Drag the downloaded Hadoop and JDK archives into the window.
Wait for the transfer to finish.
3. Building the Cluster
1. Passwordless SSH Access
Connect to master:
ssh 477.xx.xx.xxx
(public IP)
Change the hostname:
vim /etc/hostname
Delete the default value and replace it with this node's role name (master).
Disable the firewall:
systemctl stop firewalld.service
//stop the firewall
systemctl disable firewalld.service
//keep the firewall from starting at boot
Configure host mappings (on Alibaba Cloud, each node maps its own hostname to its private IP and the other nodes to their public IPs):
vim /etc/hosts
477.xx.xx.xxx slave1 (public IP)
477.xx.xx.xxx slave2 (public IP)
172.xxx.xx.xx master (private IP)
Generate a key pair:
ssh-keygen
//press Enter three times at the prompts
Distribute the public key:
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
Reboot:
reboot
Connect to slave1:
ssh 477.xx.xx.xxx
vim /etc/hostname
systemctl stop firewalld.service
systemctl disable firewalld.service
vim /etc/hosts
477.xx.xx.xxx master (public IP)
477.xx.xx.xxx slave2 (public IP)
172.xxx.xx.xx slave1 (private IP)
ssh-keygen
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
reboot
Connect to slave2:
ssh 477.xx.xx.xxx
vim /etc/hostname
systemctl stop firewalld.service
systemctl disable firewalld.service
vim /etc/hosts
477.xx.xx.xxx master (public IP)
477.xx.xx.xxx slave1 (public IP)
172.xxx.xx.xx slave2 (private IP)
ssh-keygen
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
reboot
Verify: log in to each of the three servers and ssh to the other two, checking that no password is required.
2. Configuring the JDK
ssh 477.xx.xx.xxx
(public IP)
Extract the JDK and Hadoop archives:
cd /usr
tar -zxvf <archive-name>
Rename the extracted directories:
mv <old-name> <new-name>
Configure the Java environment variables:
vim /etc/profile
export JAVA_HOME=/usr/jdk
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile
//apply the changes
java -version
//check the version
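After editing, the tail of /etc/profile contains the two export lines above. As a convenience not shown in the original article, you can also export HADOOP_HOME so that later commands like hdfs and start-dfs.sh work without their full /usr/hadoop paths; a sketch:

```shell
# Java environment, as in the article
export JAVA_HOME=/usr/jdk
export PATH=$JAVA_HOME/bin:$PATH

# Optional addition (not in the original article): lets you run
# "hdfs" and "start-dfs.sh" without typing /usr/hadoop/bin or /usr/hadoop/sbin
export HADOOP_HOME=/usr/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```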
Log in to slave1:
ssh slave1
cd /usr
tar -zxvf <archive-name>
mv <old-name> <new-name>
vim /etc/profile
source /etc/profile
java -version
Log out of slave1:
exit
Log in to slave2:
ssh slave2
cd /usr
tar -zxvf <archive-name>
mv <old-name> <new-name>
vim /etc/profile
source /etc/profile
//apply the changes
java -version
//check the version
Log out of slave2:
exit
3. Configuring Hadoop
Log in to master:
ssh 477.xx.xx.xxx
(public IP)
cd /usr/hadoop/etc/hadoop
vim hadoop-env.sh
export JAVA_HOME=/usr/jdk
vim mapred-env.sh
export JAVA_HOME=/usr/jdk
vim yarn-env.sh
export JAVA_HOME=/usr/jdk
vim core-site.xml
![image.png](https://ucc.alicdn.com/pic/developer-ecology/105788165c4d498f9726721642044473.png)
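The exact contents are in the screenshot above. Given that port 9000 is opened in the security group later in the article, a minimal core-site.xml for this layout would look roughly like the following sketch (the hadoop.tmp.dir path is an assumption, not taken from the original):

```xml
<configuration>
  <!-- NameNode RPC address; port 9000 matches the security-group rule opened later -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <!-- Base directory for Hadoop temporary files (assumed path) -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
  </property>
</configuration>
```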
vim hdfs-site.xml
![image.png](https://ucc.alicdn.com/pic/developer-ecology/94e6dfc7326d437e9ae79e8cb28b962c.png)
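The exact values are in the screenshot. Note that Hadoop 3.x serves the NameNode web UI on port 9870 by default; since the article later opens and visits port 50070, the file presumably pins the HTTP address to the old port. A sketch under those assumptions (the replication factor of 2 is also an assumption):

```xml
<configuration>
  <!-- Assumed replication factor for a small cluster -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Hadoop 3.x defaults the NameNode web UI to 9870; pinning it to 50070
       matches the port the article opens and visits at the end -->
  <property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:50070</value>
  </property>
</configuration>
```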
vim mapred-site.xml
![image.png](https://ucc.alicdn.com/pic/developer-ecology/51fc7fbfa3904a129d8bff92d2128dc8.png)
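For a fully distributed cluster, mapred-site.xml must at minimum tell MapReduce to run on YARN; the screenshot likely contains something like:

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN rather than locally -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```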
vim yarn-site.xml
![image.png](https://ucc.alicdn.com/pic/developer-ecology/83f3520cbf8048f6af968ecf63430a5c.png)
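Since start-yarn.sh is later executed on slave2, the ResourceManager presumably runs there. A sketch of yarn-site.xml under that assumption:

```xml
<configuration>
  <!-- start-yarn.sh is run on slave2 later in the article,
       so the ResourceManager is assumed to live there -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>slave2</value>
  </property>
  <!-- Shuffle service required by MapReduce on YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```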
vim workers
![image.png](https://ucc.alicdn.com/pic/developer-ecology/fbcb13b9ae774fe79ab963cefb4d3fd2.png)
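The workers file lists the hosts that run DataNode/NodeManager daemons. The exact contents depend on the cluster-plan screenshot, but for this three-node layout it would typically be:

```
slave1
slave2
```

Add master as well only if it is also meant to run a DataNode.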
cd /usr/hadoop/sbin
vim start-dfs.sh
![image.png](https://ucc.alicdn.com/pic/developer-ecology/e14e69fa89254cb5b5277ece14338c5c.png)
vim stop-dfs.sh
![image.png](https://ucc.alicdn.com/pic/developer-ecology/86f14cf8c9584e96baf3bc3aa5336fab.png)
vim start-yarn.sh
![image.png](https://ucc.alicdn.com/pic/developer-ecology/f1bd250f87314e2aa392e509a7a3267a.png)
vim stop-yarn.sh
![image.png](https://ucc.alicdn.com/pic/developer-ecology/6e1aa7389378462d99cd0f758b942207.png)
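The four screenshots above add the user variables that Hadoop 3 requires when its daemons are started as root; without them, start-dfs.sh aborts with an error like "Attempting to operate on hdfs namenode as root". The additions are typically:

```shell
# Added at the top of start-dfs.sh and stop-dfs.sh
HDFS_DATANODE_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

# Added at the top of start-yarn.sh and stop-yarn.sh
YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root
```

Alternatively, these can be set once in hadoop-env.sh instead of patching the four scripts.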
Sync the modified configuration to slave1 and slave2:
scp -r /usr/hadoop/etc/hadoop root@slave1:/usr/hadoop/etc/
scp -r /usr/hadoop/etc/hadoop root@slave2:/usr/hadoop/etc/
scp -r /usr/hadoop/sbin root@slave1:/usr/hadoop/
scp -r /usr/hadoop/sbin root@slave2:/usr/hadoop/
Open ports 9000 and 50070 in master's security group:
![image.png](https://ucc.alicdn.com/pic/developer-ecology/d01f5088fd2845609d41609943d00147.png)
Format the NameNode:
/usr/hadoop/bin/hdfs namenode -format
Start the daemons:
/usr/hadoop/sbin/start-dfs.sh
ssh root@slave2 /usr/hadoop/sbin/start-yarn.sh
Visit master's public IP on port 50070 to check the cluster:
477.xx.xx.xxx:50070