
ZooKeeper, Kafka, and Highly Available Hadoop

1. ZooKeeper Concepts

ZooKeeper is an open-source coordination service for distributed applications.

ZooKeeper is used to guarantee transactional consistency of data across the cluster.

ZooKeeper roles and characteristics:

Leader: accepts proposal requests from all Followers, coordinates and initiates the voting on proposals, and is responsible for internal data exchange with all Followers.

Follower: serves clients directly and takes part in proposal voting, while also exchanging data with the Leader.

Observer: serves clients directly but does not take part in proposal voting, while also exchanging data with the Leader.
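Once the cluster from section 1.1 is running, this coordination can be seen from any client. A minimal sketch, assuming the hostnames and install path used in the lab below (the znode name /demo is only an example):

# create a znode through one server; every other server sees the same data
/usr/local/zookeeper/bin/zkCli.sh -server hadoop-0002:2181 create /demo "hello"
# read it back through a different server to confirm cluster-wide consistency
/usr/local/zookeeper/bin/zkCli.sh -server hadoop-0003:2181 get /demo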


ZooKeeper cluster lab

1.1 Install ZooKeeper

[root@hadoop-0001 ~]# tar -xf hadoop/zookeeper-3.4.13.tar.gz
[root@hadoop-0001 ~]# mv zookeeper-3.4.13 /usr/local/zookeeper
[root@hadoop-0001 conf]# vim zoo.cfg
server.1=hadoop-0002:2888:3888
server.2=hadoop-0003:2888:3888
server.3=hadoop-0004:2888:3888
server.4=hadoop-0001:2888:3888:observer
[root@hadoop-0001 ~]# ansible-playbook 111.yml     # sync the ZooKeeper configuration files to the other nodes
[root@hadoop-0001 ~]# mkdir /tmp/zookeeper         # the dataDir specified in zoo.cfg
[root@hadoop-0001 ~]# ansible node -m shell -a 'mkdir /tmp/zookeeper'
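Only the server.N lines are quoted above. A fuller zoo.cfg would look roughly like the sketch below; the timing values and ports are common defaults rather than lab values, and the dataDir line is why /tmp/zookeeper is created above:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=hadoop-0002:2888:3888
server.2=hadoop-0003:2888:3888
server.3=hadoop-0004:2888:3888
server.4=hadoop-0001:2888:3888:observer

ZooKeeper's observer documentation also calls for peerType=observer in the observer's own copy of zoo.cfg (here, on hadoop-0001).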
           

Create the myid file; the id must match the server.(id) number that corresponds to each hostname in the configuration file.

[root@hadoop-0001 ~]# echo 4 >/tmp/zookeeper/myid
[root@hadoop-0001 ~]# ssh hadoop-0002 'echo 1 >/tmp/zookeeper/myid'
[root@hadoop-0001 ~]# ssh hadoop-0003 'echo 2 >/tmp/zookeeper/myid'
[root@hadoop-0001 ~]# ssh hadoop-0004 'echo 3 >/tmp/zookeeper/myid'
           

Start the service and check its status

[root@hadoop-0001 ~]# /usr/local/zookeeper/bin/zkServer.sh start
[root@hadoop-0001 ~]# ansible node -m shell -a '/usr/local/zookeeper/bin/zkServer.sh start'
[root@hadoop-0001 ~]# ansible node -m shell -a '/usr/local/zookeeper/bin/zkServer.sh status'
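As an optional extra check, ZooKeeper's built-in four-letter commands can be queried over the client port; a small sketch, assuming nc is installed on hadoop-0001:

for h in hadoop-0001 hadoop-0002 hadoop-0003 hadoop-0004; do
    echo -n "$h: "
    echo ruok | nc $h 2181 && echo    # a healthy server answers "imok"
done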
           

2. Kafka

2.1 What is Kafka

Kafka is a distributed, publish-subscribe message queue. Producers publish messages to topics, consumers subscribe to those topics, and the brokers in a Kafka cluster rely on ZooKeeper for coordination, which is why the ZooKeeper cluster above is built first.

2.2 Kafka cluster lab

Use the ZooKeeper cluster to build a Kafka cluster

Create a topic

Simulate a producer publishing messages

Simulate a consumer receiving messages


2.2.1 Install Kafka

[root@hadoop-0001 ~]# tar -xf hadoop/kafka_2.12-2.1.0.tgz
[root@hadoop-0001 ~]# mv kafka_2.12-2.1.0 /usr/local/kafka
[root@hadoop-0001 ~]# vim /usr/local/kafka/config/server.properties
...
broker.id=4    # range 1-255; the number must not be reused by another broker
...
[root@hadoop-0001 ~]# for i in 71 72 73; do rsync -aSH --delete /usr/local/kafka 192.168.1.$i:/usr/local/; done    # copy to the other machines
# edit the configuration on the other machines as well, keeping each broker.id unique
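The excerpt only shows broker.id; in the same server.properties, Kafka also has to be pointed at the ZooKeeper ensemble. A minimal sketch, assuming the lab's ZooKeeper hosts (the property names are standard Kafka settings, the values are assumptions):

broker.id=4                                                          # unique per broker
zookeeper.connect=hadoop-0002:2181,hadoop-0003:2181,hadoop-0004:2181 # ZooKeeper ensemble used for coordination
log.dirs=/tmp/kafka-logs                                             # default data directory; adjust as needed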
           

2.2.2 Start the Kafka cluster and verify it

Start it on hadoop-0002, hadoop-0003, and hadoop-0004.

[root@hadoop-0001 ~]# ansible node -m shell -a '/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties'
[root@hadoop-0001 ~]# jps
[root@hadoop-0001 ~]# /usr/local/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --zookeeper node3:2181 --topic aa    # create a topic
[root@hadoop-0001 ~]# /usr/local/kafka/bin/kafka-console-producer.sh \
--broker-list node2:9092 --topic aa        # simulate a producer and type in some data
[root@hadoop-0001 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server node1:9092 --topic aa   # simulate a consumer; the messages typed above arrive here immediately
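To confirm the topic exists and see which broker holds its partition, the standard kafka-topics.sh options can be used; a small sketch reusing the same ZooKeeper address as the create command above:

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper node3:2181                   # should print "aa"
/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper node3:2181 --topic aa    # shows the partition's leader and replicas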
           

3. Hadoop High Availability


3.1 Stop all services (they were already stopped when the Kafka lab finished)

3.2 Add one more machine, namenode2; together with the existing NameNode it provides high availability

Copy the Hadoop configuration files to it, sync /etc/hosts, and hand out the SSH private key (all deployed with Ansible), as sketched below.
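A minimal sketch of the /etc/hosts and key distribution with Ansible ad-hoc commands; the inventory group name ("all") and the paths are assumptions rather than lab values, and the configuration sync itself is shown in 3.6:

ansible all -m copy -a 'src=/etc/hosts dest=/etc/hosts'                            # identical name resolution on every host
ansible all -m copy -a 'src=/root/.ssh/id_rsa dest=/root/.ssh/id_rsa mode=0600'    # the key later referenced by sshfence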

[root@hadoop-0001 ~]# rm -rf /var/hadoop/*    # delete /var/hadoop/* on every host
[root@hadoop-0001 ~]# ansible node -m shell -a 'rm -rf /var/hadoop/*'
           

3.3 Configure core-site.xml

Note that fs.defaultFS now points at the nameservice ID nsd1911 (defined in hdfs-site.xml below) instead of at a single NameNode host.

[root@hadoop-0001 ~]# vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://nsd1911</value>
        <description>use file system</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/hadoop</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-0002:2181,hadoop-0003:2181,hadoop-0004:2181</value>
    </property>
    <property>
        <name>hadoop.proxyuser.nfsuser.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.nfsuser.hosts</name>
        <value>*</value>
    </property>
</configuration>
           

3.4 Configure hdfs-site.xml

[root@hadoop-0001 ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>nsd1911</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.nsd1911</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.nsd1911.nn1</name>
        <value>hadoop-0001:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.nsd1911.nn2</name>
        <value>namenode2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.nsd1911.nn1</name>
        <value>hadoop-0001:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.nsd1911.nn2</name>
        <value>namenode2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-0002:8485;hadoop-0003:8485;hadoop-0004:8485/nsd1911</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/var/hadoop/journal</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.nsd1911</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
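dfs.ha.fencing.methods=sshfence only works if each NameNode can reach the other over passwordless SSH with the key listed above; a quick check, assuming it is run from hadoop-0001:

ssh -i /root/.ssh/id_rsa namenode2 hostname    # should print "namenode2" without prompting for a password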

           

3.5 Configure yarn-site.xml

[root@hadoop-0001 ~]# vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop-0002:2181,hadoop-0003:2181,hadoop-0004:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-ha</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop-0001</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>namenode2</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

           

Sync the configuration files to every machine (again done with Ansible); a sketch follows.
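A minimal sketch of that sync using Ansible's synchronize module; the group name is an assumption, and any rsync-based copy to all hosts (including namenode2) works just as well:

ansible all -m synchronize -a 'src=/usr/local/hadoop/etc/hadoop/ dest=/usr/local/hadoop/etc/hadoop/'    # push core-site, hdfs-site and yarn-site to every node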

3.6 Verify

確定所有機器上的/var/hadoop/*都删除資料

Initialization

[root@hadoop-0001 hadoop]# ./bin/hdfs zkfc -formatZK    # create the HA znode in ZooKeeper
[root@hadoop-0001 hadoop]# ansible node -m shell -a '/usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode'    # start the JournalNodes
[root@hadoop-0001 hadoop]# ./bin/hdfs namenode -format    # format the first NameNode
[root@hadoop-0001 hadoop]# scp -r /var/hadoop/dfs/ 192.168.1.76:/var/hadoop/    # copy the metadata to namenode2
[root@hadoop-0001 hadoop]# ./bin/hdfs namenode -initializeSharedEdits    # initialize the shared edits on the JournalNodes
[root@hadoop-0001 hadoop]# ansible node -m shell -a '/usr/local/hadoop/sbin/hadoop-daemon.sh stop journalnode'    # stop the JournalNodes; start-dfs.sh starts them again later
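Before copying the metadata to namenode2, it can be worth confirming that the format step actually produced it; a small check, assuming hadoop.tmp.dir=/var/hadoop from core-site.xml:

ls /var/hadoop/dfs/name/current/    # the freshly formatted fsimage and VERSION files should be here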
           

Start the cluster

[root@hadoop-0001 hadoop]# ./sbin/start-dfs.sh     # starts both NameNodes, the JournalNodes, DataNodes and ZKFCs
[root@hadoop-0001 hadoop]# ./sbin/start-yarn.sh    # starts the local ResourceManager (rm1) and the NodeManagers
[root@namenode2 hadoop]# ./sbin/yarn-daemon.sh start resourcemanager    # start-yarn.sh does not start the second RM, so start rm2 by hand on namenode2
           

檢視狀态

[root@hadoop-0001 hadoop]# ./bin/hdfs haadmin -getServiceState nn2
[root@hadoop-0001 hadoop]# ./bin/hdfs haadmin -getServiceState nn1
[root@hadoop-0001 hadoop]# ./bin/yarn rmadmin -getServiceState rm1
[root@hadoop-0001 hadoop]# ./bin/yarn rmadmin -getServiceState rm2
[root@hadoop-0001 hadoop]# ./bin/hdfs dfsadmin -report    # should list three DataNodes
[root@hadoop-0001 hadoop]# ./bin/yarn node -list          # should list three NodeManagers
[root@hadoop-0001 hadoop]# ./bin/hadoop fs -mkdir /input  # access the cluster file system
[root@hadoop-0001 hadoop]# ./bin/hadoop fs -ls /
[root@hadoop-0001 hadoop]# ./sbin/hadoop-daemon.sh stop namenode    # stop the active NameNode; the standby should take over as Active
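After stopping the active NameNode, the role change can be verified and the stopped node brought back as the new standby; a short sketch, assuming nn1 on hadoop-0001 was the one that was stopped:

./bin/hdfs haadmin -getServiceState nn2    # expected to now report "active"
./sbin/hadoop-daemon.sh start namenode     # restart the stopped NameNode; it rejoins as the standby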

           
