
Running HBase 1.2.6 (standalone) with Hadoop 2.6.5 (standalone) on CentOS 7

I. Download Hadoop, version 2.6.5: http://hadoop.apache.org/releases.html

tar -zxvf hadoop-2.6.5.tar.gz

The path used in this article is /root/Downloads/hadoop-2.6.5.

II. Configuring Hadoop

1. Edit /root/Downloads/hadoop-2.6.5/etc/hadoop/hadoop-env.sh and set JAVA_HOME:

export JAVA_HOME=/root/Downloads/jdk1.8.0_172
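Before moving on, it is worth confirming that this path actually points at a JDK (the path is simply where this article's JDK was unpacked; substitute your own):

[root@bogon ~]# /root/Downloads/jdk1.8.0_172/bin/java -version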

2. Edit etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
        <description>HDFS URI</description>
    </property>
</configuration>      
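To confirm the setting is being picked up, hdfs getconf can echo it back (run from the Hadoop root directory; it should print hdfs://localhost:9000):

[root@bogon hadoop-2.6.5]# ./bin/hdfs getconf -confKey fs.defaultFS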

3. Edit etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>      
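Hadoop will normally create these directories itself during formatting and startup, but on a fresh machine creating them up front surfaces any permission problem immediately (the paths match the configuration above):

[root@bogon ~]# mkdir -p /usr/local/hadoop/tmp/dfs/name /usr/local/hadoop/tmp/dfs/data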

4. Format the NameNode (run from the Hadoop root directory):

[root@bogon hadoop-2.6.5]# ./bin/hdfs namenode -format

On success you will see "successfully formatted" and "Exiting with status 0" in the output; "Exiting with status 1" means something went wrong.

18/04/27 15:45:59 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.

18/04/27 15:45:59 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression

18/04/27 15:45:59 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.

18/04/27 15:45:59 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

18/04/27 15:45:59 INFO util.ExitUtil: Exiting with status 0

18/04/27 15:45:59 INFO namenode.NameNode: SHUTDOWN_MSG:

5. Passwordless SSH login (for a standalone setup there is no need to create a dedicated hadoop user, as some tutorials recommend):

[root@bogon ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa  

Generating public/private dsa key pair.

Your identification has been saved in /root/.ssh/id_dsa.

Your public key has been saved in /root/.ssh/id_dsa.pub.

The key fingerprint is:

a1:52:80:19:6c:48:57:44:2b:6d:68:f7:ee:36:88:d3 root@bogon

The key's randomart image is:

+--[ DSA 1024]----+
|.+.==+           |
|. * o..          |
| . + =. .        |
|  . +... .       |
|    . ..S        |
|     ..          |
|    o ..         |
|   o E.o         |
|    . ...        |
+-----------------+

[root@bogon ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[root@bogon ~]# chmod 0600 ~/.ssh/authorized_keys
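Before starting HDFS, verify that the key actually works: ssh localhost should now log in without a password prompt. (Newer OpenSSH builds reject DSA keys by default; if you are still asked for a password, generate an RSA key with ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa and append that .pub file to authorized_keys instead.)

[root@bogon ~]# ssh localhost
[root@bogon ~]# exit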

6. Starting and stopping HDFS

[root@bogon hadoop-2.6.5]# ./sbin/start-dfs.sh

Starting namenodes on [localhost]

localhost: starting namenode, logging to /root/Downloads/hadoop-2.6.5/logs/hadoop-root-namenode-bogon.out

localhost: starting datanode, logging to /root/Downloads/hadoop-2.6.5/logs/hadoop-root-datanode-bogon.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /root/Downloads/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-bogon.out

[root@bogon hadoop-2.6.5]# jps

9153 DataNode

9298 SecondaryNameNode

9413 Jps

9036 NameNode

[root@bogon hadoop-2.6.5]# ./sbin/stop-dfs.sh

After startup, run jps to check whether everything came up; on success it lists the NameNode, DataNode, and SecondaryNameNode processes. If SecondaryNameNode did not start, run sbin/stop-dfs.sh to shut everything down and try starting again. If NameNode or DataNode is missing, the configuration is wrong: recheck the steps above, or look through the startup logs to find the cause.
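As a minimal functional check, assuming the daemons are up (the /user/root directory name is arbitrary):

[root@bogon hadoop-2.6.5]# ./bin/hdfs dfs -mkdir -p /user/root
[root@bogon hadoop-2.6.5]# ./bin/hdfs dfs -ls /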

7. Web monitoring: open the following address in a browser

http://localhost:50070
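On a headless server without a browser, the same page can be fetched from the terminal instead (assuming curl is installed):

[root@bogon ~]# curl -s http://localhost:50070 | head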

8. Configure YARN (optional)

Edit etc/hadoop/mapred-site.xml. Note that Hadoop ships this file as mapred-site.xml.template by default: to use YARN, rename mapred-site.xml.template to mapred-site.xml, and if you later run without YARN, rename it back (see the commands just below).
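The rename in both directions, run from the Hadoop root directory (the first command enables YARN, the second reverts):

[root@bogon hadoop-2.6.5]# mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@bogon hadoop-2.6.5]# mv etc/hadoop/mapred-site.xml etc/hadoop/mapred-site.xml.template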

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

YARN was split out of MapReduce and is responsible for resource management and task scheduling; in Hadoop 2.x, MapReduce jobs run on top of YARN. It offers high availability and scalability, but its real value is better resource management and scheduling for a cluster; on a single machine that brings no benefit and can even make jobs run slightly slower. Whether to enable YARN on a single machine is therefore up to you.

The commands to start and stop it are:

[root@bogon hadoop-2.6.5]# ./sbin/start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /root/Downloads/hadoop-2.6.5/logs/yarn-root-resourcemanager-bogon.out

localhost: starting nodemanager, logging to /root/Downloads/hadoop-2.6.5/logs/yarn-root-nodemanager-bogon.out

[root@bogon hadoop-2.6.5]# jps

10180 Jps

10056 ResourceManager

10142 NodeManager

[root@bogon hadoop-2.6.5]# ./sbin/stop-yarn.sh

Once YARN is started, jobs are run exactly as before; only the resource management and task scheduling change. Watching the logs, without YARN jobs run under "mapred.LocalJobRunner", while with YARN they run under "mapred.YARNRunner". A nice benefit of enabling YARN is that job progress can be tracked in a web UI: http://localhost:8088/cluster.
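For a concrete test, the examples jar bundled with the distribution can be submitted as a job (the jar path below matches the standard 2.6.5 layout; adjust it if yours differs):

[root@bogon hadoop-2.6.5]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 2 5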

If you decide not to start YARN, be sure to rename mapred-site.xml back to mapred-site.xml.template, restoring it only when needed. Otherwise, if the file exists but YARN is not running, jobs will fail with "Retrying connect to server: 0.0.0.0/0.0.0.0:8032"; this is also why the file ships under the .template name in the first place.

III. Configuring HBase

First read the companion article: CentOS 7 with HBase 1.2.6 standalone, without Hadoop.

Then edit the configuration file /root/Downloads/hbase-1.2.6/conf/hbase-site.xml:

<configuration>
   <property>
      <name>hbase.tmp.dir</name>
      <value>/usr/local/hbase/hbaseData</value>
   </property>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:9000/hbase</value>
   </property>
   <property> 
      <name>hbase.zookeeper.quorum</name>
      <value>localhost</value> 
   </property>
</configuration>      
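If you followed the companion article, conf/hbase-env.sh should already point at the JDK; if not, set it there as well (this is this article's JDK path; substitute your own):

export JAVA_HOME=/root/Downloads/jdk1.8.0_172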

Start HBase and verify:

[root@bogon ~]# cd /root/Downloads/hbase-1.2.6/bin

[root@bogon bin]# ./start-hbase.sh

starting master, logging to /usr/local/hbase/logs/hbase-root-master-bogon.out

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

[root@bogon bin]# jps

11379 Jps

11339 HMaster

Now let's check whether HBase has actually connected to Hadoop:

[root@bogon ~]# cd /root/Downloads/hadoop-2.6.5/bin

[root@bogon bin]# hdfs dfs -ls /

Found 1 items

drwxr-xr-x   - root supergroup          0 2018-04-27 17:05 /hbase

The /hbase directory was created in HDFS automatically, so the connection works!
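As a final end-to-end check, a minimal HBase shell session can write and read a row (the table name and column family below are arbitrary):

[root@bogon bin]# ./hbase shell
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> disable 'test'
hbase(main):005:0> drop 'test'
hbase(main):006:0> exit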