
Spark Installation (Local Mode), Based on Spark 2

Local Mode

Installing Spark (spark 2.0.0)

Upload spark-2.0.0-bin-hadoop2.6.tgz (or another version) to a server or virtual machine that already has Hadoop installed.
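A minimal sketch of that upload, assuming the tarball sits in the local working directory, the server's hostname is master, and /opt/software already exists (the same host and directory reappear in the commands below):

# copy the tarball to the server over SSH (hostname and paths are assumptions)
scp spark-2.1.1-bin-hadoop2.7.tgz root@master:/opt/software/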

# Extract and rename
[root@master src]# tar -zxvf /opt/software/spark-2.1.1-bin-hadoop2.7.tgz -C /usr/local/src/
[root@master src]# mv /usr/local/src/spark-2.1.1-bin-hadoop2.7 /usr/local/src/spark-2.1.1

1. Go into Spark's bin directory
------------------
spark-shell  # command that starts the interactive Spark shell
spark-submit # command that submits a packaged application
------------------
2. Go into the sbin directory, which holds the cluster control scripts (a quick spark-shell smoke test follows this list)
start-master.sh
start-slave.sh
start-all.sh
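Once spark-shell starts, Spark is already usable in local mode. A minimal smoke test run from the bin directory (standard Spark 2.x RDD API; the result is noted in the comment):

[root@master bin]# ./spark-shell
scala> sc.parallelize(1 to 100).sum()  // sum 1..100 on the built-in SparkContext; prints res0: Double = 5050.0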

Configure the environment variables
[root@master src]# vim /etc/profile
# set spark environment
export SPARK_HOME=/usr/local/src/spark-2.1.1/
export PATH=$PATH:$SPARK_HOME/bin
export PATH=$PATH:$SPARK_HOME/sbin
[root@master src]# source /etc/profile
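As a quick sanity check that PATH and SPARK_HOME took effect, the version banner can be printed from any directory:

# prints the Spark version banner; failure here means /etc/profile was not sourced correctly
[root@master src]# spark-submit --version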


# Start the master and check the web UI
[root@master src]# start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/src/spark-2.1.1//logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
[root@master src]# jps
7873 Jps
7821 Master
# Visit port 8080 in a browser
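If no browser is at hand, the UI can also be probed from the shell (a sketch; it assumes the hostname master resolves and the UI kept its default port 8080):

# fetch the master web UI; any HTML response confirms the master is serving on 8080
[root@master src]# curl -s http://master:8080/ | head -n 5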
           
(Screenshot: the Spark Master web UI on port 8080)
Start the slave (worker)
[root@master src]# start-slave.sh
Usage: ./sbin/start-slave.sh [options] <master>
21/02/21 10:05:45 INFO Worker: Started daemon with process name: <pid>@master
21/02/21 10:05:45 INFO SignalUtils: Registered signal handler for TERM
21/02/21 10:05:45 INFO SignalUtils: Registered signal handler for HUP
21/02/21 10:05:45 INFO SignalUtils: Registered signal handler for INT

Master must be a URL of the form spark://hostname:port

Options:
  -c CORES, --cores CORES  Number of cores to use
  -m MEM, --memory MEM     Amount of memory to use (e.g. 1000M, 2G)
  -d DIR, --work-dir DIR   Directory to run apps in (default: SPARK_HOME/work)
  -i HOST, --ip IP         Hostname to listen on (deprecated, please use --host or -h)
  -h HOST, --host HOST     Hostname to listen on
  -p PORT, --port PORT     Port to listen on (default: random)
  --webui-port PORT        Port for web UI (default: 8081)
  --properties-file FILE   Path to a custom Spark properties file.
                           Default is conf/spark-defaults.conf.

# The worker will not start without arguments; the master URL must be specified
[root@master src]# start-slave.sh spark://master:7077
starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/src/spark-2.1.1//logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
[root@master src]# jps
7945 Worker
7993 Jps
7821 Master
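With the Master and a Worker both running, the standalone cluster can be exercised end to end. A sketch using the SparkPi example shipped with Spark (the examples-jar filename is an assumption; it matches the spark-2.1.1 binary build with Scala 2.11):

# submit the bundled SparkPi example to the standalone master;
# on success the driver prints a "Pi is roughly ..." line
[root@master src]# spark-submit --master spark://master:7077 \
    --class org.apache.spark.examples.SparkPi \
    $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.1.jar 100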
# Refresh the page and check the web UI again
           
(Screenshot: the master web UI now showing the registered worker)

The slave (worker) web UI runs on port 8081

(Screenshot: the Spark Worker web UI on port 8081)
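For completeness: every start script used above has a matching stop script in sbin, so the services can be torn down the same way they were brought up:

# stop the worker first, then the master (stop-all.sh does both in one step)
[root@master src]# stop-slave.sh
[root@master src]# stop-master.sh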
