Spark Report
金航1510122526
I. Environment Setup
1. Download Scala 2.11.4; the download page is at www.scala-lang.org.
2. Extract and install:
Extract: tar -xvf scala-2.11.4.tgz
Install: mv scala-2.11.4 ~/opt/
3. Edit the ~/.bash_profile file and add the SCALA_HOME environment variable:
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/home/spark/opt/scala-2.11.4
export HADOOP_HOME=/home/spark/opt/hadoop-2.6.0
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin
Make it take effect immediately: source ~/.bash_profile
4. Verify Scala: scala -version
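If the installation succeeded, the output should look roughly like the line below (the exact copyright wording may differ by build):
Scala code runner version 2.11.4 -- Copyright 2002-2013, LAMP/EPFL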
5. Copy it to the slave machine: scp ~/.bash_profile spark@10.126.45.56:~/.bash_profile
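The Scala installation itself must also be present on the slave. If it is not there yet, it can be distributed the same way (a sketch assuming the slave uses the same ~/opt layout and password-less ssh):
scp -r ~/opt/scala-2.11.4 spark@10.126.45.56:~/opt/
ssh spark@10.126.45.56 'source ~/.bash_profile && scala -version'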
6. Download Spark: wget http://d3kbcqa49mib13.cloudfront.net/spark-1.2.0-bin-hadoop2.4.tgz
7. Configure Spark on the master host:
Extract the downloaded spark-1.2.0-bin-hadoop2.4.tgz into ~/opt/, giving ~/opt/spark-1.2.0-bin-hadoop2.4, and configure the SPARK_HOME environment variable:
# set java env
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/home/spark/opt/scala-2.11.4
export HADOOP_HOME=/home/spark/opt/hadoop-2.6.0
export SPARK_HOME=/home/spark/opt/spark-1.2.0-bin-hadoop2.4
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${HADOOP_HOME}/bin
After editing, run source ~/.bash_profile to make the configuration take effect.
Enter Spark's conf directory:
[spark@S1PA11 opt]$ cd spark-1.2.0-bin-hadoop2.4/
[spark@S1PA11 spark-1.2.0-bin-hadoop2.4]$ ls
bin  conf  data  ec2  examples  lib  LICENSE  logs  NOTICE  python  README.md
RELEASE  sbin  work
[spark@S1PA11 spark-1.2.0-bin-hadoop2.4]$ cd conf/
[spark@S1PA11 conf]$ ls
fairscheduler.xml.template  log4j.properties.template  metrics.properties.template
slaves  spark-defaults.conf.template  spark-env.sh.template
First: edit the slaves file and add the two worker nodes, S1PA11 and S1PA222:
[spark@S1PA11 conf]$ vi slaves
S1PA11
S1PA222
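The hostnames listed in slaves must resolve to the cluster machines; a typical /etc/hosts mapping would look like the following (these IPs are taken from the addresses used elsewhere in this report and are illustrative):
10.58.44.47    S1PA11
10.126.45.56   S1PA222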
Second: configure spark-env.sh
First copy the template: cp spark-env.sh.template spark-env.sh
Then open it with vi spark-env.sh and append the following at the bottom:
export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
export SCALA_HOME=/home/spark/opt/scala-2.11.4
export SPARK_MASTER_IP=10.58.44.47
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/home/spark/opt/hadoop-2.6.0/etc/hadoop
Here HADOOP_CONF_DIR is the Hadoop configuration directory, SPARK_MASTER_IP is the master host's IP address, and SPARK_WORKER_MEMORY is the maximum amount of memory each worker may use.
After the configuration is complete, copy the Spark directory to the slave machine:
scp -r ~/opt/spark-1.2.0-bin-hadoop2.4 spark@10.126.45.56:~/opt/
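To confirm the copy landed, the directory can be listed over ssh (a quick sanity check, assuming password-less ssh is already set up between the nodes):
ssh spark@10.126.45.56 ls ~/opt/spark-1.2.0-bin-hadoop2.4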
8. Start the Spark cluster and check its status
[spark@S1PA11 sbin]$ ./start-all.sh
Check the master node:
[spark@S1PA11 sbin]$ jps
31233 ResourceManager
27201 Jps
30498 NameNode
30733 SecondaryNameNode
5648 Worker
5399 Master
15888 JobHistoryServer
If HDFS is not running, start it first.
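HDFS and YARN (the jps output above shows a ResourceManager, so YARN is also in use here) can be started with Hadoop's standard scripts:
[spark@S1PA11 ~]$ $HADOOP_HOME/sbin/start-dfs.sh
[spark@S1PA11 ~]$ $HADOOP_HOME/sbin/start-yarn.sh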
Check the slave node:
[spark@S1PA222 scala]$ jps
20352 Bootstrap
30737 NodeManager
7219 Jps
30482 DataNode
29500 Bootstrap
757 Worker
9. Check the cluster status via the web UI:
Open the Spark cluster's web management page by visiting http://master:8080/ (8080 is the standalone master's default web UI port).
Two Worker nodes appear because both the master and the slave run a Worker. Next, go into Spark's bin directory and start the spark-shell console.
Then visit master:4040/ to see the Spark application web UI.
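As an end-to-end check, the shell can be attached to the standalone master (7077 is the default master port) and one of the bundled examples can be run; the exact invocations below are a sketch:
[spark@S1PA11 spark-1.2.0-bin-hadoop2.4]$ ./bin/spark-shell --master spark://10.58.44.47:7077
[spark@S1PA11 spark-1.2.0-bin-hadoop2.4]$ ./bin/run-example SparkPi 10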
