Installing and Using Zeppelin on CDH: Filling in the Pitfalls
Zeppelin can connect directly to Spark, Flink, Kylin, and other engines and visualize the results. We ran into all kinds of problems while installing Zeppelin and spent several days working through them together with my colleague Chen before everything was resolved. Our main goal was to use Spark to quickly verify that the visualized statistics produced by Kylin matched the visualized results Spark computed directly.
At first we downloaded the binary release (zeppelin-0.7.3) and installed it directly, which is simple: just unpack it and run ./bin/zeppelin-daemon.sh start. But running the official tutorial threw the following error:
java.lang.NoSuchMethodError:
at org.apache.spark.repl.SparkILoop.<init>(SparkILoop.scala:936)
at org.apache.spark.repl.SparkILoop.<init>(SparkILoop.scala:70)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:790)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
.......
The error is a missing Scala method. Checking the Scala 2.11 source, the method does not exist there; but the Scala bundled with our CDH 5.12.1 is version 2.10. So we decided to compile and install Zeppelin ourselves.
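A NoSuchMethodError like this usually means the runtime loaded a different library version than the one the code was compiled against. A minimal diagnostic sketch (not part of the original setup) that reports which jar a class was actually loaded from, which helps pin down the offending version:

```java
import java.security.CodeSource;

public class WhichJar {
    // Report the jar (or directory) a class was loaded from.
    static String locate(String className) throws ClassNotFoundException {
        Class<?> cls = Class.forName(className);
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        // JDK core classes are loaded by the bootstrap loader and have no CodeSource
        return src == null ? "bootstrap classpath" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // On a Zeppelin interpreter host you might probe e.g. "scala.Option"
        // to see whether scala-library 2.10 or 2.11 won the classpath race.
        System.out.println(locate("java.lang.String"));
    }
}
```

Running this inside a Zeppelin paragraph (or a small main on the server) for the class named in the stack trace tells you immediately which jar to blame.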
The build then failed in various ways as well. Source tree: zeppelin-0.7.3.
Build command:
[C:\Users\yiming\Desktop\zeppelin-0.7.3]$ mvn clean package -Pbuild-distr -Pyarn -Dspark.version=1.6.0 -Dhadoop.version=2.6.0-cdh5.12.1 -Pscala-2.10 -Ppyspark -Psparkr -Pvendor-repo -DskipTests
Error:
[ERROR] mvn <goals> -rf :zeppelin-spark-dependencies_2.10
The error was that the matching py4j package could not be found. Going into the corresponding directory shows the build expects py4j-0.9-src.zip; find the matching version in the Maven repository and copy it over, and this error goes away.
Compiling again failed once more:
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:24 min
[INFO] Finished at: 2018-04-20T14:39:52+08:00
[INFO] Final Memory: 135M/1506M
[INFO] ------------------------------------------------------------------------
[ERROR] zeppelin-spark-dependencies_2.10:jar:0.7.3: Could not find artifact org.apache.hadoop:hadoop-client:jar:2.6.0-cdh5.7.0-SNAPSHOT in nexus (192.168.30.112:8081/nexus/content/groups/public) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
We never found the root cause of this pit. Even after reading the source we could not work out why, although we specified the cdh5.12.1 artifacts, Maven tried to download the cdh5.7.0 snapshot package instead.
Next we removed -Dspark.version=1.6.0 from the build command and compiled again:
[C:\Users\yiming\Desktop\zeppelin-0.7.3]$ mvn clean package -Pbuild-distr -Pyarn -Pspark-1.6 -Ppyspark -Dhadoop.version=2.6.0-cdh5.12.1 -Phadoop-2.6 -DskipTests
This failed with the following errors:
[WARNING] warning grunt-filerev@0.2.1: Deprecated
[WARNING] warning babel-preset-es2015@6.24.1: Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
[WARNING] warning grunt > coffee-script@1.3.3: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
[WARNING] warning grunt > minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
[WARNING] warning grunt > glob > minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
[WARNING] warning grunt > findup-sync > glob > minimatch@0.3.0: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
[WARNING] warning grunt > glob > graceful-fs@1.2.3: please upgrade to graceful-fs 4 for compatibility with current and future versions of Node.js
[WARNING] warning load-grunt-tasks > multimatch > minimatch@0.2.14: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
[WARNING] warning grunt-wiredep > wiredep > glob > minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
[WARNING] warning grunt-google-fonts > cssparser > nomnom@1.8.1: Package no longer supported. Contact support@npmjs.com for more info.
[WARNING] warning grunt-htmlhint > htmlhint > jshint > minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
[WARNING] warning grunt-replace > applause > cson-parser > coffee-script@1.12.7: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
[WARNING] warning grunt-wiredep > wiredep > bower-config > graceful-fs@2.0.3: please upgrade to graceful-fs 4 for compatibility with current and future versions of Node.js
[ERROR] error An unexpected error occurred: "https://registry.yarnpkg.com/autoprefixer: connect ETIMEDOUT 104.16.63.173:443".
[INFO] info If you think this is a bug, please open a bug report with the information provided in
"C:\\Users\\yiming\\Desktop\\zeppelin2\\zeppelin-0.7.3\\zeppelin-0.7.3\\zeppelin-web\\yarn-error.log".
[INFO] info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Zeppelin ........................................... SUCCESS [ 4.689 s]
[INFO] Zeppelin: Interpreter .............................. SUCCESS [ 20.595 s]
[INFO] Zeppelin: Zengine .................................. SUCCESS [ 16.617 s]
[INFO] Zeppelin: Display system apis ...................... SUCCESS [ 15.265 s]
[INFO] Zeppelin: Spark dependencies ....................... SUCCESS [03:20 min]
[INFO] Zeppelin: Spark .................................... SUCCESS [ 26.079 s]
[INFO] Zeppelin: Markdown interpreter ..................... SUCCESS [ 2.110 s]
[INFO] Zeppelin: Angular interpreter ...................... SUCCESS [ 1.747 s]
[INFO] Zeppelin: Shell interpreter ........................ SUCCESS [ 1.081 s]
[INFO] Zeppelin: Livy interpreter ......................... SUCCESS [ 15.909 s]
[INFO] Zeppelin: HBase interpreter ........................ SUCCESS [ 9.540 s]
[INFO] Zeppelin: Apache Pig Interpreter ................... SUCCESS [ 12.706 s]
[INFO] Zeppelin: PostgreSQL interpreter ................... SUCCESS [ 2.039 s]
[INFO] Zeppelin: JDBC interpreter ......................... SUCCESS [ 2.582 s]
[INFO] Zeppelin: File System Interpreters ................. SUCCESS [ 2.553 s]
[INFO] Zeppelin: Flink .................................... SUCCESS [ 12.948 s]
[INFO] Zeppelin: Apache Ignite interpreter ................ SUCCESS [ 4.426 s]
[INFO] Zeppelin: Kylin interpreter ........................ SUCCESS [ 1.072 s]
[INFO] Zeppelin: Python interpreter ....................... SUCCESS [ 8.722 s]
[INFO] Zeppelin: Lens interpreter ......................... SUCCESS [ 8.945 s]
[INFO] Zeppelin: Apache Cassandra interpreter ............. SUCCESS [ 48.842 s]
[INFO] Zeppelin: Elasticsearch interpreter ................ SUCCESS [ 5.995 s]
[INFO] Zeppelin: BigQuery interpreter ..................... SUCCESS [ 2.445 s]
[INFO] Zeppelin: Alluxio interpreter ...................... SUCCESS [ 5.870 s]
[INFO] Zeppelin: Scio ..................................... SUCCESS [ 42.271 s]
[INFO] Zeppelin: web Application .......................... FAILURE [01:14 min]
[INFO] Zeppelin: Server ................................... SKIPPED
[INFO] Zeppelin: Packaging distribution ................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 09:10 min
[INFO] Finished at: 2018-04-20T15:55:23+08:00
[INFO] Final Memory: 452M/1686M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:1.3:yarn (yarn install) on project zeppelin-web: Failed to run task: 'yarn install --no-lockfile' failed. (error code 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :zeppelin-web
The log is full of deprecated-version warnings, so we opened the pom file under the web Application module and found that the yarn version it pins is too old. Changing <yarn.version>v0.18.1</yarn.version> to <yarn.version>v0.28.1</yarn.version> finally let the build pass. It took many attempts to get past these errors. Here I just want to say: this is the downside of open-source software, integrating the pieces is exhausting. In big data, most of your time goes into sorting out version-compatibility problems between components. Tiring work.

The next step is to install the compiled package. Take the zeppelin-0.7.3 package from the zeppelin-0.7.3\zeppelin-distribution directory, upload it to the server, unpack it, and configure Zeppelin's zeppelin-env.sh by adding the following:
export JAVA_HOME=/opt/java
export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hive/conf
export HADOOP_HOME=/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hadoop
export SPARK_HOME=/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/spark
export MASTER=yarn-client
export ZEPPELIN_LOG_DIR=/var/log/zeppelin
export ZEPPELIN_PID_DIR=/var/run/zeppelin
export ZEPPELIN_WAR_TEMPDIR=/var/tmp/zeppelin
(I compiled on Windows and installed on a Linux server.)
Actually using it then surfaced yet another round of problems. Running this tutorial paragraph:
import org.apache.commons.io.IOUtils
import java.net.URL
import java.nio.charset.Charset
// Zeppelin creates and injects sc (SparkContext) and sqlContext (HiveContext or SqlContext)
// So you don't need create them manually
// load bank data
// val bankText = sc.parallelize(
//     IOUtils.toString(
//         new URL("https://s3.amazonaws.com/apache-zeppelin/tutorial/bank/bank.csv"),
//         Charset.forName("utf8")).split("\n"))
val bankText = sc.textFile("/tmp/bank.csv")
case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)
val bank = bankText.map(s => s.split(";")).filter(s => s(0) != "\"age\"").map(
s => Bank(s(0).toInt,
s(1).replaceAll("\"", ""),
s(2).replaceAll("\"", ""),
s(3).replaceAll("\"", ""),
s(5).replaceAll("\"", "").toInt
)
).toDF()
bank.show(10)
threw the following error:
java.lang.NoSuchMethodError: org.apache.hadoop.ipc.Client.getRpcTimeout(Lorg/apache/hadoop/conf/Configuration;)I
at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:355)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:690)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:673)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1688)
at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:66)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:555)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_1(SparkInterpreter.java:499)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:389)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
This call path goes through the hadoop-common-2.6.0.jar under Zeppelin's lib directory. Checking that source confirms it really has no getRpcTimeout(Configuration) method, only getRpcTimeout(). getRpcTimeout(Configuration) is a method added by CDH's packaging; with it, there is no need to specify the Hadoop NameNode address again, because it reads the YARN and NameNode addresses directly from the site configuration files. The fix is to back up the original jars and create symlinks in Zeppelin's lib directory pointing the hadoop-*-2.6.0.jar packages at the CDH jars:
[root@xxx-7 lib]#mv /opt/zeppelin/lib/hadoop-common-2.6.0.jar /opt/zeppelin/lib/hadoop-common-2.6.0.jar.bak
[root@xxx-7 lib]#mv /opt/zeppelin/lib/hadoop-auth-2.6.0.jar /opt/zeppelin/lib/hadoop-auth-2.6.0.jar.bak
[root@xxx-7 lib]#mv /opt/zeppelin/lib/hadoop-annotations-2.6.0.jar /opt/zeppelin/lib/hadoop-annotations-2.6.0.jar.bak
[root@xxx-7 lib]#ln -s /opt/cloudera/parcels/CDH/jars/hadoop-common-2.6.0-cdh5.12.1.jar /opt/zeppelin/lib/hadoop-common-2.6.0.jar
[root@xxx-7 lib]#ln -s /opt/cloudera/parcels/CDH/jars/hadoop-auth-2.6.0-cdh5.12.1.jar /opt/zeppelin/lib/hadoop-auth-2.6.0.jar
[root@xxx-7 lib]#ln -s /opt/cloudera/parcels/CDH/jars/hadoop-annotations-2.6.0-cdh5.12.1.jar /opt/zeppelin/lib/hadoop-annotations-2.6.0.jar
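Before (or after) relinking, a quick reflection check can confirm which signature the jar on the classpath actually exposes; this answers the same question the NoSuchMethodError above answers the hard way. A minimal sketch, with the probed classes chosen for illustration only:

```java
public class MethodCheck {
    // True if cls declares a method with this exact name and parameter list.
    static boolean hasMethod(Class<?> cls, String name, Class<?>... params) {
        try {
            cls.getDeclaredMethod(name, params);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // On the Zeppelin host you would probe the Hadoop Client class instead,
        // e.g. hasMethod(Class.forName("org.apache.hadoop.ipc.Client"),
        //                "getRpcTimeout", Configuration.class)
        System.out.println(hasMethod(String.class, "substring", int.class));
    }
}
```

If the probe returns false for the CDH-specific overload, the vanilla Apache jar is still first on the classpath and the symlinks have not taken effect.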
The next run failed with the following error:
com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class
org.apache.spark.rdd.RDDOperationScope)
at [Source: {"id":"0","name":"textFile"}; line: 1, column: 1]
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:843)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:533)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220)
This is another version conflict, this time in the Jackson jars; the same symlink approach resolves it:
[root@xxx-7 lib]#ln -s /opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/jars/jackson-annotations-2.3.1.jar ../lib/jackson-annotations-2.3.1.jar
[root@xxx-7 lib]#ln -s /opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/jars/jackson-core-2.3.1.jar ../lib/jackson-core-2.3.1.jar
[root@xxx-7 lib]#ln -s /opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/jars/jackson-databind-2.3.1.jar ../lib/jackson-databind-2.3.1.jar
And finally, Zeppelin runs successfully.
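For reference, the per-line parsing that the bank tutorial code performs can be sketched in plain Java, with no Spark cluster required. This is a translation of the tutorial's logic for illustration, not code from the original notebook: split each line on ';', skip the quoted header row, strip double quotes, and convert the numeric fields.

```java
import java.util.ArrayList;
import java.util.List;

public class BankParse {
    static class Bank {
        final int age;
        final String job, marital, education;
        final int balance;
        Bank(int age, String job, String marital, String education, int balance) {
            this.age = age; this.job = job; this.marital = marital;
            this.education = education; this.balance = balance;
        }
    }

    static List<Bank> parse(List<String> lines) {
        List<Bank> out = new ArrayList<>();
        for (String line : lines) {
            String[] f = line.split(";");
            if (f[0].equals("\"age\"")) continue; // skip the header row
            out.add(new Bank(
                Integer.parseInt(f[0]),
                f[1].replace("\"", ""),
                f[2].replace("\"", ""),
                f[3].replace("\"", ""),
                Integer.parseInt(f[5].replace("\"", "")))); // column 4 (default) is skipped
        }
        return out;
    }
}
```

Testing the parsing locally like this is a cheap way to separate data-format problems from the cluster and classpath problems described above.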