Deploying HBase on CentOS 7
1. Download the installation package
Go to the mirror below, find the hbase-1.1.4 release, and download it
(apache.fayea/hbase/1.1.4/)
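If you prefer to download directly on the server instead, something along these lines should work (the archive.apache.org URL is an assumption; any Apache mirror that still hosts 1.1.4 will do):
# wget https://archive.apache.org/dist/hbase/1.1.4/hbase-1.1.4-bin.tar.gz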
2. Upload the package to each node (master and the slaves) with WinSCP
3. Put the HBase package under /home/hbase-1.1.4 and extract it
tar zxvf hbase-1.1.4-bin.tar.gz
4. Add HBase to the environment variables (/etc/profile)
First, locate the HBase home directory:
#find / -name hbase-1.1.4
export HBASE_HOME=/home/hbase-1.1.4/hbase-1.1.4
export PATH=$HBASE_HOME/bin:$PATH
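Then reload the profile and confirm that the hbase command is on the PATH (hbase version only prints version information, so it is a safe check):
# source /etc/profile
# hbase version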
5. Fix the file permissions on the HBase directory so the user that will start HBase can access it (see the example below).
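Assuming HBase runs as root, as the # prompts in this guide suggest, something like the following is enough; adjust the owner if you use a dedicated user instead:
# chown -R root:root /home/hbase-1.1.4
# chmod -R 755 /home/hbase-1.1.4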
6. Edit the configuration file hbase-env.sh (/home/hbase-1.1.4/hbase-1.1.4/conf)
export JAVA_HOME=/home/jdk1.7.0_67    (taken from /etc/profile)
export HBASE_MANAGES_ZK=true    # let HBase start and stop ZooKeeper itself
7. Edit hbase-site.xml (in the same conf directory)
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master,slave1,slave2,slave3</value>
<description>Comma separated list of servers in the ZooKeeper quorum.
</description>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/tempData/zookeeper</value>  <!-- take this value from zoo.cfg -->
<description>Property from ZooKeeper's config zoo.cfg.
    The directory where the snapshot is stored.
</description>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>2000</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
<description>The directory shared by RegionServers.
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.master</name>
<value>master</value>
</property>
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
</configuration>
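Before distributing this file, it is worth checking that the XML is well-formed and that hbase.rootdir uses the same host and port as fs.defaultFS in Hadoop's core-site.xml (hdfs://master:9000 here). A quick sanity check (xmllint ships with libxml2 on CentOS):
# xmllint --noout /home/hbase-1.1.4/hbase-1.1.4/conf/hbase-site.xml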
8. Edit /home/hbase-1.1.4/hbase-1.1.4/conf/regionservers
master
slave1
slave2
slave3
9. # scp -r /home/hbase-1.1.4 root@10.61.6.198:/home
Copy hbase-1.1.4 to every node this way!
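If the slave hostnames resolve via /etc/hosts, a small loop saves repeating the scp command per node (the hostnames are the ones assumed throughout this guide):
# for h in slave1 slave2 slave3; do scp -r /home/hbase-1.1.4 root@$h:/home; done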
10. Start the cluster
Make sure HDFS is already running before starting HBase. The startup order is: HDFS -> ZooKeeper -> HBase.
#/home/hadoop/sbin/start-all.sh
# /home/hbase-1.1.4/hbase-1.1.4/bin/start-hbase.sh
If no errors appear, the startup succeeded!
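To double-check, list the Java processes on the master; with HBASE_MANAGES_ZK=true you should see HMaster, HRegionServer and HQuorumPeer. The web UI should also answer on the port configured above (hbase.master.info.port = 60010):
# jps
# curl -s http://master:60010 > /dev/null && echo "HBase master UI is up"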
Errors:
1. Running # ./hbase shell
shows the following:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
This means there is a JAR conflict between two SLF4J bindings:
file:/usr/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class
file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class
Removing (or moving away) one of the two JARs fixes it.
Solution:
Move the older binding out of the way (per the error output above, the 1.6.4 JAR lives under /usr/hbase/lib):
# cd /usr/hbase/lib && mv slf4j-log4j12-1.6.4.jar /usr
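Afterwards, confirm that only one slf4j-log4j12 binding is left in these directories and re-enter the shell (paths follow the error output above):
# find /usr/hbase/lib /usr/hadoop/share/hadoop/common/lib -name 'slf4j-log4j12-*.jar'
# ./hbase shell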
