Detailed Hive Installation Steps
I. Download Hive
II. Install MySQL
Run the following commands to install MySQL 8.0:
# 1. Download the MySQL yum repository rpm (8.0 version)
wget sql/arch.rpm
(5.7 version)
wget sql/arch.rpm
# 2. Add the MySQL yum repository
sudo yum localinstall arch.rpm
(5.7 version)
sudo yum localinstall arch.rpm
# 3. MySQL's GPG key was updated; on a fresh install, import the new key first:
rpm --import sql/RPM-GPG-KEY-mysql-2022
# 4. Install the MySQL server
yum -y install mysql-community-server
Start the MySQL service:
# 1. Start the MySQL service
sudo service mysqld start
# 2. Check whether MySQL is running
sudo service mysqld status
# Restart the MySQL service
sudo service mysqld restart
# 3. Look up the initial password
sudo grep 'temporary password' /var/log/mysqld.log
# 4. Enter the MySQL client, typing the initial password found above
mysql -uroot -p
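The temporary password printed by the grep above is the last field of the matching log line, so it can also be pulled out directly. A sketch, using a made-up sample line in mysqld's log format (on a real system, the grep would read /var/log/mysqld.log instead):

```shell
# Hypothetical sample of the line mysqld writes to /var/log/mysqld.log
line='2023-01-01T00:00:00.000000Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: Abc123!xyz'
# The password is the last whitespace-separated field of the matching line
temp_pw=$(printf '%s\n' "$line" | grep 'temporary password' | awk '{print $NF}')
echo "$temp_pw"
# → Abc123!xyz
```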
Setting the MySQL password
# 1. Change the password first. Recent MySQL versions require the default password to be
# changed before anything else can be done:
alter user 'root'@'localhost' identified by '<new password>';
Note: since the password-complexity policy has not been relaxed yet, the password here must be sufficiently complex; we will switch to a simpler one below.
# 2. Check the password-validation policy
mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| validate_password.check_user_name    | ON    |
| validate_password.dictionary_file    |       |
| validate_password.length             | 6     |
| validate_password.mixed_case_count   | 1     |
| validate_password.number_count       | 1     |
| validate_password.policy             | LOW   |
| validate_password.special_char_count | 1     |
+--------------------------------------+-------+
7 rows in set (0.00 sec)
# 3. Lower the password-validation strength (reset after a restart)
set global validate_password.policy=LOW;
# 4. Lower the minimum allowed password length (cannot be less than 4)
set global validate_password.length=6;
# 5. Now set a simple password
alter user 'root'@'localhost' identified by '000000';
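The two SET GLOBAL statements above only last until mysqld restarts. To make the relaxed policy survive restarts, the same settings could instead be placed in the [mysqld] section of /etc/my.cnf (a sketch; the variable names are the MySQL 8.0 ones shown in the table above):

```ini
[mysqld]
# persist the relaxed password policy across restarts
validate_password.policy=LOW
validate_password.length=6
```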
Enabling remote MySQL login
Before 8.0:
# 1. Grant privileges
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
# % means any remote host
# root means log in remotely as the root user
# *.* means all databases and tables
# Reload the privilege tables
flush privileges;
In 8.0, new security rules mean a user can no longer grant privileges to itself this way, so create a new user and grant through it:
# 1. List the MySQL users
use mysql;
mysql> select user, host from user;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| mysql.infoschema | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
| root             | localhost |
+------------------+-----------+
4 rows in set (0.00 sec)
# 2. Create a new user
create user 'hadoop100'@'%' identified by '000000';
# hadoop100 is the username
# % means any IP address
# 000000 is the user's password; remember that after a restart the password-validation
# policy must be relaxed again before a password this simple is accepted
# 3. Grant privileges to the user
grant all on *.* to 'hadoop100'@'%';
# 4. Reload the privilege tables
flush privileges;
Start MySQL automatically on boot (run in Linux):
systemctl enable mysqld
III. Extract and Configure
① Go to the directory holding the Hive archive and extract it to the target directory:
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/modules/
② Rename the configuration file templates (in Hive's conf directory):
Rename hive-default.xml.template to hive-site.xml
Rename hive-env.sh.template to hive-env.sh
Rename hive-log4j.properties.template to hive-log4j.properties
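The renames above can be done with cp so the original templates are kept. A dry-run sketch in a scratch directory (on a real install the directory would be Hive's own conf directory, e.g. /opt/modules/hive-3.1.2/conf):

```shell
# Scratch conf directory standing in for Hive's real conf directory
conf=$(mktemp -d)
touch "$conf/hive-default.xml.template" \
      "$conf/hive-env.sh.template" \
      "$conf/hive-log4j.properties.template"
# Copy each template to its active name, keeping the template around
cp "$conf/hive-default.xml.template"      "$conf/hive-site.xml"
cp "$conf/hive-env.sh.template"           "$conf/hive-env.sh"
cp "$conf/hive-log4j.properties.template" "$conf/hive-log4j.properties"
ls "$conf"
```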
③ Edit the configuration
1. Edit hive-env.sh
Add JAVA_HOME and HADOOP_HOME, and export HIVE_CONF_DIR (the path of Hive's conf directory).
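The lines added to hive-env.sh would look like the following. The JDK and Hadoop paths are assumptions for this sketch; substitute the actual install locations (the Hive path matches the /opt/modules/hive-3.1.2 directory used elsewhere in this guide):

```shell
# hive-env.sh additions (JAVA_HOME and HADOOP_HOME paths are placeholders)
export JAVA_HOME=/opt/modules/jdk1.8.0_211
export HADOOP_HOME=/opt/modules/hadoop-3.1.3
export HIVE_CONF_DIR=/opt/modules/hive-3.1.2/conf
```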
2. Edit hive-site.xml
<!--Create this database under MySQL if it does not already exist-->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hadoop100:3306/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<!--Use the MySQL JDBC driver; the default is the driver for Hive's embedded Derby database-->
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<!--MySQL username-->
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<!--MySQL password-->
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>000000</value>
<description>password to use against metastore database</description>
</property>
<!--Print column names in query output-->
<property>
<name>hive.cli.print.header</name>
<value>true</value>
<description>Whether to print the names of the columns in query output.</description>
</property>
<!--Show the current database name in the Hive prompt-->
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
<description>Whether to include the current database in the Hive prompt.</description>
</property>
<!--Let simple single-table queries run as a fetch task instead of a MapReduce job-->
<property>
<name>hive.fetch.task.conversion</name>
<value>more</value>
<description>
Some select queries can be converted to single FETCH task minimizing latency.
Currently the query should be single sourced not having any subquery and should not have    any aggregations or distincts (which incurs RS), lateral views and joins.
1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only
2. more    : SELECT, FILTER, LIMIT only (TABLESAMPLE, virtual columns)
</description>
</property>
3. Edit hive-log4j.properties
Set the log path where Hive's log files will be stored:
hive.log.dir=/opt/modules/hive-3.1.2/logs
4. Copy the MySQL JDBC driver jar into Hive's lib directory
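Step 4 is a single copy of the MySQL connector jar into Hive's lib directory. A dry-run sketch in a scratch directory (the jar name and version are illustrative; on a real install the source is the downloaded connector jar and the target is /opt/modules/hive-3.1.2/lib):

```shell
# Scratch layout standing in for the download dir and Hive's lib dir
work=$(mktemp -d)
mkdir -p "$work/lib"
touch "$work/mysql-connector-java-8.0.28.jar"   # illustrative jar version
# The actual step: copy the connector jar into Hive's lib directory
cp "$work"/mysql-connector-java-*.jar "$work/lib/"
ls "$work/lib"
```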
