Enabling Kerberos Authentication on CDH 6.2
I. Configure the KDC Service
Since the machines sit on an internal network, everything is installed from rpm packages. Required rpms:
Server: krb5-server, krb5-workstation, krb5-libs, libkadm5
Client: krb5-workstation, krb5-libs, libkadm5
The install may complain that the words rpm package cannot be found; install it manually: rpm -ivh words-3.arch.rpm
KDC service configuration (server side)
vim /var/kerberos/krb5kdc/kdc.conf
# Only the settings below need changing: set your own realm to HADOOP.COM, with a ticket lifetime of 1 day and a renewable lifetime (no password re-entry) of 7 days
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
max_life = 1d
max_renewable_life = 7d
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
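The stock kdc.conf shipped with krb5-server also carries a [kdcdefaults] section above [realms]; if you start from an empty file, these are the usual defaults:
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88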
krb5 configuration (client side)
vim /etc/krb5.conf
[libdefaults]
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
default_realm = HADOOP.COM
udp_preference_limit = 1
# default_ccache_name = ...
[realms]
HADOOP.COM = {
# kdc and admin_server point at the KDC host, hadoop001 in this setup
kdc = hadoop001
admin_server = hadoop001
}
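Assembled, a minimal working /etc/krb5.conf looks roughly like this; the [logging] and [domain_realm] sections and the extra [libdefaults] keys are the stock CentOS defaults, assumed here rather than taken from the excerpt above:
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM
 udp_preference_limit = 1

[realms]
 HADOOP.COM = {
  kdc = hadoop001
  admin_server = hadoop001
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM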
# Push the krb5 config file to the other nodes
scp /etc/krb5.conf root@hadoop00x:/etc/
Initialize the database and start the services
# Create the Kerberos database and set its master password (123456 here); -s stashes the master key so krb5kdc can start without prompting
kdb5_util create -s
# Create the administrator principal
kadmin.local -q "addprinc admin/admin@HADOOP.COM"
# Give the Kerberos admin full privileges: any */admin principal gets all permissions
vim /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *
# Enable the services at boot and start them
systemctl enable krb5kdc
systemctl enable kadmin
systemctl start krb5kdc
systemctl start kadmin
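At this point the KDC should be usable; a quick sanity check is to authenticate as the admin principal created above:
kinit admin/admin@HADOOP.COM   # enter the password chosen at addprinc time
klist                          # should show a krbtgt/HADOOP.COM@HADOOP.COM ticket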
II. Enable Kerberos Authentication in CDH
1) Create the CM management principal (remember the password, it is needed later)
kadmin.local -q "addprinc cloudera-scm/admin"
2) Open the CM web UI and launch the Enable Kerberos wizard
3) Confirm that all the prerequisites listed by the wizard are satisfied
4) KDC type: MIT KDC; Kerberos encryption types: aes128-cts, des3-hmac-sha1, arcfour-hmac; enter the hostname of the server running the KDC
5) Do not check "Manage krb5.conf through Cloudera Manager"
6) Enter the CM Kerberos admin account (cloudera-scm/admin from step 1), click Continue, and follow the wizard through to the end
III. Common Kerberos Commands
Create a user and keytab file
# Create the Linux user
useradd -m baron
echo "123456" | passwd baron --stdin
# Create the Kerberos principal
kadmin.local -q "addprinc -pw 123456 baron"
# Generate the keytab file (-norandkey keeps the principal's existing password valid)
kadmin.local
ktadd -k /home/baron/baron.keytab -norandkey baron
# Inspect the keytab file
klist -kt /home/baron/baron.keytab
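The output should look roughly like the following; KVNO and timestamps will differ, and one line per supported encryption type is normal:
Keytab name: FILE:/home/baron/baron.keytab
KVNO Timestamp           Principal
---- ------------------- ---------------------------------
   2 07/01/2019 10:00:00 baron@HADOOP.COM
   2 07/01/2019 10:00:00 baron@HADOOP.COM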
During the Enable Kerberos wizard, CM creates the service principals automatically. From then on, every access to cluster resources must pass Kerberos authentication with a valid account, otherwise it is refused.
# List all current principals
kadmin.local -q "list_principals"
# Create an hdfs superuser principal. In general there is one principal per service, and the matching Linux user must exist on every node
kadmin.local -q "addprinc hdfs"
kadmin.local
ktadd -k /home/hdfs/hdfs.keytab -norandkey hdfs
# Copy the keytab to every node
scp -r /home/hdfs/hdfs.keytab root@hadoop00x:/home/hdfs/
# Run kinit on every node
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM, or run kinit hdfs and type the password
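To avoid repeating the scp and kinit by hand on every machine, a small loop helps; the node list hadoop002 hadoop003 is an assumption, substitute your own hostnames:
#!/bin/bash
# Distribute the hdfs keytab and obtain a ticket on each node
for host in hadoop002 hadoop003; do
    scp /home/hdfs/hdfs.keytab root@${host}:/home/hdfs/
    ssh root@${host} "kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM && echo ${host} OK"
done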
IV. Configuring Kerberos in DataX and kinit in Shell Scripts
Usage in a shell script
#!/bin/bash
# Log in with kinit first
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
# klist -s is silent and exits non-zero when there is no valid ticket
if ! klist -s
then
    echo "kerberos not initialized ----"
    exit 1
else
    # run the actual job here
    echo "success"
fi
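Tickets obtained this way expire after max_life (one day in the kdc.conf above), so hosts that run jobs around the clock usually refresh the ticket from cron; a sketch, assuming the same keytab path:
# crontab -e: re-obtain the hdfs ticket every 12 hours
0 */12 * * * kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM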
DataX configuration
First an hdfsreader job; the haveKerberos, kerberosKeytabFilePath and kerberosPrincipal keys at the end of the reader parameters are what enable authentication:
{
"job": {
"setting": {
"speed": {
"channel": 1
}
},
"content": [
{
"reader": {
"name": "hdfsreader",
"parameter": {
"path": "/workspace/*",
"defaultFS": "hdfs://hadoop001:8020",
"column": [
{
"index": 0,
"type": "long"
},
{
"index": 1,
"type": "string"
},
{
"index": 2,
"type": "double"
}
],
"fileType": "text",
"encoding": "UTF-8",
"fieldDelimiter": ",",
"haveKerberos": true,
"kerberosKeytabFilePath": "/home/hdfs/hdfs.keytab",
"kerberosPrincipal": "hdfs@HADOOP.COM"
}
},
"writer": {
"name": "streamwriter",
"parameter": {
"print": true
}
}
}
]
}
}
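Assuming DataX is unpacked under /opt/datax (the path is an assumption) and the job above is saved as hdfs2stream.json, it runs with the standard launcher:
python /opt/datax/bin/datax.py hdfs2stream.json
The next job goes the other way: mysqlreader into a Kerberized HDFS via hdfswriter, with the same three kerberos keys in the writer parameters: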
{
"job": {
"setting": {
"speed": {
"channel": 1
}
},
"content": [
{
"reader": {
"name": "mysqlreader",
"parameter": {
"username": "root",
"password": "root",
"column": [
"uid",
"event_type",
"time"
],
"splitPk": "uid",
"connection": [
{
"table": [
"action"
],
"jdbcUrl": [
"jdbc:mysql://node:3306/aucc"
]
}
]
}
},
"writer": {
"name": "hdfswriter",
"parameter": {
"defaultFS": "hdfs://hadoop001:8020",
"fileType": "text",
"path": "/workspace",
"fileName": "u",
"column": [
{
"name": "uid",
"type": "string"
},
{
"name": "event_type",
"type": "string"
},
{
"name": "time",
"type": "string"
}
],
"writeMode": "append",
"fieldDelimiter": "\t",
"compress":"bzip2",
"haveKerberos": true,
"kerberosKeytabFilePath": "/home/hdfs/hdfs.keytab",
"kerberosPrincipal": "hdfs@HADOOP.COM"
}
}
}
]
}
}
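After the MySQL-to-HDFS job finishes, the result can be checked from any session that has run kinit; hdfswriter appends a random suffix to the configured fileName, so expect part files whose names start with u:
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
hdfs dfs -ls /workspace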
V. Disabling Kerberos
1. Stop all cluster services
2. ZooKeeper:
1) Uncheck enableSecurity (set it to false)
2) Uncheck Enable Kerberos Authentication (set it to false)
3. Modify the HDFS configuration
1) Set hadoop.security.authentication to simple
2) Uncheck hadoop.security.authorization (set it to false)
3) Change the data directory permissions dfs.datanode.data.dir.perm to 755
4) Restore the DataNode ports: dfs.datanode.address from 1004 (Kerberos) back to 9866 (default), and dfs.datanode.http.address from 1006 (Kerberos) back to 9864 (default)
4. Modify the HBase configuration
1) Set hbase.security.authentication to simple
2) Uncheck hbase.security.authorization (set it to false)
3) Set hbase.thrift.security.qop to none
5. HBase may then fail to start because its znodes still carry Kerberos ACLs; make ZooKeeper skip the ACL checks:
In the ZooKeeper configuration, search for "Java Configuration Options for Zookeeper Server" and add -Dzookeeper.skipACL=yes
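With skipACL active, the stale Kerberos ACLs can be opened up from the ZooKeeper CLI so that the flag can later be removed again; /hbase is the default HBase znode, adjust if yours differs:
zookeeper-client -server hadoop001:2181
# inside the CLI:
setAcl /hbase world:anyone:cdrwa
getAcl /hbase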