Setting up Kafka with SASL username/password authentication in Docker
I recently needed a Kafka instance with authentication. Most guides online use the confluentinc images, but I ran into various problems with them, so I built a password-protected Kafka with wurstmeister/zookeeper and wurstmeister/kafka instead. Below is a brief record of the setup.
1 Configure ZooKeeper
1.1 Create a directory for the configuration files
/home/tool/kafka-sasl/conf
1.2 Inside that directory, create a new ZooKeeper configuration file, zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper-3.4.13/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
zookeeper.sasl.client=true
1.3 Create the JAAS configuration file for password authentication, server_jaas.conf
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="12345678";
};
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="12345678"
user_super="12345678"
user_admin="12345678";
};
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="12345678"
user_admin="12345678";
};
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="12345678";
};
1.4 Create the configuration files log4j.properties and configuration.xsl (alternatively, start ZooKeeper once without these parameters and copy them out of the container from /opt/zookeeper-3.4.13/conf/)
log4j.properties
# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir=.
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=DEBUG
zookeeper.tracelog.dir=.
zookeeper.tracelog.file=zookeeper_trace.log
#
# ZooKeeper Logging Configuration
#
# Format is "<default threshold> (, <appender>)+
# DEFAULT: console appender only
# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE
# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE
#
# Log INFO level and above messages to the console
#
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
#
# Add ROLLINGFILE to rootLogger to get log file output
# Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# limit the number of backup files kept
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
#
# Add TRACEFILE to rootLogger to get log file output
# Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n
configuration.xsl
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html"/>
<xsl:template match="configuration">
<html>
<body>
<table border="1">
<tr>
<td>name</td>
<td>value</td>
<td>description</td>
</tr>
<xsl:for-each select="property">
<tr>
<td><a name="{name}"><xsl:value-of select="name"/></a></td>
<td><xsl:value-of select="value"/></td>
<td><xsl:value-of select="description"/></td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
1.5 Startup command
docker run --name zookeeper_sasl -p 2181:2181 -e SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/zookeeper-3.4.13/secrets/server_jaas.conf" -v /home/tool/kafka-sasl/conf:/opt/zookeeper-3.4.13/conf -v /home/tool/kafka-sasl/conf:/opt/zookeeper-3.4.13/secrets wurstmeister/zookeeper
2 Configure Kafka
Use the server_jaas.conf file above as the password authentication file.
2.1 Startup command
docker run --name kafka_sasl -p 59092:9092 --link zookeeper_sasl:zookeeper_sasl -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper_sasl:2181 -e KAFKA_ADVERTISED_LISTENERS=SASL_PLAINTEXT://10.18.104.202:59092 -e KAFKA_ADVERTISED_PORT=59092 -e KAFKA_LISTENERS=SASL_PLAINTEXT://0.0.0.0:9092 -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT -e KAFKA_SASL_ENABLED_MECHANISMS=PLAIN -e KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN -e KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf" -v /home/tool/kafka-sasl/conf:/opt/kafka/secrets wurstmeister/kafka
Change the port and broker id and start two more containers to form a cluster.
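Starting the extra brokers on different ports can be scripted. The sketch below only builds the docker run command strings for three brokers (broker ids 0-2 on host ports 59092-59094); the environment variables mirror the single-broker command above, and the container names (`kafka_sasl_<id>`), host IP, and paths are assumptions taken from this walkthrough.

```python
# Sketch: generate "docker run" commands for a three-broker SASL cluster.
HOST_IP = "10.18.104.202"  # the host used in this article; adjust for yours

def broker_cmd(broker_id):
    """Build (but do not execute) the docker run command for one broker."""
    port = 59092 + broker_id  # each broker gets its own host port
    return (
        f"docker run --name kafka_sasl_{broker_id} -p {port}:9092 "
        "--link zookeeper_sasl:zookeeper_sasl "
        f"-e KAFKA_BROKER_ID={broker_id} "
        "-e KAFKA_ZOOKEEPER_CONNECT=zookeeper_sasl:2181 "
        f"-e KAFKA_ADVERTISED_LISTENERS=SASL_PLAINTEXT://{HOST_IP}:{port} "
        f"-e KAFKA_ADVERTISED_PORT={port} "
        "-e KAFKA_LISTENERS=SASL_PLAINTEXT://0.0.0.0:9092 "
        "-e KAFKA_SASL_ENABLED_MECHANISMS=PLAIN "
        '-e KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf" '
        "-v /home/tool/kafka-sasl/conf:/opt/kafka/secrets wurstmeister/kafka"
    )

for i in range(3):
    print(broker_cmd(i))
```

Each broker must get a unique KAFKA_BROKER_ID and a unique advertised host port, while all of them point at the same ZooKeeper and mount the same JAAS file.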
3 Verification
3.1 Create a topic
kafka-topics.sh --zookeeper 10.18.104.202:2181 --create --partitions 2 --replication-factor 1 --topic testTopic
kafka-topics.sh --zookeeper 10.18.104.202:2181 --list
kafka-topics.sh --zookeeper 10.18.104.202:2181 --describe --topic testTopic
3.2 Produce a message
Before running the script, edit /opt/kafka/bin/kafka-console-producer.sh:
export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf"
Edit /opt/kafka/config/producer.properties:
metadata.broker.list=<kafka public IP>:<public port>
and add:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
Run the script:
./bin/kafka-console-producer.sh --broker-list 10.18.104.202:59092 --topic testTopic --producer.config config/producer.properties
3.3 Consume a message
Before running the script, edit /opt/kafka/bin/kafka-console-consumer.sh:
export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf"
Edit /opt/kafka/config/consumer.properties and add:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
Run the script:
./bin/kafka-console-consumer.sh --bootstrap-server 10.18.104.202:59092 --topic testTopic --from-beginning --consumer.config config/consumer.properties
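The same SASL settings apply to programmatic clients, not just the console tools. The dictionary below mirrors the producer/consumer properties above; the commented usage assumes the third-party kafka-python package (an assumption on my part, it is not installed by this setup), but any client library with SASL_PLAINTEXT support takes equivalent options.

```python
# Client-side SASL settings, mirroring the properties files above.
sasl_config = {
    "bootstrap_servers": "10.18.104.202:59092",
    "security_protocol": "SASL_PLAINTEXT",
    "sasl_mechanism": "PLAIN",
    "sasl_plain_username": "admin",
    "sasl_plain_password": "12345678",
}

# With the (assumed) kafka-python package this would be used as:
# from kafka import KafkaProducer, KafkaConsumer
# producer = KafkaProducer(**sasl_config)
# producer.send("testTopic", b"hello")
# consumer = KafkaConsumer("testTopic", auto_offset_reset="earliest", **sasl_config)
print(sasl_config["security_protocol"])
```

The username/password must match one of the `user_<name>` entries in the KafkaServer section of the JAAS file, otherwise the broker rejects the connection during the SASL handshake.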