[Kafka] Common Kafka Commands and Kafka Stress Testing
Preface
The Kafka commands in this article apply to Kafka versions 0.10 and above.
Demo environment: Kafka 0.11.0.2, Scala 2.11
List all topics
kafka-topics.sh --zookeeper hadoop111:2181 --list
Option descriptions:
--zookeeper: the ZooKeeper connection string
--list: print the list of topics
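For example, to check whether a particular topic already exists, the list output can be filtered with grep (a small sketch, assuming the same ZooKeeper address as above):
kafka-topics.sh --zookeeper hadoop111:2181 --list | grep test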
Create a topic
kafka-topics.sh --zookeeper hadoop111:2181 --create --replication-factor 3 --partitions 1 --topic test
Option descriptions:
--create: create a topic
--topic: the topic name
--replication-factor: the number of replicas
--partitions: the number of partitions
Options that start with a double dash (--) have no required order in Kafka commands; you can write them in whatever order you prefer.
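For instance, the create command above can just as well be written with the options reordered (equivalent to the command shown earlier):
kafka-topics.sh --create --topic test --partitions 1 --replication-factor 3 --zookeeper hadoop111:2181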
Delete a topic
kafka-topics.sh --zookeeper hadoop111:2181 --delete --topic test
Notes:
delete.topic.enable=true must be set in config/server.properties for the deletion to take effect immediately. The default is false, in which case the topic is only marked for deletion and is not actually removed until the Kafka service is restarted.
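A minimal sketch of the relevant setting in config/server.properties (add it if it is not already present):
# allow topics to be deleted immediately instead of only being marked for deletion
delete.topic.enable=true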
Produce messages
kafka-console-producer.sh --broker-list hadoop111:9092 --topic test
Parameter descriptions:
--broker-list: the address and port of any Kafka broker in the cluster
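The console producer reads from standard input, so a message can also be piped in non-interactively (a sketch using the same broker and topic as above):
echo "hello kafka" | kafka-console-producer.sh --broker-list hadoop111:9092 --topic test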
Consume messages
kafka-console-consumer.sh --bootstrap-server hadoop111:9092 --from-beginning --topic test
Parameter descriptions:
--bootstrap-server: the address and port of any Kafka broker in the cluster
--from-beginning: consume all messages in the topic from the beginning
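To stop after a fixed number of messages instead of running until interrupted, the console consumer also accepts --max-messages (a sketch, assuming 10 messages are enough for a quick check):
kafka-console-consumer.sh --bootstrap-server hadoop111:9092 --from-beginning --topic test --max-messages 10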
Describe a topic
kafka-topics.sh --zookeeper hadoop111:2181 --describe --topic test
Start Kafka
kafka-server-start.sh config/server.properties &
The & runs the process in the background.
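Alternatively, the start script has a -daemon option that detaches the broker from the terminal (a variant of the command above):
kafka-server-start.sh -daemon config/server.properties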
Stop Kafka
kafka-server-stop.sh
Kafka Producer Stress Test
kafka-producer-perf-test.sh --topic test --record-size 100 --num-records 100000 --throughput 1000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092
Parameter descriptions:
--record-size: the size of each record, in bytes
--num-records: the total number of records to send
--throughput: the target number of records to send per second
The producer stress test output looks like this:
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 3.1 ms avg latency, 165.0 max latency.
5033 records sent, 1006.6 records/sec (0.10 MB/sec), 1.0 ms avg latency, 35.0 max latency.
5001 records sent, 1000.0 records/sec (0.10 MB/sec), 1.5 ms avg latency, 66.0 max latency.
5002 records sent, 1000.4 records/sec (0.10 MB/sec), 0.9 ms avg latency, 14.0 max latency.
4998 records sent, 998.4 records/sec (0.10 MB/sec), 0.9 ms avg latency, 34.0 max latency.
5008 records sent, 1001.6 records/sec (0.10 MB/sec), 0.7 ms avg latency, 13.0 max latency.
5003 records sent, 1000.6 records/sec (0.10 MB/sec), 0.9 ms avg latency, 46.0 max latency.
5001 records sent, 1000.0 records/sec (0.10 MB/sec), 0.9 ms avg latency, 50.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.5 ms avg latency, 5.0 max latency.
5003 records sent, 1000.2 records/sec (0.10 MB/sec), 0.8 ms avg latency, 22.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.6 ms avg latency, 7.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.7 ms avg latency, 31.0 max latency.
5002 records sent, 1000.0 records/sec (0.10 MB/sec), 0.7 ms avg latency, 15.0 max latency.
5003 records sent, 1000.6 records/sec (0.10 MB/sec), 0.8 ms avg latency, 15.0 max latency.
5002 records sent, 1000.4 records/sec (0.10 MB/sec), 0.8 ms avg latency, 14.0 max latency.
5001 records sent, 1000.0 records/sec (0.10 MB/sec), 0.6 ms avg latency, 15.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.8 ms avg latency, 18.0 max latency.
5003 records sent, 1000.4 records/sec (0.10 MB/sec), 0.8 ms avg latency, 13.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 31.0 max latency.
100000 records sent, 999.970001 records/sec (0.10 MB/sec), 0.94 ms avg latency, 165.00 ms max latency, 1 ms 50th, 2 ms 95th, 7 ms 99th, 42 ms 99.9th.
Result interpretation:
In this example, 100,000 records were written in total, at an average of 999.970001 records per second, i.e. about 0.10 MB written to Kafka per second; the average write latency was 0.94 ms and the maximum latency was 165 ms.
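To measure the maximum rate the cluster can sustain rather than a fixed target rate, the same test can be run with throttling disabled by passing --throughput -1 (a variant of the command above):
kafka-producer-perf-test.sh --topic test --record-size 100 --num-records 100000 --throughput -1 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092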
Kafka Consumer Stress Test
kafka-consumer-perf-test.sh --zookeeper hadoop111:2181 --topic test --fetch-size 10000 --messages 10000000 --threads 1
Parameter descriptions:
--zookeeper: the ZooKeeper connection string
--topic: the topic name
--fetch-size: the amount of data to fetch per request, in bytes
--messages: the total number of messages to consume
The consumer stress test output looks like this:
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2020-05-15 17:41:57:339, 2020-05-15 17:41:59:243, 9.5367, 5.0088, 100000, 52521.0084
Test result explanation:
The columns are the test start time, the test end time, the total data consumed (9.5367 MB), the throughput in MB/s (5.0088), the total number of messages consumed (100,000), and the throughput in messages per second (52,521.0084).
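As a quick sanity check, the interval from 17:41:57.339 to 17:41:59.243 is about 1.904 s, which matches the two rate columns:
9.5367 MB / 1.904 s ≈ 5.0088 MB/s
100000 messages / 1.904 s ≈ 52521 messages/s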
List all consumer groups
[ssrs@hadoop112 bin]$ kafka-consumer-groups.sh --bootstrap-server hadoop111:9092 --list
Note: This will only show information about consumers that use the Java consumer API (non-ZooKeeper-based consumers).
flume
KMOffsetCache-hadoop111
Note:
The command above only shows information about consumers that use the Java consumer API (not ZooKeeper-based consumers).
[ssrs@hadoop112 bin]$ kafka-consumer-groups.sh --zookeeper hadoop111:2181 --list
Note: This will only show information about consumers that use ZooKeeper (not those using the Java consumer API).
mygroup
perf-consumer-52487
console-consumer-20318
console-consumer-44724
perf-consumer-49290
Note:
The command above only shows information about consumers that use ZooKeeper (not those that use the Java consumer API).
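Beyond listing group names, the partitions, committed offsets, and lag of a specific group can be inspected with --describe (a sketch, using the flume group from the first listing as an example):
kafka-consumer-groups.sh --bootstrap-server hadoop111:9092 --describe --group flume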