Deploying Kafka with Docker, without ZooKeeper


It turns out Redpanda requires refreshing its license file every 30 days. That is simply intolerable.

Time to swap it out. Deploy Kafka directly with Docker, using KRaft instead of ZooKeeper as the backend. The world has suffered ZooKeeper long enough!

The method is simple: use docker-compose. Edit the docker-compose.yml file:

```yaml
name: 'stream'
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      KAFKA_KRAFT_MODE: "true"  # This enables KRaft mode in Kafka.
      KAFKA_PROCESS_ROLES: controller,broker  # Kafka acts as both broker and controller.
      KAFKA_NODE_ID: 1  # A unique ID for this Kafka instance.
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@10.8.1.119:9093  # Defines the controller voters.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.8.1.119:9092
      KAFKA_LOG_DIRS: /var/lib/kafka/data  # Where Kafka stores its logs.
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"  # Kafka will automatically create topics if needed.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1  # Since we're running one broker, one replica is enough.
      KAFKA_LOG_RETENTION_HOURS: 168  # Keep logs for 7 days.
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0  # No delay for consumer rebalancing.
      CLUSTER_ID: Mk3OEYBSD34fcwNTJENDM2Qk  # A unique ID for the Kafka cluster.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/var/lib/kafka/data  # Store Kafka logs on your local machine.
```

Note that this file exposes two ports: 9092 for client connections and 9093 for controller (cluster) communication. `KAFKA_KRAFT_MODE: "true"` selects KRaft instead of ZooKeeper. CLUSTER_ID can be swapped for a different value.
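If you want a fresh CLUSTER_ID, Kafka's own `kafka-storage random-uuid` tool produces one; an ID of the same shape (16 random bytes, base64url-encoded without padding) can also be generated with standard tools. A sketch, not the official tool:

```shell
# Generate a 22-character base64url cluster ID from 16 random bytes,
# the same shape as the output of `kafka-storage random-uuid`.
head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n'; echo
```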

The configuration also contains the address 10.8.1.119 in two places (KAFKA_CONTROLLER_QUORUM_VOTERS and KAFKA_ADVERTISED_LISTENERS). This is the Docker host's IP, and you must change it to your own for the broker to be reachable from outside.
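To avoid editing both places by hand, the address can be patched in with sed. This assumes a Linux host where `hostname -I` lists the machine's addresses (not part of the original post):

```shell
# Look up this host's primary IP and substitute it for the
# example address 10.8.1.119 everywhere in docker-compose.yml.
HOST_IP=$(hostname -I | awk '{print $1}')
sed -i "s/10\.8\.1\.119/${HOST_IP}/g" docker-compose.yml
```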

As for the volumes: there is a data folder in the same directory as docker-compose.yml, and it must be owned by the user with uid 1000. On our system that is the first user created, debian.

So set the owner:

```shell
chown -R debian:debian data
```
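If your host has no user named `debian`, chowning by numeric id works just as well, and `stat` verifies the result; a small check that is not from the original post:

```shell
# Create the data directory and hand it to uid/gid 1000,
# which is what the container's appuser runs as.
mkdir -p data
sudo chown -R 1000:1000 data
stat -c '%u:%g' data   # expect 1000:1000
```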

If the owner is not uid 1000, Kafka emits the following error and exits:

```
kafka  | ===> User
kafka  | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka  | ===> Configuring ...
kafka  | ===> Running preflight checks ...
kafka  | ===> Check if /var/lib/kafka/data is writable ...
kafka  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
kafka exited with code 1
```

Then start it:

```shell
docker-compose up -d
```
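The broker takes a few seconds to come up, so before running clients it can help to wait for port 9092 to accept connections. A minimal bash sketch using `/dev/tcp` (host, port, and retry count are up to you; none of this is in the original post):

```shell
# Poll a TCP port until it accepts connections, or give up after N tries.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for i in $(seq "$tries"); do
    # bash opens /dev/tcp/<host>/<port> as a real TCP connection
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

wait_for_port 127.0.0.1 9092 5 && echo "kafka is accepting connections" || echo "not reachable yet"
```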

Then run a quick test:

```shell
# Enter the container's bash
docker exec -it kafka bash

# Create a topic named test-topic
/usr/bin/kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Produce a few messages
/usr/bin/kafka-console-producer --bootstrap-server localhost:9092 --topic test-topic
> Hello Kafka!
> This is a test message.

# Consume the messages from another client
/usr/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic test-topic --from-beginning
Hello Kafka!
This is a test message.

# List all topics
/usr/bin/kafka-topics --list --bootstrap-server localhost:9092
```
