Redpanda actually makes you refresh its license file every 30 days. If that can be tolerated, what can't be?

So out it goes. Deploy Kafka directly with Docker, and use KRaft instead of ZooKeeper as the backend. The world has suffered under ZooKeeper long enough!

The method is simple: use docker-compose and create a docker-compose.yml file:

name: 'stream'
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    hostname: kafka
    container_name: kafka
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      KAFKA_KRAFT_MODE: 'true'  # This enables KRaft mode in Kafka (quoted: compose requires string values here).
      KAFKA_PROCESS_ROLES: controller,broker  # Kafka acts as both broker and controller.
      KAFKA_NODE_ID: 1  # A unique ID for this Kafka instance.
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@10.8.1.119:9093  # Defines the controller voters.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.8.1.119:9092
      KAFKA_LOG_DIRS: /var/lib/kafka/data  # Where Kafka stores its logs.
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'  # Kafka will automatically create topics if needed.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1  # Since we’re running one broker, one replica is enough.
      KAFKA_LOG_RETENTION_HOURS: 168  # Keep logs for 7 days.
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0  # No delay for consumer rebalancing.
      CLUSTER_ID: Mk3OEYBSD34fcwNTJENDM2Qk  # A unique ID for the Kafka cluster.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/var/lib/kafka/data  # Store Kafka logs on your local machine.

Note that this file exposes two ports: 9092 for client connections and 9093 for controller (intra-cluster) communication. KAFKA_KRAFT_MODE enables KRaft instead of ZooKeeper. CLUSTER_ID can be changed to a different value.
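To generate a fresh CLUSTER_ID, the official tool is `kafka-storage random-uuid` (run inside a Kafka container). As a stand-in sketch, any 22-character base64url string has the right shape, since Kafka cluster IDs are base64-encoded 128-bit values:

```shell
# Generate a 22-character base64url string with the same shape as the
# output of `kafka-storage random-uuid` (a base64-encoded 128-bit value).
head -c 16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '='
```

The ID gets baked into the data directory on first start; if you change CLUSTER_ID afterwards, the broker will refuse to start until ./data is wiped.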

The configuration also contains the address 10.8.1.119 in two places (KAFKA_CONTROLLER_QUORUM_VOTERS and KAFKA_ADVERTISED_LISTENERS). That is the Docker host's IP, and you must change it to your own for the service to be reachable from outside.
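One way to avoid editing the IP in two places (a sketch; HOST_IP is a made-up variable name, not part of the original file) is docker-compose's variable substitution, which reads a .env file next to docker-compose.yml:

```yaml
# .env (same directory as docker-compose.yml):
#   HOST_IP=10.8.1.119

# docker-compose.yml, the two lines that reference it:
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@${HOST_IP}:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${HOST_IP}:9092
```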

As for the volumes: the directory containing docker-compose.yml needs a data folder, and its owner must be the user with uid 1000. On our system that is the first user that was added, debian.

So set the owner:

chown -R debian:debian data
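If there is no debian user on your machine, the numeric uid does the same job; a quick sketch for creating the directory and confirming ownership (assumes GNU coreutils stat):

```shell
mkdir -p data                # create it next to docker-compose.yml if missing
chown -R 1000:1000 data      # numeric equivalent of the chown above; may need sudo
stat -c '%u:%g' data         # should print 1000:1000
```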

If the owner's uid is not 1000, the container prints the following error and exits:

kafka  | ===> User
kafka  | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka  | ===> Configuring ...
kafka  | ===> Running preflight checks ... 
kafka  | ===> Check if /var/lib/kafka/data is writable ...
kafka  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
kafka exited with code 1

Then just start it:

docker-compose up -d

Then run a quick test:

# Open a bash shell inside the container
docker exec -it kafka bash

# Create a test-topic
/usr/bin/kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Produce a few messages (--broker-list is deprecated; use --bootstrap-server)
/usr/bin/kafka-console-producer --bootstrap-server localhost:9092 --topic test-topic
> Hello Kafka!
> This is a test message.

# Consume the messages from another client (--from-beginning replays what was already produced)
/usr/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic test-topic --from-beginning
Hello Kafka!
This is a test message.

# List all topics
/usr/bin/kafka-topics --list --bootstrap-server localhost:9092