Deploying Kafka with Docker, without ZooKeeper
Redpanda actually requires refreshing its license file every 30 days, which is beyond the pale. Out it goes: deploy Kafka directly with Docker, using KRaft as the backend instead of ZooKeeper. The world has suffered under ZooKeeper long enough!

The method is simple: use docker-compose. Create a docker-compose.yml file:

```yaml
name: 'stream'
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    hostname: kafka
    container_name: kafka
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      KAFKA_KRAFT_MODE: true  # This enables KRaft mode in Kafka.
      KAFKA_PROCESS_ROLES: controller,broker  # Kafka acts as both broker and controller.
      KAFKA_NODE_ID: 1  # A unique ID for this Kafka instance.
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@10.8.1.119:9093  # Defines the controller voters.
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.8.1.119:9092
      KAFKA_LOG_DIRS: /var/lib/kafka/data  # Where Kafka stores its logs.
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: true  # Kafka will automatically create topics if needed.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1  # Since we're running one broker, one replica is enough.
      KAFKA_LOG_RETENTION_HOURS: 168  # Keep logs for 7 days.
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0  # No delay for consumer rebalancing.
      CLUSTER_ID: Mk3OEYBSD34fcwNTJENDM2Qk  # A unique ID for the Kafka cluster.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/var/lib/kafka/data  # Store Kafka logs on your local machine.
```

Note that this file exposes two ports: 9092 for client connections and 9093 for intra-cluster (controller) communication. `KAFKA_KRAFT_MODE: true` selects KRaft instead of ZooKeeper. `CLUSTER_ID` can be changed to a different value.

...
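If you want your own `CLUSTER_ID`, Kafka cluster IDs are conventionally the url-safe, unpadded base64 encoding of 16 random bytes (22 characters), the same format produced by the `kafka-storage random-uuid` tool shipped with Kafka. A minimal Python sketch (the function name is mine, not part of any Kafka API) that generates a compatible ID without that tool:

```python
import base64
import uuid

def random_cluster_id() -> str:
    """Generate a Kafka-style cluster ID: 16 random bytes,
    url-safe base64 with the '=' padding stripped (22 characters)."""
    raw = uuid.uuid4().bytes  # 16 random bytes
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

if __name__ == "__main__":
    print(random_cluster_id())
```

Paste the printed value into the `CLUSTER_ID` line of the compose file before first startup; changing it after the data directory has been formatted will make the broker refuse to start against the old data.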