conduktor / kafka-stack-docker-compose

docker compose files to create a fully working kafka stack

Top Related Projects

Dockerfile for Apache Kafka

Kafka (and Zookeeper) in Docker

[DEPRECATED] Docker images for Confluent Platform.

Kafka Docker for development. Kafka, Zookeeper, Schema Registry, Kafka-Connect, Landoop Tools, 20+ connectors

Quick Overview

The conduktor/kafka-stack-docker-compose repository provides a comprehensive Docker Compose setup for running Apache Kafka and its ecosystem components. It offers various configurations for different use cases, including single and multi-broker setups, as well as additional tools like Kafka Connect, Schema Registry, and KSQL.

Pros

  • Easy setup of a complete Kafka environment with just a few commands
  • Multiple pre-configured scenarios for different use cases and complexity levels
  • Includes additional tools and services commonly used in Kafka ecosystems
  • Regularly updated to keep up with the latest versions of Kafka and related components

Cons

  • May consume significant system resources, especially for larger setups
  • Requires Docker and Docker Compose knowledge for customization
  • Not suitable for production environments without further configuration and security measures
  • Limited documentation for advanced use cases or troubleshooting

Getting Started

To get started with the Kafka stack using this Docker Compose setup:

  1. Clone the repository:

    git clone https://github.com/conduktor/kafka-stack-docker-compose.git
    
  2. Navigate to the cloned directory:

    cd kafka-stack-docker-compose
    
  3. Start the desired Kafka stack (e.g., for a single Kafka broker setup):

    docker-compose -f zk-single-kafka-single.yml up -d
    
  4. Verify the services are running:

    docker-compose -f zk-single-kafka-single.yml ps
    
  5. To stop the services:

    docker-compose -f zk-single-kafka-single.yml down
    

Note: Replace zk-single-kafka-single.yml with the desired configuration file for different setups.
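
Once the stack is up, a quick sanity check is to tail the broker logs and ask it for its topic list. This is a minimal sketch assuming the single-broker setup, where the broker service is named kafka1 as in the provided compose files:

# Tail the broker logs until it reports it has started
docker-compose -f zk-single-kafka-single.yml logs kafka1

# List topics to confirm the broker answers on localhost:9092
docker exec -t kafka1 kafka-topics --bootstrap-server localhost:9092 --list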

Competitor Comparisons

Dockerfile for Apache Kafka

Pros of kafka-docker

  • Simpler setup with fewer components, making it easier to understand and customize
  • Longer history and wider community adoption, potentially offering more stability
  • Flexibility to use with various Kafka versions

Cons of kafka-docker

  • Lacks additional tools like Kafka Connect, Schema Registry, and KSQL
  • May require more manual configuration for advanced setups
  • Less frequent updates compared to kafka-stack-docker-compose

Code Comparison

kafka-docker:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"

kafka-stack-docker-compose:

version: '3.5'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    ports:
      - "2181:2181"
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    ports:
      - "9092:9092"

The kafka-docker repository provides a more basic setup focused on Kafka and Zookeeper, while kafka-stack-docker-compose offers a comprehensive solution with additional components from the Confluent Platform. kafka-docker uses custom images, whereas kafka-stack-docker-compose utilizes official Confluent images. The choice between the two depends on the specific requirements of your project and the level of complexity you're comfortable managing.

Kafka (and Zookeeper) in Docker

Pros of docker-kafka

  • Simpler setup with fewer components, making it easier to understand and deploy
  • Lightweight and focused specifically on Kafka, suitable for basic use cases
  • Maintained by Spotify, potentially benefiting from their expertise in large-scale Kafka deployments

Cons of docker-kafka

  • Limited additional tools and services compared to kafka-stack-docker-compose
  • Less flexibility for complex Kafka cluster configurations
  • Fewer options for monitoring and management of the Kafka ecosystem

Code Comparison

kafka-stack-docker-compose:

version: '3.5'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

docker-kafka:

version: '2'
services:
  kafka:
    image: spotify/kafka
    ports:
     - "9092:9092"
    environment:
      ADVERTISED_HOST: localhost
      ADVERTISED_PORT: 9092

The kafka-stack-docker-compose repository offers a more comprehensive Kafka ecosystem setup, including Zookeeper, Schema Registry, and other tools. It provides greater flexibility and options for monitoring and management. On the other hand, docker-kafka presents a simpler, more focused approach, which may be preferable for basic Kafka deployments or learning purposes. The choice between the two depends on the specific requirements of your project and the level of complexity you're comfortable managing.

[DEPRECATED] Docker images for Confluent Platform.

Pros of cp-docker-images

  • Offers a comprehensive suite of Confluent Platform components, including Schema Registry, REST Proxy, and ksqlDB
  • Provides official, production-ready images maintained by Confluent
  • Includes advanced features like role-based access control and monitoring tools

Cons of cp-docker-images

  • More complex setup and configuration compared to kafka-stack-docker-compose
  • Requires a Confluent license for some enterprise features
  • Larger image sizes due to the inclusion of additional components

Code Comparison

kafka-stack-docker-compose:

version: '3.5'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

cp-docker-images:

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

Both repositories provide Docker Compose configurations for setting up Kafka environments. kafka-stack-docker-compose offers a simpler, more lightweight setup focused on core Kafka components, making it ideal for development and testing. cp-docker-images provides a more comprehensive solution with additional Confluent Platform components, suitable for production environments and advanced use cases.

Kafka Docker for development. Kafka, Zookeeper, Schema Registry, Kafka-Connect, Landoop Tools, 20+ connectors

Pros of fast-data-dev

  • All-in-one solution with Kafka and additional tools pre-configured
  • Includes a web UI for easier management and monitoring
  • Supports multiple Kafka versions out of the box

Cons of fast-data-dev

  • Less flexibility in configuration compared to kafka-stack-docker-compose
  • Larger image size due to inclusion of additional tools
  • May be overkill for simple Kafka setups or testing scenarios

Code Comparison

fast-data-dev:

version: '2'
services:
  fast-data-dev:
    image: lensesio/fast-data-dev
    ports:
      - "2181:2181"
      - "9092:9092"
      - "8081-8083:8081-8083"

kafka-stack-docker-compose:

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
  kafka:
    image: confluentinc/cp-kafka:7.3.2
    depends_on:
      - zookeeper

The fast-data-dev setup is simpler, using a single container for all components, while kafka-stack-docker-compose provides separate containers for Zookeeper and Kafka, allowing for more granular control and scalability.

README

An open-source project by Conduktor.io

This project is sponsored by Conduktor.io, a graphical desktop user interface for Apache Kafka.

Once you have started your cluster, you can use Conduktor to manage it easily. Just connect to localhost:9092. If you are on Mac or Windows and want to connect from another container, use host.docker.internal:29092.
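
For example, to check the broker from another container on Mac or Windows, you can list the cluster metadata with kcat (the image name and tag here are assumptions; any Kafka client will do):

docker run --rm edenhill/kcat:1.7.1 -b host.docker.internal:29092 -L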

kafka-stack-docker-compose

This setup replicates real deployment configurations as closely as possible, with your Zookeeper servers and Kafka servers all distinct from each other. It solves all the networking hurdles that come with Docker and Docker Compose, and works cross-platform.

UPDATE: No /etc/hosts file changes are necessary anymore. Explanations at: https://rmoff.net/2018/08/02/kafka-listeners-explained/

Stack version

  • Conduktor Platform: latest
  • Zookeeper version: 3.6.3 (Confluent 7.3.2)
  • Kafka version: 3.3.0 (Confluent 7.3.2)
  • Kafka Schema Registry: Confluent 7.3.2
  • Kafka Rest Proxy: Confluent 7.3.2
  • Kafka Connect: Confluent 7.3.2
  • ksqlDB Server: Confluent 7.3.2
  • Zoonavigator: 1.1.1

For a UI tool to access your local Kafka cluster, use the free version of Conduktor

Requirements

Kafka will be exposed on 127.0.0.1 or DOCKER_HOST_IP if set in the environment. (You probably don't need to set it if you're not using Docker-Toolbox)

Docker-Toolbox

Docker Toolbox has been deprecated and unmaintained for several years. We can't guarantee this stack will work with Docker Toolbox, but if you want to try anyway, export your environment before starting the stack:

export DOCKER_HOST_IP=192.168.99.100

(your docker machine IP is usually 192.168.99.100)

Apple M1 support

Confluent Platform has supported Apple M1 (ARM64) since version 7.2.0, so this stack works out of the box.

If you want to downgrade the Confluent Platform version, there are two ways:

  1. Add platform: linux/amd64 to the service definition. It will work because Docker can emulate AMD64 instructions.
  2. Previous versions have been built for ARM64 by the community. If you want to use one, just change the image in the corresponding yml. Since these are not official images, use them at your own risk.

Full stack

To ease your journey with Kafka, just connect to localhost:8080.

login: admin@admin.io password: admin

  • Conduktor-platform: $DOCKER_HOST_IP:8080
  • Single Zookeeper: $DOCKER_HOST_IP:2181
  • Single Kafka: $DOCKER_HOST_IP:9092
  • Kafka Schema Registry: $DOCKER_HOST_IP:8081
  • Kafka Rest Proxy: $DOCKER_HOST_IP:8082
  • Kafka Connect: $DOCKER_HOST_IP:8083
  • KSQL Server: $DOCKER_HOST_IP:8088
  • (experimental) JMX port at $DOCKER_HOST_IP:9001

Run with:

docker compose -f full-stack.yml up
docker compose -f full-stack.yml down

Note: if you cannot connect to localhost:8080, run docker compose -f full-stack.yml build to rebuild the port mappings.
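
As a quick health check once the full stack is up, you can hit the REST endpoints of the individual components. A sketch assuming the default ports listed above:

# Schema Registry: list registered subjects (empty on a fresh stack)
curl -s localhost:8081/subjects
# Kafka REST Proxy: list topics
curl -s localhost:8082/topics
# Kafka Connect: list deployed connectors
curl -s localhost:8083/connectors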

Single Zookeeper / Single Kafka

This configuration fits most development requirements.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181
  • Kafka will be available at $DOCKER_HOST_IP:9092
  • (experimental) JMX port at $DOCKER_HOST_IP:9999

Run with:

docker compose -f zk-single-kafka-single.yml up
docker compose -f zk-single-kafka-single.yml down
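
To verify the broker end to end, you can create a topic, produce a message, and consume it back. This is a minimal sketch assuming the broker container is named kafka1, as in zk-single-kafka-single.yml:

# Create a test topic
docker exec -t kafka1 kafka-topics --bootstrap-server localhost:9092 --create --topic demo --partitions 1 --replication-factor 1
# Produce one message
echo "hello" | docker exec -i kafka1 kafka-console-producer --bootstrap-server localhost:9092 --topic demo
# Consume it back
docker exec -t kafka1 kafka-console-consumer --bootstrap-server localhost:9092 --topic demo --from-beginning --max-messages 1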

Single Zookeeper / Multiple Kafka

Use this configuration if you want three brokers to experiment with Kafka replication and fault tolerance.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181
  • Kafka will be available at $DOCKER_HOST_IP:9092,$DOCKER_HOST_IP:9093,$DOCKER_HOST_IP:9094

Run with:

docker compose -f zk-single-kafka-multiple.yml up
docker compose -f zk-single-kafka-multiple.yml down
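
With three brokers you can create a replicated topic and see how partitions, leaders, and replicas are spread. A sketch assuming the broker containers are named kafka1/kafka2/kafka3, as in zk-single-kafka-multiple.yml:

# Create a topic replicated across all three brokers
docker exec -t kafka1 kafka-topics --bootstrap-server localhost:9092 --create --topic replicated-demo --partitions 3 --replication-factor 3
# Show which broker leads each partition and where the replicas live
docker exec -t kafka1 kafka-topics --bootstrap-server localhost:9092 --describe --topic replicated-demo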

Multiple Zookeeper / Single Kafka

Use this configuration if you want three Zookeeper nodes to experiment with Zookeeper fault tolerance.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181,$DOCKER_HOST_IP:2182,$DOCKER_HOST_IP:2183
  • Kafka will be available at $DOCKER_HOST_IP:9092
  • (experimental) JMX port at $DOCKER_HOST_IP:9999

Run with:

docker compose -f zk-multiple-kafka-single.yml up
docker compose -f zk-multiple-kafka-single.yml down
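
To confirm the ensemble is serving requests, you can query it through the Kafka tools shipped in the broker image, for example by listing the registered broker ids. A sketch assuming the service names zoo1 and kafka1 from zk-multiple-kafka-single.yml:

# List broker ids registered in Zookeeper (the broker has joined once its id appears here)
docker exec -t kafka1 zookeeper-shell zoo1:2181 ls /brokers/ids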

Multiple Zookeeper / Multiple Kafka

Use this configuration if you want three Zookeeper nodes and three Kafka brokers to experiment with a production-like setup.

  • Zookeeper will be available at $DOCKER_HOST_IP:2181,$DOCKER_HOST_IP:2182,$DOCKER_HOST_IP:2183
  • Kafka will be available at $DOCKER_HOST_IP:9092,$DOCKER_HOST_IP:9093,$DOCKER_HOST_IP:9094

Run with:

docker compose -f zk-multiple-kafka-multiple.yml up
docker compose -f zk-multiple-kafka-multiple.yml down
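
A simple fault-tolerance experiment is to stop one broker and watch partition leadership move to the survivors. A sketch assuming a replicated topic (e.g. the replicated-demo topic from the earlier example) and the kafka1/kafka2 service names from the compose file:

# Stop one broker
docker compose -f zk-multiple-kafka-multiple.yml stop kafka2
# Leadership for its partitions should move to the remaining brokers
docker exec -t kafka1 kafka-topics --bootstrap-server localhost:9092 --describe --topic replicated-demo
# Bring the broker back
docker compose -f zk-multiple-kafka-multiple.yml start kafka2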

FAQ

Kafka

Q: Kafka's log is too verbose, how can I reduce it?

A: Add the following line to your docker compose environment variables: KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO". Full logging control can be accessed here: https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka/include/etc/confluent/docker/log4j.properties.template
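
In the compose file this looks like the following sketch (only the relevant lines are shown; the service name kafka1 is taken from the provided yml files):

  kafka1:
    environment:
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"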

Q: How do I delete data to start fresh?

A: Your data is persisted from within the Docker Compose folder, so if you want, for example, to reset the data of the full-stack setup, run docker compose -f full-stack.yml down.

Q: Can I change the zookeeper ports?

A: Yes. Say you want to change the zoo1 port to 12181 (only the relevant lines are shown):

  zoo1:
    ports:
      - "12181:12181"
    environment:
        ZOO_PORT: 12181
        
  kafka1:
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:12181"

Q: Can I change the Kafka ports?

A: Yes. Say you want to change the kafka1 port to 12345 (only the relevant lines are shown). Note that only the EXTERNAL listener changes:

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    hostname: kafka1
    ports:
      - "12345:12345"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:12345,DOCKER://host.docker.internal:29092

Q: Kafka is using a lot of disk space for testing. Can I reduce it?

A: Yes, but this is for testing only! Reduce KAFKA_LOG_SEGMENT_BYTES to 16 MB and KAFKA_LOG_RETENTION_BYTES to 128 MB:

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    ...
    environment:
      ...
      # For testing small segments 16MB and retention of 128MB
      KAFKA_LOG_SEGMENT_BYTES: 16777216
      KAFKA_LOG_RETENTION_BYTES: 134217728

Q: How do I expose kafka?

A: If you want to expose Kafka outside of your local machine, you must set KAFKA_ADVERTISED_LISTENERS to the IP of the machine so that Kafka is externally accessible. For example, if the IP of your machine is 50.10.2.3, set the EXTERNAL listener as in the sample mapping below:

  kafka1:
    image: confluentinc/cp-kafka:7.2.1
    ...
    environment:
      ...
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:19093,EXTERNAL://50.10.2.3:9093,DOCKER://host.docker.internal:29093
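
You can then verify external access from another machine by listing the cluster metadata, for example with kcat (the tool choice is an assumption; any Kafka client works):

# From a remote machine, confirm the broker is reachable on its external listener
kcat -b 50.10.2.3:9093 -L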

Q: How do I add connectors to Kafka Connect?

A: Create a connectors directory and place your connectors there (usually in a subdirectory), e.g. connectors/example/my.jar. The directory is automatically mounted by the kafka-connect Docker container.

Alternatively, edit the bash command which pulls connectors at runtime:

confluent-hub install --no-prompt debezium/debezium-connector-mysql:latest
confluent-hub install ...
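
Either way, you can check that Kafka Connect picked up the connector by listing the installed plugins through its REST API (a sketch assuming the default Connect port 8083):

curl -s localhost:8083/connector-plugins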

Q: How do I disable Confluent metrics?

A: Add this environment variable to your Kafka services:

KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE=false
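
In the compose file, this is a one-line addition per broker (only the relevant lines are shown; the service name kafka1 is taken from the provided yml files):

  kafka1:
    environment:
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"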