cp-all-in-one
docker-compose.yml files for cp-all-in-one, cp-all-in-one-community, and cp-all-in-one-cloud (Apache Kafka, Confluent Platform)
Top Related Projects
Awesome Docker Compose samples
Bitnami container images
Dockerfile for Apache Kafka
Kafka (and Zookeeper) in Docker
Kafka Docker for development: Kafka, Zookeeper, Schema Registry, Kafka-Connect, 20+ connectors
Quick Overview
The confluentinc/cp-all-in-one repository is a collection of Docker Compose files and supporting resources for deploying Confluent Platform components in various configurations. It provides an easy way to set up and run Confluent Platform services for development, testing, and demonstration purposes.
Pros
- Simplifies the deployment of Confluent Platform components using Docker Compose
- Offers multiple configuration options to suit different use cases and requirements
- Provides a quick and easy way to get started with Confluent Platform for development and testing
- Includes examples and documentation for various deployment scenarios
Cons
- Not recommended for production use due to its focus on development and testing environments
- May require significant system resources when running all components simultaneously
- Limited customization options compared to manual deployments or Kubernetes-based solutions
- Potential for version conflicts or compatibility issues with specific Confluent Platform releases
Getting Started
To get started with cp-all-in-one, follow these steps:

1. Clone the repository:
   git clone https://github.com/confluentinc/cp-all-in-one.git
2. Navigate to the desired configuration directory (e.g., cp-all-in-one-community):
   cd cp-all-in-one/cp-all-in-one-community
3. Start the services using Docker Compose:
   docker-compose up -d
4. Verify that the services are running:
   docker-compose ps
5. Access the Confluent Control Center at http://localhost:9021 to manage and monitor your Kafka cluster.

To stop and remove the containers, run:
docker-compose down

Note: Ensure that you have Docker and Docker Compose installed on your system before running these commands.
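For reference, the Control Center service that the browser connects to on port 9021 is defined in the repository's docker-compose.yml along these lines. This is a minimal sketch using the documented cp-enterprise-control-center environment variables; the image tag, service names, and bootstrap address are illustrative and vary by release:

```yaml
control-center:
  image: confluentinc/cp-enterprise-control-center:7.7.1
  depends_on:
    - broker
  ports:
    - "9021:9021"   # web UI at http://localhost:9021
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
    CONTROL_CENTER_REPLICATION_FACTOR: 1
```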
Competitor Comparisons
Awesome Docker Compose samples
Pros of awesome-compose
- Offers a wide variety of Docker Compose examples for different technologies and stacks
- Provides a learning resource for Docker Compose best practices and configurations
- Includes examples for popular web frameworks, databases, and development tools
Cons of awesome-compose
- Not focused on a specific technology stack or platform
- May require more setup and configuration for production use
- Lacks the integrated ecosystem approach of cp-all-in-one
Code Comparison
awesome-compose example (Python Flask with Redis):

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
```

cp-all-in-one example (Kafka and Zookeeper setup):

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
```
Summary
awesome-compose is a versatile collection of Docker Compose examples for various technologies, making it an excellent learning resource. However, it may require more setup for production use. cp-all-in-one, on the other hand, provides a more focused and integrated approach for Confluent Platform components but is limited to that specific ecosystem.
Bitnami container images
Pros of containers
- Broader scope: Offers containers for various applications beyond Kafka ecosystem
- More frequent updates: Actively maintained with regular releases
- Flexibility: Can be used independently or as part of larger Bitnami stack
Cons of containers
- Less Kafka-specific: May require more configuration for Kafka-centric deployments
- Steeper learning curve: Wider range of options can be overwhelming for beginners
- Limited integration: Lacks tight integration with Confluent-specific tools
Code comparison
cp-all-in-one:

```yaml
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
```

containers:

```yaml
---
version: '2'
services:
  kafka:
    image: bitnami/kafka:latest
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
```
The cp-all-in-one repository focuses specifically on Confluent Platform components, providing a streamlined setup for Kafka and related services. It offers a more opinionated and integrated approach, ideal for users wanting a quick start with Confluent's ecosystem.
containers, on the other hand, provides a wider range of containerized applications, including Kafka. It offers more flexibility and options but may require additional configuration for a complete Kafka setup. This repository is better suited for users who need a variety of containerized applications or prefer more control over their deployment.
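To make the "additional configuration" point concrete, a complete single-broker setup with the Bitnami images needs at least a Zookeeper service wired to the broker. The sketch below uses environment variables from Bitnami's image documentation; the `latest` tags and plaintext settings are illustrative, development-only choices:

```yaml
version: '2'
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes   # dev/test only
  kafka:
    image: bitnami/kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes   # dev/test only
```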
Dockerfile for Apache Kafka
Pros of kafka-docker
- Lightweight and focused solely on Kafka, making it easier to understand and customize
- More flexibility in configuring Kafka settings and cluster topology
- Actively maintained with frequent updates and community contributions
Cons of kafka-docker
- Lacks additional components like Schema Registry, Kafka Connect, and ksqlDB
- Requires more manual setup and configuration for advanced features
- May not be as production-ready out of the box compared to cp-all-in-one
Code Comparison
kafka-docker:

```yaml
version: '2'
services:
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

cp-all-in-one:

```yaml
version: '2'
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
```
The code snippets show that both repositories use Docker Compose to set up Kafka, but cp-all-in-one includes additional services and configuration options for a more comprehensive Kafka ecosystem.
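As an example of those additional services, adding Schema Registry to a cp stack is roughly one more service block. This is a sketch using the documented cp-schema-registry environment variables; the service name `kafka` and the bootstrap address are assumptions that must match the rest of the compose file:

```yaml
schema-registry:
  image: confluentinc/cp-schema-registry:latest
  depends_on:
    - kafka
  ports:
    - "8081:8081"
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'kafka:9092'
```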
Kafka (and Zookeeper) in Docker
Pros of docker-kafka
- Lightweight and focused solely on Kafka, making it easier to understand and customize
- Provides a simple Docker setup for Kafka and Zookeeper, ideal for development and testing
- Actively maintained by Spotify, benefiting from their expertise in large-scale Kafka deployments
Cons of docker-kafka
- Limited to basic Kafka functionality, lacking additional components like Schema Registry or Kafka Connect
- May require more manual configuration for advanced use cases or production environments
- Does not include a comprehensive suite of tools for managing and monitoring Kafka clusters
Code Comparison
docker-kafka:

```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
```

cp-all-in-one:

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
  kafka:
    image: confluentinc/cp-kafka:7.3.2
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```
The cp-all-in-one repository provides a more comprehensive Kafka ecosystem with additional components and tools, making it suitable for production-like environments. However, docker-kafka offers a simpler setup focused on core Kafka functionality, which can be beneficial for specific use cases or learning purposes.
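One practical difference between the two snippets is listener configuration: `KAFKA_ADVERTISED_HOST_NAME: localhost` only works for clients on the host, while the cp images are usually configured with two listeners so that both in-network containers and host clients can connect. A sketch of that common pattern, assuming the broker service is named `kafka` (names and ports are illustrative):

```yaml
kafka:
  image: confluentinc/cp-kafka:7.3.2
  ports:
    - "9092:9092"   # host clients connect here
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # internal listener for containers, external listener for the host
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
```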
Kafka Docker for development: Kafka, Zookeeper, Schema Registry, Kafka-Connect, 20+ connectors
Pros of fast-data-dev
- Includes a wider range of Kafka ecosystem tools and connectors out-of-the-box
- Provides a web UI for easier management and monitoring
- Designed for quick setup and experimentation with minimal configuration
Cons of fast-data-dev
- Less frequently updated compared to cp-all-in-one
- May not always align with the latest Confluent Platform versions
- Limited to development and testing use cases, not suitable for production
Code Comparison
fast-data-dev:

```dockerfile
FROM landoop/fast-data-dev:latest
ENV ADV_HOST=127.0.0.1
EXPOSE 2181 3030 8081-8083 9581-9585 9092
```

cp-all-in-one:

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
```
Both repositories aim to provide a comprehensive Kafka development environment, but they differ in their approach and included components. fast-data-dev offers a more user-friendly experience with its web UI and pre-configured tools, making it ideal for quick prototyping and learning. cp-all-in-one, being part of the official Confluent Platform, provides a more production-like setup and is regularly updated to match the latest Confluent releases. The choice between the two depends on the specific use case, with fast-data-dev being more suitable for rapid development and experimentation, while cp-all-in-one is better for those seeking a closer representation of a production Confluent environment.
README
cp-all-in-one
- cp-all-in-one: Confluent Enterprise License version of Confluent Platform, including Confluent Server, Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, ksqlDB, and Flink.
- cp-all-in-one-community: Confluent Community License version of Confluent Platform, including the Kafka broker, Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, ksqlDB, and Flink.
- cp-all-in-one-cloud: Docker Compose files that can be used to run Confluent Platform components (Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, or ksqlDB) against Confluent Cloud.
- cp-all-in-one-security/oauth: Confluent Enterprise License version of Confluent Platform that showcases Confluent Platform's OAuth 2.0 support using the Keycloak identity provider.
Usage as a GitHub Action
- service: up to which service in the docker-compose.yml file to run. Default is none, so all services are run.
- github-branch-version: which GitHub branch of cp-all-in-one to run. Default is latest.
- type: cp-all-in-one (based on Confluent Server) or cp-all-in-one-community (based on Apache Kafka)
Example to run Confluent Server on Confluent Platform 7.7.1:

```yaml
steps:
  - name: Run Confluent Platform (Confluent Server)
    uses: confluentinc/cp-all-in-one@v0.1
    with:
      service: broker
      github-branch-version: 7.7.1-post
```
Example to run all Apache Kafka services on latest:

```yaml
steps:
  - name: Run Confluent Platform (Confluent Server)
    uses: confluentinc/cp-all-in-one@v0.1
    with:
      type: cp-all-in-one-community
```
Ports
To connect to services in Docker, refer to the following ports:
- Kafka broker: 9092
- Kafka broker JMX: 9101
- Confluent Schema Registry: 8081
- Kafka Connect: 8083
- Confluent Control Center: 9021
- ksqlDB: 8088
- Confluent REST Proxy: 8082
- Flink Job Manager: 9081
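In the compose files these are plain host-to-container port mappings, so each port above appears under its service's `ports:` key. An illustrative fragment (service names assumed):

```yaml
services:
  broker:
    ports:
      - "9092:9092"   # Kafka broker
      - "9101:9101"   # broker JMX
  schema-registry:
    ports:
      - "8081:8081"
  control-center:
    ports:
      - "9021:9021"
```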