
karmada-io/karmada

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration


Top Related Projects

  • KubeFed: Kubernetes Cluster Federation
  • Istio: Connect, secure, control, and observe services
  • Kubernetes: Production-Grade Container Scheduling and Management
  • Fleet: Deploy workloads from Git to large fleets of Kubernetes clusters
  • Argo CD: Declarative Continuous Deployment for Kubernetes

Quick Overview

Karmada (Kubernetes Armada) is an open-source project that enables multi-cluster application management for Kubernetes. It provides a unified control plane to automate the deployment and management of applications across multiple Kubernetes clusters, addressing challenges in multi-cloud and hybrid cloud scenarios.

Pros

  • Simplifies multi-cluster management with a centralized control plane
  • Supports automatic failover and load balancing across clusters
  • Enables consistent policy enforcement and resource propagation
  • Integrates well with existing Kubernetes ecosystems and tools

Cons

  • Adds complexity to the overall infrastructure setup
  • Requires additional learning curve for teams new to multi-cluster management
  • May introduce latency in cross-cluster operations
  • Limited maturity compared to single-cluster Kubernetes solutions

Getting Started

To get started with Karmada, follow these steps:

  1. Install Karmada on your Kubernetes cluster:
curl -s https://raw.githubusercontent.com/karmada-io/karmada/master/hack/install-cli.sh | bash
kubectl create namespace karmada-system
karmadactl init
  2. Register member clusters:
karmadactl join member1 --cluster-kubeconfig=/path/to/member1.kubeconfig
karmadactl join member2 --cluster-kubeconfig=/path/to/member2.kubeconfig
  3. Create a PropagationPolicy to define how resources should be distributed:
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
  4. Apply your Kubernetes resources as usual, and Karmada will handle the distribution based on the PropagationPolicy, as shown below.
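
For example, a plain Deployment like the one below (an illustrative sketch; the name, replica count, and image are arbitrary) matches the example-policy above and will be placed on member1 and member2. Apply it against the Karmada API server with kubectl as usual:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                # matched by the policy's apps/v1 Deployment selector
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25  # hypothetical image tag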

For more detailed instructions and advanced usage, refer to the Karmada documentation.

Competitor Comparisons

KubeFed: Kubernetes Cluster Federation

Pros of KubeFed

  • More mature project with longer history and wider adoption
  • Supports a broader range of Kubernetes versions
  • Offers more granular control over resource propagation

Cons of KubeFed

  • Less active development and community support
  • Limited support for multi-cluster scheduling and load balancing
  • More complex setup and configuration process

Code Comparison

KubeFed configuration example:

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      template:
        spec:
          containers:
          - image: nginx:1.7.9
            name: nginx

Karmada configuration example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2

Karmada uses native Kubernetes resources, while KubeFed introduces custom resource types for federation.
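
In practice, the native Deployment above is paired with a separate PropagationPolicy rather than being wrapped in a federated type; a minimal sketch (the cluster names are illustrative):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx            # selects the unmodified Deployment shown above
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2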

Istio: Connect, secure, control, and observe services.

Pros of Istio

  • Robust service mesh capabilities with advanced traffic management and security features
  • Extensive observability and telemetry for microservices
  • Large, active community and ecosystem support

Cons of Istio

  • Steeper learning curve and complexity compared to Karmada
  • Higher resource overhead due to sidecar proxy deployment
  • Primarily focused on service mesh, while Karmada offers broader multi-cluster management

Code Comparison

Istio (service mesh configuration):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2

Karmada (multi-cluster resource propagation):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2

Kubernetes: Production-Grade Container Scheduling and Management

Pros of Kubernetes

  • Mature and widely adopted container orchestration platform
  • Extensive ecosystem with a vast array of tools and integrations
  • Robust community support and extensive documentation

Cons of Kubernetes

  • Complex setup and management, especially for multi-cluster environments
  • Limited built-in support for multi-cluster resource management
  • Steep learning curve for newcomers

Code Comparison

Kubernetes manifest example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2

Karmada manifest example:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx-deployment
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2

The Kubernetes manifest defines a deployment within a single cluster, while the Karmada manifest demonstrates how to propagate a deployment across multiple clusters using a PropagationPolicy.

Fleet: Deploy workloads from Git to large fleets of Kubernetes clusters

Pros of Fleet

  • Simpler setup and configuration process
  • Tighter integration with Rancher ecosystem
  • Better support for GitOps workflows

Cons of Fleet

  • Less flexible in terms of multi-cluster resource distribution
  • Limited support for advanced scheduling and placement strategies
  • Fewer options for customizing resource propagation policies

Code Comparison

Fleet configuration example:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo
  namespace: fleet-default
spec:
  repo: https://github.com/example/repo
  paths:
  - manifests

Karmada configuration example:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2

Fleet focuses on GitOps-based deployments, while Karmada provides more granular control over resource distribution across clusters. Fleet is better suited for simpler multi-cluster setups within the Rancher ecosystem, whereas Karmada offers more advanced features for complex multi-cluster environments and resource scheduling strategies.

Argo CD: Declarative Continuous Deployment for Kubernetes

Pros of Argo CD

  • Mature and widely adopted GitOps tool for Kubernetes
  • Rich UI and visualization features for application deployments
  • Supports multiple cluster management and multi-tenancy

Cons of Argo CD

  • Primarily focused on application deployment, not multi-cluster management
  • Limited support for cross-cluster resource scheduling and balancing
  • May require additional tools for comprehensive multi-cluster orchestration

Code Comparison

Argo CD (Application CRD):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/myapp
    repoURL: https://github.com/argoproj/argocd-example-apps.git

Karmada (PropagationPolicy):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2

Argo CD excels in GitOps-based application deployment and visualization, while Karmada focuses on multi-cluster resource management and scheduling. Argo CD is more suitable for application-centric workflows, whereas Karmada provides broader multi-cluster orchestration capabilities.


README

Karmada



Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.


Karmada is an incubation project of the Cloud Native Computing Foundation (CNCF).

Why Karmada:

  • K8s Native API Compatible

    • Zero change upgrade, from single-cluster to multi-cluster
    • Seamless integration of existing K8s tool chain
  • Out of the Box

    • Built-in policy sets for scenarios, including: Active-active, Remote DR, Geo Redundant, etc.
    • Cross-cluster auto-scaling, failover, and load balancing for applications across multiple clusters.
  • Avoid Vendor Lock-in

    • Integration with mainstream cloud providers
    • Automatic allocation and migration of workloads across clusters
    • Not tied to proprietary vendor orchestration
  • Centralized Management

    • Location agnostic cluster management
    • Supports clusters in public cloud, on-premises, or at the edge
  • Fruitful Multi-Cluster Scheduling Policies

    • Cluster Affinity, Multi-Cluster Splitting/Rebalancing (see the policy sketch after this list)
    • Multi-Dimension HA: Region/AZ/Cluster/Provider
  • Open and Neutral

    • Jointly initiated by Internet, finance, manufacturing, telecom, and cloud providers, among others
    • Target for open governance with CNCF
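
As an example of the splitting/rebalancing policies mentioned above, a PropagationPolicy can divide a workload's replicas across clusters by static weight; a minimal sketch (the workload name, cluster names, and weights are illustrative):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: weighted-split
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Divided          # split the total replicas across clusters
      replicaDivisionPreference: Weighted     # divide according to the static weights below
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 2
          - targetCluster:
              clusterNames:
                - member2
            weight: 1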

Notice: this project was developed as a continuation of Kubernetes Federation v1 and v2. Some basic concepts are inherited from these two versions.

Architecture


The Karmada Control Plane consists of the following components:

  • Karmada API Server
  • Karmada Controller Manager
  • Karmada Scheduler

ETCD stores the Karmada API objects; the API Server is the REST endpoint that all other components talk to; and the Karmada Controller Manager performs operations based on the API objects you create through the API Server.

The Karmada Controller Manager runs the various controllers; the controllers watch Karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.

  1. Cluster Controller: attaches Kubernetes clusters to Karmada and manages their lifecycle through Cluster objects.
  2. Policy Controller: watches PropagationPolicy objects. When a PropagationPolicy is added, it selects the group of resources matching the resourceSelector and creates a ResourceBinding for each matched resource object.
  3. Binding Controller: watches ResourceBinding objects and creates a Work object for each target cluster, each containing a single resource manifest.
  4. Execution Controller: watches Work objects. When a Work object is created, it distributes the contained resources to the corresponding member cluster.
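
To make the chain in steps 2-4 concrete, a Work object is created in a per-cluster execution namespace and wraps the manifest to be applied in that cluster. A rough, abbreviated sketch (the names are illustrative):

apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  name: nginx-work
  namespace: karmada-es-member1      # execution namespace for the target cluster
spec:
  workload:
    manifests:
      - apiVersion: apps/v1          # the propagated resource, embedded verbatim
        kind: Deployment
        metadata:
          name: nginx
        # remaining Deployment fields omitted in this sketch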

Concepts

Resource template: Karmada uses the Kubernetes Native API definition as the federated resource template, making it easy to integrate with existing tools that have already adopted Kubernetes.

Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements.

  • Supports a 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create a federated application (see the sketch below).
  • With default policies, users can interact with the K8s API as usual.
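
For instance, a single policy can select every Deployment that carries a given label, so new workloads with that label are propagated without any per-application policy; a minimal sketch (the label and cluster names are illustrative):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: team-a-default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      labelSelector:
        matchLabels:
          team: a              # every Deployment with this label is matched
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2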

Override Policy: Karmada provides a standalone Override Policy API for specializing cluster-relevant configuration automatically. For example:

  • Override image prefix according to member cluster region
  • Override StorageClass according to cloud provider
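
A minimal OverridePolicy sketch for the first case, rewriting the image registry for one member cluster (the workload name, cluster name, and registry value are illustrative):

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-registry-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1
      overriders:
        imageOverrider:
          - component: Registry        # rewrite only the registry portion of the image
            operator: replace
            value: registry.example.com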

The following diagram shows how Karmada resources are involved when propagating resources to member clusters.

karmada-resource-relation

Quick Start

This guide will cover:

  • Install the Karmada control plane components in a Kubernetes cluster, which is known as the host cluster.
  • Join a member cluster to the Karmada control plane.
  • Propagate an application by using Karmada.

Prerequisites

Install the Karmada control plane

1. Clone this repo to your machine:

git clone https://github.com/karmada-io/karmada

2. Change to the karmada directory:

cd karmada

3. Deploy and run Karmada control plane:

Run the following script:

hack/local-up-karmada.sh

This script will do the following tasks for you:

  • Start a Kubernetes cluster to run the Karmada control plane, a.k.a. the host cluster.
  • Build Karmada control plane components based on the current codebase.
  • Deploy Karmada control plane components on the host cluster.
  • Create member clusters and join them to Karmada.

If everything goes well, you will see messages similar to the following at the end of the script output:

Local Karmada is running.

To start using your Karmada environment, run:
  export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.

To manage your member clusters, run:
  export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.

There are two contexts in Karmada:

  • karmada-apiserver: kubectl config use-context karmada-apiserver
  • karmada-host: kubectl config use-context karmada-host

The karmada-apiserver context is the main one to use when interacting with the Karmada control plane, while karmada-host is only used for debugging the Karmada installation on the host cluster. You can check all clusters at any time by running kubectl config view. To switch cluster contexts, run kubectl config use-context [CONTEXT_NAME].

Demo


Propagate application

In the following steps, we are going to propagate a deployment with Karmada.

1. Create nginx deployment in Karmada.

First, create a deployment named nginx:

kubectl create -f samples/nginx/deployment.yaml

2. Create a PropagationPolicy that will propagate nginx to the member clusters

Then, we need to create a policy to propagate the deployment to our member clusters.

kubectl create -f samples/nginx/propagationpolicy.yaml
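
The sample policy follows the same pattern as the policies shown earlier: it selects the nginx Deployment and places it on the member clusters. An illustrative sketch of such a policy (not necessarily the exact contents of samples/nginx/propagationpolicy.yaml):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2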

3. Check the deployment status from Karmada

You can check the deployment status from Karmada without accessing the member clusters:

$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           20s

Kubernetes compatibility

Kubernetes versions covered: 1.16, 1.17, 1.18, 1.19, 1.20, 1.21, 1.22, 1.23, 1.24, 1.25, 1.26, 1.27, 1.28, 1.29

  • Karmada v1.7: ✓ for all listed Kubernetes versions
  • Karmada v1.8: ✓ for all listed Kubernetes versions
  • Karmada v1.9: ✓ for all listed Kubernetes versions
  • Karmada HEAD (master): ✓ for all listed Kubernetes versions

Key:

  • ✓ Karmada and the Kubernetes version are exactly compatible.
  • + Karmada has features or API objects that may not be present in the Kubernetes version.
  • - The Kubernetes version has features or API objects that Karmada can't use.

Meeting

Regular Community Meeting:

Resources:

Contact

If you have questions, feel free to reach out to us in the following ways:

Talks and References

  • KubeCon (EU 2021): Beyond federation: automating multi-cloud workloads with K8s native APIs
  • KubeCon (EU 2022): Sailing Multi Cloud Traffic Management With Karmada
  • KubeDay (Israel 2023): Simplifying Multi-cluster Kubernetes Management with Karmada
  • KubeCon (China 2023): Multi-Cloud Multi-Cluster HPA Helps Trip.com Group Deal with Business Downturn and Rapid Recovery
  • KubeCon (China 2023): Break Through Cluster Boundaries to Autoscale Workloads Across Them on a Large Scale
  • KubeCon (China 2023): Cross-Cluster Traffic Orchestration with eBPF
  • KubeCon (China 2023): Non-Intrusively Enable OpenKruise and Argo Workflow in a Multi-Cluster Federation

For blogs, please refer to the website.

Contributing

If you're interested in being a contributor and want to get involved in developing the Karmada code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.

License

Karmada is under the Apache 2.0 license. See the LICENSE file for details.