Top Related Projects
minikube - Run Kubernetes locally
kind - Kubernetes IN Docker - local clusters for testing Kubernetes
Talos Linux is a modern Linux distribution built for Kubernetes.
k0s - The Zero Friction Kubernetes
MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
Quick Overview
K3s is a lightweight, certified Kubernetes distribution that is easy to install, highly available, and works everywhere. It is designed to be a production-grade Kubernetes distribution for resource-constrained environments and edge computing.
Pros
- Lightweight and Efficient: K3s is a highly optimized and compact Kubernetes distribution, making it suitable for environments with limited resources, such as edge devices and IoT applications.
- Easy to Install and Manage: K3s provides a simple and straightforward installation process, with a single binary that can be easily deployed on various platforms.
- Highly Available: K3s supports high availability out of the box, allowing for seamless failover and ensuring the reliability of your Kubernetes cluster (see the sketch after this list).
- Broad Compatibility: K3s is compatible with a wide range of hardware and software platforms, making it a versatile choice for diverse deployment scenarios.
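As a sketch of the high-availability point above (assuming the embedded etcd datastore, a placeholder shared secret, and placeholder hostnames), a multi-server cluster can be brought up roughly like this:
# On the first server, initialize the cluster with the embedded etcd datastore.
# K3S_TOKEN is a shared secret of your choosing.
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init
# On each additional server, join the existing cluster (server1 is a placeholder hostname):
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --server https://server1:6443
See the K3s high-availability documentation for the full procedure.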
Cons
- Limited Customization: While K3s aims to be a lightweight and opinionated distribution, this may limit the ability to customize certain aspects of the Kubernetes cluster to fit specific requirements.
- Reduced Feature Set: Compared to a full-fledged Kubernetes distribution, K3s may have a reduced feature set, which could be a drawback for users with more advanced requirements.
- Potential Vendor Lock-in: As K3s is primarily developed and maintained by Rancher (now part of SUSE), there may be concerns about potential vendor lock-in for users.
- Limited Community Support: While K3s has a growing community, it may not have the same level of community support and resources as larger Kubernetes distributions.
Getting Started
To get started with K3s, follow these steps:
- Install K3s using the official installation script (it downloads the K3s binary and sets it up as a service):
curl -sfL https://get.k3s.io | sh -
- Verify the installation by checking the K3s version:
k3s --version
- Start the K3s server (the install script above already starts it as a service; run this manually only if you installed the binary by hand):
k3s server
- To join a node to the K3s cluster, run the following command on the node:
k3s agent --server https://<server-ip>:6443 --token <token>
Replace <server-ip> with the IP address of the K3s server and <token> with the token provided by the server.
- Interact with the K3s cluster using the kubectl command-line tool:
kubectl get nodes
This will display the nodes in your K3s cluster.
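If you prefer a standalone kubectl over the bundled k3s kubectl, you can point it at the kubeconfig that K3s writes out; a minimal sketch (the file is root-readable by default, hence sudo):
# Use the kubeconfig generated by K3s with an external kubectl.
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes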
For more detailed instructions and advanced configuration options, please refer to the K3s documentation.
Competitor Comparisons
Pros of RKE2
- More closely aligned with upstream Kubernetes, offering better compatibility with standard K8s tools and practices
- Enhanced security features, including FIPS 140-2 compliance for government and regulated industries
- Better suited for larger, production-grade deployments with built-in high availability
Cons of RKE2
- Higher resource requirements and complexity compared to K3s
- Longer startup times and potentially slower deployment process
- Less suitable for edge computing or IoT scenarios due to its larger footprint
Code Comparison
K3s:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: traefik
RKE2:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      kind: DaemonSet
Both projects use similar Helm-based configurations, but RKE2 tends to use more standard Kubernetes resources and configurations, while K3s often employs custom resource definitions for simplicity.
Run Kubernetes locally
Pros of Minikube
- More mature and widely adopted, with extensive documentation and community support
- Supports multiple hypervisors and container runtimes
- Provides a full-featured Kubernetes environment, closely mimicking production clusters
Cons of Minikube
- Heavier resource consumption, requiring more system resources to run
- Slower startup times compared to K3s
- More complex setup and configuration process
Code Comparison
K3s:
curl -sfL https://get.k3s.io | sh -
Minikube:
minikube start --driver=docker
kubectl config use-context minikube
K3s focuses on simplicity and lightweight deployment, making it easier to set up with a single command. Minikube, on the other hand, requires additional steps for configuration and context switching.
Both projects aim to provide local Kubernetes environments, but K3s is designed for production use in resource-constrained environments, while Minikube is primarily for development and testing purposes. K3s offers a more streamlined experience with faster startup times and lower resource usage, making it suitable for edge computing and IoT scenarios. Minikube provides a more comprehensive Kubernetes experience, closely resembling production clusters, which can be beneficial for developers working on complex applications.
Kubernetes IN Docker - local clusters for testing Kubernetes
Pros of kind
- Runs Kubernetes clusters inside Docker containers, making it lightweight and easy to set up on any system with Docker installed
- Closely mimics production Kubernetes environments, providing a more accurate representation for testing and development
- Supports multi-node clusters, allowing for more complex testing scenarios
Cons of kind
- Generally slower to start up compared to k3s, especially for multi-node clusters
- Requires Docker to be installed and running, which may not be suitable for all environments
- Can be more resource-intensive, particularly when running multiple nodes
Code Comparison
k3s:
curl -sfL https://get.k3s.io | sh -
k3s kubectl get nodes
kind:
kind create cluster
kubectl get nodes
Both projects aim to provide lightweight Kubernetes environments, but they take different approaches. k3s is a standalone binary that runs a minimized Kubernetes distribution, while kind creates Kubernetes clusters using Docker containers. k3s is often faster to start and more resource-efficient, making it suitable for edge computing and IoT scenarios. kind, on the other hand, provides a more production-like environment and is excellent for testing Kubernetes configurations and applications in a local setting.
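To illustrate the multi-node support mentioned above, a minimal kind cluster config might look like the following sketch; the node counts and file name are arbitrary:
# Create a kind config with one control-plane node and two workers, then start the cluster.
cat > kind-multinode.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --config kind-multinode.yaml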
Talos Linux is a modern Linux distribution built for Kubernetes.
Pros of Talos
- Immutable and minimal Linux distribution, enhancing security and reducing attack surface
- Built-in support for high availability and self-healing capabilities
- Designed for automated, API-driven operations with a strong focus on declarative configuration
Cons of Talos
- Steeper learning curve due to its unique architecture and design principles
- Less flexibility in terms of customization compared to traditional Linux distributions
- Smaller community and ecosystem compared to K3s
Code Comparison
K3s configuration example:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: myapp
  namespace: kube-system
spec:
  chart: stable/myapp
Talos configuration example:
machine:
  type: worker
  kubelet:
    extraArgs:
      node-labels: "node.kubernetes.io/worker"
Both projects aim to simplify Kubernetes deployment, but they take different approaches. K3s focuses on being a lightweight Kubernetes distribution, while Talos reimagines the entire operating system for running Kubernetes. The choice between them depends on specific use cases, infrastructure requirements, and team expertise.
k0s - The Zero Friction Kubernetes
Pros of k0s
- Zero dependencies: k0s is a single binary with no external dependencies
- Flexible networking: Supports multiple CNI providers out-of-the-box
- Smaller footprint: Generally has a smaller resource footprint than k3s
Cons of k0s
- Less mature: k0s is a newer project with a smaller community compared to k3s
- Limited ARM support: k0s has limited support for ARM architectures
- Fewer integrations: k3s has more built-in integrations with other tools and services
Code Comparison
k0s configuration example:
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  api:
    address: 192.168.1.100
k3s configuration example (/etc/rancher/k3s/config.yaml, where keys mirror the server's CLI flags):
token: "my-shared-secret"
bind-address: 192.168.1.100
tls-san:
  - "192.168.1.100"
Both k0s and k3s aim to provide lightweight Kubernetes distributions, but they have different approaches and trade-offs. k0s focuses on being a single binary with zero dependencies, while k3s emphasizes ease of use and integration with existing systems. The choice between them depends on specific use cases and requirements.
MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
Pros of MicroK8s
- Snap-based installation provides easy updates and rollbacks
- Built-in add-ons for common services (e.g., dashboard, registry)
- Multi-node clustering support out-of-the-box
Cons of MicroK8s
- Larger resource footprint than K3s
- Limited to Ubuntu and snap-supported systems
- Slower startup time compared to K3s
Code Comparison
MicroK8s installation:
sudo snap install microk8s --classic
microk8s status --wait-ready
microk8s kubectl get nodes
K3s installation:
curl -sfL https://get.k3s.io | sh -
sudo kubectl get nodes
Both K3s and MicroK8s aim to provide lightweight Kubernetes distributions for edge, IoT, and development environments. K3s focuses on being extremely lightweight and fast, while MicroK8s offers a more feature-rich experience with built-in add-ons. K3s has broader OS support and a smaller footprint, making it ideal for resource-constrained environments. MicroK8s, with its snap-based installation, provides easier updates and a more Ubuntu-centric experience. The choice between the two depends on specific use cases, resource availability, and preferred operating systems.
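As a small illustration of the MicroK8s built-in add-ons mentioned above (the selection of add-ons is arbitrary):
# Enable a couple of the bundled add-ons and check cluster status.
microk8s enable dns dashboard
microk8s status --wait-ready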
README
K3s - Lightweight Kubernetes
Lightweight Kubernetes. Production ready, easy to install, half the memory, all in a binary less than 100 MB.
Great for:
- Edge
- IoT
- CI
- Development
- ARM
- Embedding k8s
- Situations where a PhD in k8s clusterology is infeasible
What is this?
K3s is a fully conformant production-ready Kubernetes distribution with the following changes:
- It is packaged as a single binary.
- It adds support for sqlite3 as the default storage backend. Etcd3, MariaDB, MySQL, and Postgres are also supported (see the example after this list).
- It wraps Kubernetes and other components in a single, simple launcher.
- It is secure by default with reasonable defaults for lightweight environments.
- It has minimal to no OS dependencies (just a sane kernel and cgroup mounts needed).
- It eliminates the need to expose a port on Kubernetes worker nodes for the kubelet API by exposing this API to the Kubernetes control plane nodes over a websocket tunnel.
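As a sketch of the storage-backend flexibility noted above, an external database can be used by passing --datastore-endpoint to the server; the connection string below is a placeholder:
# Run a K3s server against an external MySQL datastore instead of the default embedded sqlite3.
k3s server --datastore-endpoint="mysql://user:password@tcp(db.example.com:3306)/k3s"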
K3s bundles the following technologies together into a single cohesive distribution:
- Containerd & runc
- Flannel for CNI
- CoreDNS
- Metrics Server
- Traefik for ingress
- Klipper-lb as an embedded service load balancer provider
- Kube-router netpol controller for network policy
- Helm-controller to allow for CRD-driven deployment of helm manifests
- Kine as a datastore shim that allows etcd to be replaced with other databases
- Local-path-provisioner for provisioning volumes using local storage
- Host utilities such as iptables/nftables, ebtables, ethtool, & socat
These technologies can be disabled or swapped out for technologies of your choice.
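For instance, packaged components can be left out at install time; a minimal sketch (the components chosen here are illustrative):
# Install K3s without the bundled Traefik ingress controller and service load balancer,
# leaving room to deploy alternatives of your choice.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" sh -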
Additionally, K3s simplifies Kubernetes operations by maintaining functionality for:
- Managing the TLS certificates of Kubernetes components
- Managing the connection between worker and server nodes
- Auto-deploying Kubernetes resources from local manifests in real time as they are changed (see the sketch after this list).
- Managing an embedded etcd cluster
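As a sketch of the auto-deploy behaviour referenced above, any manifest placed in the server's manifests directory is applied to the cluster and re-applied whenever the file changes; the ConfigMap below is just an arbitrary example resource:
# Drop a manifest into the auto-deploy directory on a server node.
sudo tee /var/lib/rancher/k3s/server/manifests/example.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: default
data:
  greeting: hello
EOF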
What's with the name?
We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10 letter word stylized as k8s. So something half as big as Kubernetes would be a 5 letter word stylized as K3s. There is neither a long-form of K3s nor official pronunciation.
Is this a fork?
No, it's a distribution. A fork implies continued divergence from the original. This is not K3s's goal or practice. K3s explicitly intends not to change any core Kubernetes functionality. We seek to remain as close to upstream Kubernetes as possible. However, we maintain a small set of patches (well under 1000 lines) important to K3s's use case and deployment model. We maintain patches for other components as well. When possible, we contribute these changes back to the upstream projects, for example, with SELinux support in containerd. This is a common practice amongst software distributions.
K3s is a distribution because it packages additional components and services necessary for a fully functional cluster that go beyond vanilla Kubernetes. These are opinionated choices on technologies for components like ingress, storage class, network policy, service load balancer, and even container runtime. These choices and technologies are touched on in more detail in the What is this? section.
How is this lightweight or smaller than upstream Kubernetes?
There are two major ways that K3s is lighter weight than upstream Kubernetes:
- The memory footprint to run it is smaller
- The binary, which contains all the non-containerized components needed to run a cluster, is smaller
The memory footprint is reduced primarily by running many components inside of a single process. This eliminates significant overhead that would otherwise be duplicated for each component.
The binary is smaller by removing third-party storage drivers and cloud providers, explained in more detail below.
What have you removed from upstream Kubernetes?
This is a common point of confusion because it has changed over time. Early versions of K3s had much more removed than the current version. K3s currently removes two things:
- In-tree storage drivers
- In-tree cloud provider
Both of these have out-of-tree alternatives in the form of CSI and CCM, which work in K3s and which upstream is moving towards.
We remove these to achieve a smaller binary size. They can be removed while remaining conformant because neither affects core Kubernetes functionality. They also depend on third-party cloud or data center technologies/services, which may not be available in many K3s use cases.
What's next?
Check out our roadmap to see what we have planned moving forward.
Release cadence
K3s maintains pace with upstream Kubernetes releases. Our goal is to release patch releases within one week, and new minors within 30 days.
Our release versioning reflects the version of upstream Kubernetes that is being released. For example, the K3s release v1.27.4+k3s1 maps to the v1.27.4 Kubernetes release. We add a postfix in the form of +k3s<number> to allow us to make additional releases using the same version of upstream Kubernetes while remaining semver compliant. For example, if we discovered a high severity bug in v1.27.4+k3s1 and needed to release an immediate fix for it, we would release v1.27.4+k3s2.
Documentation
Please see the official docs site for complete documentation.
Quick-Start - Install Script
The install.sh script provides a convenient way to download K3s and add a service to systemd or openrc.
To install k3s as a service, run:
curl -sfL https://get.k3s.io | sh -
A kubeconfig file is written to /etc/rancher/k3s/k3s.yaml and the service is automatically started or restarted.
The install script will install K3s and additional utilities, such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh, for example:
sudo kubectl get nodes
K3S_TOKEN is created at /var/lib/rancher/k3s/server/node-token on your server.
To install on worker nodes, pass the K3S_URL and K3S_TOKEN environment variables, for example:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
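The token value itself can be read from the node-token file mentioned above:
# On the server, print the join token to use as K3S_TOKEN on worker nodes.
sudo cat /var/lib/rancher/k3s/server/node-token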
Manual Download
- Download k3s from the latest release; x86_64, armhf, arm64, and s390x are supported.
- Run the server:
sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes
# On a different node run the below. NODE_TOKEN comes from
# /var/lib/rancher/k3s/server/node-token on your server
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
Contributing
Please check out our contributing guide if you're interested in contributing to K3s.
Security
Security issues in K3s can be reported by sending an email to security@k3s.io. Please do not file public GitHub issues for security vulnerabilities.