Top Related Projects
- Kubebuilder - SDK for building Kubernetes APIs using CRDs
- operator-sdk - SDK for building Kubernetes applications. Provides high-level APIs, useful abstractions, and project scaffolding.
- client-go - Go client for Kubernetes.
- controller-runtime - Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
- Kustomize - Customization of Kubernetes YAML configurations
- Minikube - Run Kubernetes locally
Quick Overview
The kubernetes/sample-controller repository provides a basic example of how to write a custom Kubernetes controller. It implements a simple controller that manages custom resources defined by a CustomResourceDefinition (CRD) and serves as a starting point for developers looking to create their own Kubernetes controllers.
Pros
- Offers a clear, well-documented example of a Kubernetes controller
- Provides a solid foundation for building more complex controllers
- Demonstrates best practices for controller implementation
- Includes detailed comments explaining each step of the process
Cons
- May be too simplistic for advanced use cases
- Requires prior knowledge of Kubernetes concepts
- Limited to a single CRD example
- May not cover all edge cases or error handling scenarios
Code Examples
- Defining a CustomResourceDefinition (CRD):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.k8s.io
spec:
  group: samplecontroller.k8s.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                deploymentName:
                  type: string
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 10
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
    shortNames:
      - fo
```
- Implementing the controller's main reconciliation loop:

```go
func (c *Controller) syncHandler(key string) error {
    // Convert the "namespace/name" work-queue key back into its parts.
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    foo, err := c.foosLister.Foos(namespace).Get(name)
    if errors.IsNotFound(err) {
        // The Foo resource no longer exists; nothing to do.
        return nil
    }
    if err != nil {
        return err
    }
    deploymentName := foo.Spec.DeploymentName
    if deploymentName == "" {
        return fmt.Errorf("deploymentName must be specified")
    }
    // Get the Deployment named in Foo.spec, creating it if it does not exist.
    deployment, err := c.deploymentsLister.Deployments(foo.Namespace).Get(deploymentName)
    if errors.IsNotFound(err) {
        deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).Create(context.TODO(), newDeployment(foo), metav1.CreateOptions{})
    }
    if err != nil {
        return err
    }
    // Refuse to adopt a Deployment that this Foo does not own.
    if !metav1.IsControlledBy(deployment, foo) {
        msg := fmt.Sprintf("Resource %q already exists and is not managed by Foo", deployment.Name)
        c.recorder.Event(foo, corev1.EventTypeWarning, ErrResourceExists, msg)
        return fmt.Errorf("%s", msg)
    }
    // If the replica counts differ, update the Deployment to match the Foo spec.
    if foo.Spec.Replicas != nil && *foo.Spec.Replicas != *deployment.Spec.Replicas {
        deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).Update(context.TODO(), newDeployment(foo), metav1.UpdateOptions{})
    }
    if err != nil {
        return err
    }
    c.recorder.Event(foo, corev1.EventTypeNormal, SuccessSynced, MessageResourceSynced)
    return nil
}
```
- Registering event handlers:

```go
fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: controller.enqueueFoo,
    UpdateFunc: func(old, new interface{}) {
        controller.enqueueFoo(new)
    },
})
deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: controller.handleObject,
    UpdateFunc: func(old, new interface{}) {
        controller.handleObject(new)
    },
    DeleteFunc: controller.handleObject,
})
```
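The snippets above reference helpers whose bodies are not shown. As a rough sketch of newDeployment, following the pattern used in the repository and assuming its Controller type and standard apimachinery imports (the nginx image and labels are illustrative, not prescribed):

```go
func newDeployment(foo *samplev1alpha1.Foo) *appsv1.Deployment {
    labels := map[string]string{
        "app":        "nginx",
        "controller": foo.Name,
    }
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      foo.Spec.DeploymentName,
            Namespace: foo.Namespace,
            // The owner reference is what makes metav1.IsControlledBy
            // succeed in syncHandler above.
            OwnerReferences: []metav1.OwnerReference{
                *metav1.NewControllerRef(foo, samplev1alpha1.SchemeGroupVersion.WithKind("Foo")),
            },
        },
        Spec: appsv1.DeploymentSpec{
            Replicas: foo.Spec.Replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "nginx", Image: "nginx:latest"}},
                },
            },
        },
    }
}
```

Likewise, enqueueFoo and handleObject translate informer events into work-queue items; a sketch after the usual client-go pattern, not a verbatim copy:

```go
// enqueueFoo converts a Foo object into a namespace/name key and queues it.
func (c *Controller) enqueueFoo(obj interface{}) {
    key, err := cache.MetaNamespaceKeyFunc(obj)
    if err != nil {
        utilruntime.HandleError(err)
        return
    }
    c.workqueue.Add(key)
}

// handleObject inspects the owner references of a dependent object (e.g. a
// Deployment) and, if it is controlled by a Foo, enqueues that Foo.
func (c *Controller) handleObject(obj interface{}) {
    object, ok := obj.(metav1.Object)
    if !ok {
        return
    }
    if ownerRef := metav1.GetControllerOf(object); ownerRef != nil {
        if ownerRef.Kind != "Foo" {
            return
        }
        foo, err := c.foosLister.Foos(object.GetNamespace()).Get(ownerRef.Name)
        if err != nil {
            return // the owning Foo may have been deleted
        }
        c.enqueueFoo(foo)
    }
}
```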
Competitor Comparisons
Kubebuilder - SDK for building Kubernetes APIs using CRDs
Pros of Kubebuilder
- Provides a more comprehensive framework for building Kubernetes operators
- Offers scaffolding tools to quickly generate project structure and boilerplate code
- Includes built-in support for webhooks, RBAC, and other advanced Kubernetes features
Cons of Kubebuilder
- Steeper learning curve due to its more complex structure and abstractions
- May introduce unnecessary overhead for simple custom controllers
- Less flexibility in terms of project structure and code organization
Code Comparison
Sample-controller:
```go
// Create a new controller
controller := NewController(
    kubeClient,
    exampleClient,
    kubeInformerFactory.Apps().V1().Deployments(),
    exampleInformerFactory.Samplecontroller().V1alpha1().Foos(),
)
```
Kubebuilder:
```go
// SetupWithManager sets up the controller with the Manager
func (r *FooReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&samplev1alpha1.Foo{}).
        Complete(r)
}
```
The sample-controller code shows manual controller setup, while Kubebuilder uses a more declarative approach with built-in manager integration.
operator-sdk - SDK for building Kubernetes applications. Provides high-level APIs, useful abstractions, and project scaffolding.
Pros of operator-sdk
- More comprehensive framework for building Kubernetes operators
- Provides scaffolding, code generation, and testing utilities
- Supports multiple programming languages (Go, Ansible, Helm)
Cons of operator-sdk
- Steeper learning curve due to more complex architecture
- Potentially overkill for simple use cases
- Requires additional dependencies and tooling
Code Comparison
sample-controller:
```go
func (c *Controller) syncHandler(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    foo, err := c.foosLister.Foos(namespace).Get(name)
    if err != nil {
        // Handle error...
    }
    // Process the Foo resource...
}
```
operator-sdk:
```go
func (r *FooReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("foo", req.NamespacedName)
    var foo myapiv1.Foo
    if err := r.Get(ctx, req.NamespacedName, &foo); err != nil {
        // Handle error...
    }
    // Process the Foo resource...
}
```
The sample-controller provides a basic example of implementing a custom controller, while operator-sdk offers a more structured approach with additional features and abstractions for building full-fledged operators.
client-go - Go client for Kubernetes.
Pros of client-go
- Comprehensive Kubernetes API client library for Go
- Provides low-level access to Kubernetes resources and operations
- Regularly updated with new Kubernetes API versions
Cons of client-go
- Steeper learning curve for beginners
- Requires more boilerplate code for common operations
- Less opinionated, leaving more implementation details to the developer
Code Comparison
sample-controller:
```go
func (c *Controller) syncHandler(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    foo, err := c.foosLister.Foos(namespace).Get(name)
    if err != nil {
        return err
    }
    // ... (controller logic)
}
```
client-go:
```go
fooClient := clientset.SamplecontrollerV1alpha1().Foos(namespace)
foo, err := fooClient.Get(context.TODO(), name, metav1.GetOptions{})
if err != nil {
    return err
}
// ... (direct API interaction)
```
The sample-controller provides a higher-level abstraction with caching and event handling, while client-go offers more direct API access but requires additional setup for similar functionality.
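As a hedged illustration of that extra setup, cached reads with bare client-go typically go through a SharedInformerFactory; a minimal sketch (the function name is invented for the example):

```go
package cacheexample

import (
    "time"

    kubeinformers "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    appslisters "k8s.io/client-go/listers/apps/v1"
)

// newCachedDeploymentLister starts a shared informer factory so that
// subsequent Deployment reads are served from a local, watch-backed cache
// instead of hitting the API server on every call.
func newCachedDeploymentLister(clientset kubernetes.Interface, stopCh <-chan struct{}) appslisters.DeploymentLister {
    factory := kubeinformers.NewSharedInformerFactory(clientset, 30*time.Second)
    lister := factory.Apps().V1().Deployments().Lister()
    factory.Start(stopCh)            // starts the watch goroutines
    factory.WaitForCacheSync(stopCh) // blocks until the initial list completes
    return lister
}
```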
controller-runtime - Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
Pros of controller-runtime
- Provides a higher-level abstraction for building Kubernetes controllers
- Offers built-in support for leader election, metrics, and health probes
- Simplifies reconciliation logic with predefined patterns and utilities
Cons of controller-runtime
- Steeper learning curve due to more complex architecture
- May introduce unnecessary overhead for simple controllers
- Less flexibility in low-level implementation details
Code Comparison
sample-controller:
```go
func (c *Controller) syncHandler(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    foo, err := c.foosLister.Foos(namespace).Get(name)
    if err != nil {
        // Handle error
    }
    // Controller logic
}
```
controller-runtime:
```go
func (r *ReconcileFoo) Reconcile(req reconcile.Request) (reconcile.Result, error) {
    foo := &samplev1alpha1.Foo{}
    err := r.Get(context.TODO(), req.NamespacedName, foo)
    if err != nil {
        // Handle error
    }
    // Reconciliation logic
}
```
The sample-controller uses a more traditional approach with explicit listers and informers, while controller-runtime abstracts these details, focusing on the reconciliation logic. controller-runtime's approach is generally more concise and easier to read, but may hide some implementation details that could be important in certain scenarios.
Kustomize - Customization of Kubernetes YAML configurations
Pros of Kustomize
- More comprehensive tool for customizing Kubernetes manifests
- Supports overlays and patches for flexible configuration management
- Widely adopted in the Kubernetes ecosystem
Cons of Kustomize
- Steeper learning curve compared to sample-controller
- May introduce complexity for simple use cases
- Requires additional tooling and setup
Code Comparison
sample-controller:
```go
// Define a new custom resource
type Foo struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   FooSpec   `json:"spec"`
    Status FooStatus `json:"status"`
}
```
Kustomize:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
patchesStrategicMerge:
  - patch.yaml
```
The sample-controller focuses on creating custom controllers and resources, while Kustomize is designed for customizing existing Kubernetes manifests. sample-controller provides a foundation for building controllers, whereas Kustomize offers a declarative approach to managing application configurations across different environments.
Minikube - Run Kubernetes locally
Pros of Minikube
- Provides a full-fledged local Kubernetes environment for development and testing
- Supports multiple hypervisors and container runtimes
- Offers a user-friendly CLI for managing local clusters
Cons of Minikube
- Larger resource footprint compared to sample-controller
- More complex setup and configuration process
- May not be suitable for learning specific controller patterns
Code Comparison
Sample-controller:
```go
func (c *Controller) syncHandler(key string) error {
    namespace, name, err := cache.SplitMetaNamespaceKey(key)
    if err != nil {
        return err
    }
    foo, err := c.foosLister.Foos(namespace).Get(name)
    if err != nil {
        // Handle error
    }
    // Process the Foo resource
}
```
Minikube:
```go
func (k *Bootstrapper) StartCluster(cfg config.ClusterConfig) error {
    // ...
    if err := k.startKubelet(cfg); err != nil {
        return errors.Wrap(err, "starting kubelet")
    }
    if err := k.applyNodeLabels(cfg); err != nil {
        return errors.Wrap(err, "applying node labels")
    }
    // ...
}
```
The sample-controller focuses on implementing a specific controller pattern, while Minikube provides a broader infrastructure for running a local Kubernetes cluster. The code snippets reflect these different purposes, with sample-controller handling resource synchronization and Minikube managing cluster components.
README
sample-controller
This repository implements a simple controller for watching Foo resources as defined with a CustomResourceDefinition (CRD).
Note: go-get or vendor this package as k8s.io/sample-controller.
This particular example demonstrates how to perform basic operations such as:
- How to register a new custom resource (custom resource type) of type Foo using a CustomResourceDefinition.
- How to create/get/list instances of your new resource type Foo.
- How to set up a controller on resource handling create/update/delete events.
It makes use of the generators in k8s.io/code-generator to generate a typed client, informers, listers, and deep-copy functions. You can do this yourself using the ./hack/update-codegen.sh script.
The update-codegen script will automatically generate the following files and directories:
- pkg/apis/samplecontroller/v1alpha1/zz_generated.deepcopy.go
- pkg/generated/
Changes should not be made to these files manually. When creating your own controller based on this implementation, do not copy these files; instead, run the update-codegen script to generate your own.
Details
The sample controller uses the client-go library extensively. The points where the sample controller interacts with the various mechanisms from this library are explained here.
Fetch sample-controller and its dependencies
Issue the following commands, starting in whatever working directory you like:

```sh
git clone https://github.com/kubernetes/sample-controller
cd sample-controller
```

Note, however, that if you intend to generate code then you will also need the code-generator repo to exist in an old-style location. One easy way to do this is to use the command go mod vendor to create and populate the vendor directory.
A Note on kubernetes/kubernetes
If you are developing Kubernetes according to https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md then you already have a copy of this demo in kubernetes/staging/src/k8s.io/sample-controller, and its dependencies (including the code generator) are in usable locations (valid for all Go versions).
Purpose
This is an example of how to build a kube-like controller with a single type.
Running
Prerequisite: Since the sample-controller uses apps/v1 deployments, the Kubernetes cluster version should be greater than 1.9.

```sh
# assumes you have a working kubeconfig; not required if operating in-cluster
go build -o sample-controller .
./sample-controller -kubeconfig=$HOME/.kube/config

# create a CustomResourceDefinition
kubectl create -f artifacts/examples/crd-status-subresource.yaml

# create a custom resource of type Foo
kubectl create -f artifacts/examples/example-foo.yaml

# check deployments created through the custom resource
kubectl get deployments
```
Use Cases
CustomResourceDefinitions can be used to implement custom resource types for your Kubernetes cluster.
These act like most other Resources in Kubernetes, and may be kubectl apply'd, etc.
Some example use cases:
- Provisioning/management of external datastores/databases (e.g. CloudSQL/RDS instances)
- Higher-level abstractions around Kubernetes primitives (e.g. a single Resource to define an etcd cluster, backed by a Service and a ReplicationController)
Defining types
Each instance of your custom resource has an attached Spec, which should be defined via a struct{} to provide data format validation.
In practice, this Spec is arbitrary key-value data that specifies the configuration/behavior of your Resource.
For example, if you were implementing a custom resource for a Database, you might provide a DatabaseSpec like the following:
```go
type DatabaseSpec struct {
    Databases []string `json:"databases"`
    Users     []User   `json:"users"`
    Version   string   `json:"version"`
}

type User struct {
    Name     string `json:"name"`
    Password string `json:"password"`
}
```
Note, the JSON tag json: is required on all user-facing fields within your type. Typically, API types contain only user-facing fields. When the JSON tag is omitted from a field, Kubernetes generators consider the field to be internal and will not expose it in their generated external output. For example, this means that the field would not be included in a generated CRD schema.
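A hypothetical illustration of this rule (the type and field names are invented for the example):

```go
type WidgetSpec struct {
    // Size carries a JSON tag, so it is user-facing and appears in
    // generated external output such as the CRD schema.
    Size int `json:"size"`

    // ObservedHash has no JSON tag, so code generators treat it as
    // internal and omit it from generated external output.
    ObservedHash string
}
```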
Validation
To validate custom resources, use the CustomResourceValidation feature. For apiextensions.k8s.io/v1, providing validation in the form of a structural schema is mandatory.
Example
The schema in crd.yaml applies the following validation to the custom resource: spec.replicas must be an integer with a minimum value of 1 and a maximum value of 10.
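As a hedged sketch of this validation in action, a create request that violates the schema is rejected server-side; for example, using the repository's generated clientset (the variable names are illustrative):

```go
replicas := int32(11) // violates the schema's maximum of 10
foo := &samplev1alpha1.Foo{
    ObjectMeta: metav1.ObjectMeta{Name: "example-foo"},
    Spec: samplev1alpha1.FooSpec{
        DeploymentName: "example-foo",
        Replicas:       &replicas,
    },
}
_, err := client.SamplecontrollerV1alpha1().Foos("default").Create(context.TODO(), foo, metav1.CreateOptions{})
// err is non-nil: the API server rejects the object because spec.replicas > 10.
```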
Subresources
Custom resources support the /status and /scale subresources. The CustomResourceSubresources feature is GA as of v1.16.
Example
The CRD in crd-status-subresource.yaml enables the /status subresource for custom resources. This means that UpdateStatus can be used by the controller to update only the status part of the custom resource.
To understand why only the status part of the custom resource should be updated, please refer to the Kubernetes API conventions.
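A minimal sketch of such a status update, assuming the repository's Controller type, its generated clientset, and a Foo status field like AvailableReplicas:

```go
// updateFooStatus copies the observed Deployment state into Foo's status and
// writes it through the /status subresource, leaving the spec untouched.
func (c *Controller) updateFooStatus(foo *samplev1alpha1.Foo, deployment *appsv1.Deployment) error {
    // Never mutate objects obtained from the lister cache; work on a copy.
    fooCopy := foo.DeepCopy()
    fooCopy.Status.AvailableReplicas = deployment.Status.AvailableReplicas
    _, err := c.sampleclientset.SamplecontrollerV1alpha1().
        Foos(foo.Namespace).
        UpdateStatus(context.TODO(), fooCopy, metav1.UpdateOptions{})
    return err
}
```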
In the above steps, use crd-status-subresource.yaml to create the CRD:

```sh
# create a CustomResourceDefinition supporting the status subresource
kubectl create -f artifacts/examples/crd-status-subresource.yaml
```
A Note on the API version
The group version of the custom resource in crd.yaml is v1alpha1; this can be evolved to a stable API version, v1, using CRD Versioning.
Cleanup
You can clean up the created CustomResourceDefinition with:

```sh
kubectl delete crd foos.samplecontroller.k8s.io
```
Compatibility
HEAD of this repository will match HEAD of k8s.io/apimachinery and k8s.io/client-go.
Where does it come from?
sample-controller is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/sample-controller. Code changes are made in that location, merged into k8s.io/kubernetes, and later synced here.