
yannh/kubeconform

A FAST Kubernetes manifests validator, with support for Custom Resources!


Top Related Projects

  • kubeval (3,158 stars) - Validate your Kubernetes configuration files, supports multiple Kubernetes versions
  • datree (6,384 stars) - Prevent Kubernetes misconfigurations from reaching production (again 😤)! From code to cloud, Datree provides an E2E policy enforcement solution to run automatic checks for rule violations. See our docs: https://hub.datree.io
  • dockle (2,746 stars) - Container Image Linter for Security, Helping build the Best-Practice Docker Image, Easy to start
  • trivy (22,801 stars) - Find vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more
  • kubesec (1,205 stars) - Security risk analysis for Kubernetes resources
  • kube-linter - KubeLinter is a static analysis tool that checks Kubernetes YAML files and Helm charts to ensure the applications represented in them adhere to best practices.

Quick Overview

Kubeconform is a Kubernetes manifest validation tool. It is designed to be fast and lightweight, and it can be used as a pre-commit hook or in CI pipelines to validate Kubernetes YAML files against the official Kubernetes JSON schemas.
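
A minimal sketch of the pre-commit use case (the hook path and the file-selection logic below are assumptions for illustration, not part of the project):

#!/bin/sh
# Illustrative .git/hooks/pre-commit hook: validate staged YAML files before committing
STAGED_YAML=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.ya?ml$')
[ -z "$STAGED_YAML" ] && exit 0
kubeconform -summary $STAGED_YAML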

Pros

  • Fast and efficient, capable of validating hundreds of files in seconds
  • Supports multiple Kubernetes versions and custom schemas
  • Can be easily integrated into CI/CD pipelines and pre-commit hooks
  • Provides detailed error messages for easier debugging

Cons

  • Limited to schema validation, doesn't check for best practices or security issues
  • May not catch all runtime issues that could occur in a live cluster
  • Requires keeping schemas up-to-date for accurate validation
  • Limited customization options compared to more comprehensive tools

Getting Started

To install Kubeconform, you can use one of the following methods:

# Using Homebrew
brew install kubeconform

# Using Go
go install github.com/yannh/kubeconform/cmd/kubeconform@latest

# Download binary from GitHub releases
# Replace VERSION with the desired version
wget https://github.com/yannh/kubeconform/releases/download/vVERSION/kubeconform-linux-amd64.tar.gz
tar xf kubeconform-linux-amd64.tar.gz
sudo mv kubeconform /usr/local/bin/

To validate a Kubernetes manifest:

kubeconform path/to/your-manifest.yaml

To validate multiple files or directories:

kubeconform -summary path/to/manifests/

For more advanced usage, such as specifying Kubernetes versions or custom schemas, refer to the project's documentation on GitHub.
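
For instance, the flags documented further down this page can be combined to pin a Kubernetes version or enable strict validation (a small sketch; the paths are placeholders):

# Validate against a specific Kubernetes version, in strict mode, with a summary
kubeconform -kubernetes-version 1.18.0 -strict -summary path/to/manifests/

# Skip resources whose schemas are not available (e.g. unknown CRDs)
kubeconform -ignore-missing-schemas -summary path/to/manifests/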

Competitor Comparisons

kubeval (3,158 stars) - Validate your Kubernetes configuration files, supports multiple Kubernetes versions

Pros of kubeval

  • More established project with a larger user base and longer history
  • Supports custom schemas for validating CRDs
  • Offers a web UI for quick validation without installation

Cons of kubeval

  • Slower validation speed, especially for large sets of manifests
  • Less frequent updates and maintenance compared to kubeconform
  • Limited support for newer Kubernetes API versions

Code Comparison

kubeval:

kubeval my-manifest.yaml

kubeconform:

kubeconform my-manifest.yaml

Both tools use similar command-line syntax for basic validation. However, kubeconform offers more advanced options for performance tuning and parallel processing:

kubeconform -n 4 -output json -summary manifests/

This command validates manifests in parallel, outputs results in JSON format, and provides a summary, showcasing kubeconform's focus on performance and flexibility.

While both tools serve similar purposes, kubeconform generally offers better performance and more frequent updates, making it a strong alternative to kubeval for Kubernetes manifest validation. However, kubeval's longer history and larger user base may provide more community support and resources for some users.

datree (6,384 stars) - Prevent Kubernetes misconfigurations from reaching production (again 😤)! From code to cloud, Datree provides an E2E policy enforcement solution to run automatic checks for rule violations. See our docs: https://hub.datree.io

Pros of Datree

  • Offers a more comprehensive policy engine with customizable rules
  • Provides a web-based dashboard for visualizing and managing policy violations
  • Integrates with CI/CD pipelines and offers team collaboration features

Cons of Datree

  • Requires an account and potentially a paid subscription for advanced features
  • May have a steeper learning curve due to its more extensive feature set
  • Can be slower to run compared to Kubeconform's lightweight approach

Code Comparison

Kubeconform:

kubeconform -schema-location 'https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{ .NormalizedKubernetesVersion }}-standalone{{ .StrictSuffix }}/{{ .ResourceKind }}{{ .KindSuffix }}.json' manifest.yaml

Datree:

datree test ./manifests/* --schema-version 1.20.0

Both tools can be used to validate Kubernetes manifests, but Datree offers a more feature-rich approach with policy enforcement, while Kubeconform focuses on fast, lightweight schema validation.

dockle (2,746 stars) - Container Image Linter for Security, Helping build the Best-Practice Docker Image, Easy to start

Pros of Dockle

  • Focuses on Docker image security and best practices
  • Provides comprehensive checks for Dockerfile and image content
  • Offers CIS benchmarks and custom rule support

Cons of Dockle

  • Limited to Docker image analysis
  • May require more setup time for custom rules

Code Comparison

Dockle:

dockle --exit-code 1 --exit-level warn myimage:latest

Kubeconform:

kubeconform -summary -output json deployment.yaml

Key Differences

Dockle is specifically designed for Docker image analysis and security checks, while Kubeconform is focused on validating Kubernetes manifests against the official Kubernetes schema.

Dockle provides a more comprehensive set of checks for Docker images, including CIS benchmarks and best practices. Kubeconform, on the other hand, excels at quickly validating Kubernetes YAML files for correctness.

Dockle's output is geared towards Docker image security and compliance, while Kubeconform's output is tailored for Kubernetes manifest validation.

Use Cases

Choose Dockle when:

  • You need to analyze Docker images for security vulnerabilities
  • You want to ensure Docker best practices are followed
  • You require CIS benchmark compliance for Docker images

Choose Kubeconform when:

  • You need to validate Kubernetes manifests
  • You want fast and efficient schema validation
  • You're working primarily with Kubernetes YAML files

trivy (22,801 stars) - Find vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more

Pros of Trivy

  • Comprehensive security scanner for containers, filesystems, and Git repositories
  • Detects vulnerabilities in OS packages and language-specific dependencies
  • Supports multiple scanning targets beyond Kubernetes manifests

Cons of Trivy

  • Larger footprint and more complex setup compared to Kubeconform
  • May have longer scan times due to its broader scope
  • Requires more system resources for full functionality

Code Comparison

Kubeconform usage:

kubeconform -schema-location default -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' ./manifests

Trivy usage:

trivy config ./manifests

Key Differences

  • Kubeconform focuses solely on Kubernetes manifest validation
  • Trivy offers a wider range of security scanning capabilities
  • Kubeconform is lighter and faster for specific Kubernetes validation tasks
  • Trivy provides more comprehensive security analysis but with increased complexity

Both tools serve different purposes within the Kubernetes ecosystem. Kubeconform is ideal for quick and efficient manifest validation, while Trivy offers a more extensive security scanning solution for various components of a Kubernetes environment.

kubesec (1,205 stars) - Security risk analysis for Kubernetes resources

Pros of Kubesec

  • Focuses on security-specific checks for Kubernetes manifests
  • Provides a risk score and detailed explanations for each security issue
  • Offers both CLI and web interface options for scanning

Cons of Kubesec

  • Limited to security-focused checks, not general Kubernetes manifest validation
  • Less frequent updates compared to Kubeconform
  • Smaller community and fewer contributors

Code Comparison

Kubesec example:

kubesec scan deployment.yaml

Kubeconform example:

kubeconform -schema-location default -kubernetes-version 1.18.0 deployment.yaml

Key Differences

  1. Focus: Kubesec specializes in security checks, while Kubeconform performs general Kubernetes manifest validation.
  2. Output: Kubesec provides a risk score and detailed security explanations, whereas Kubeconform offers pass/fail results for schema validation.
  3. Flexibility: Kubeconform supports multiple schema sources and Kubernetes versions, while Kubesec is more focused on security best practices.
  4. Community: Kubeconform has a larger community and more frequent updates compared to Kubesec.
  5. Use case: Choose Kubesec for dedicated security audits and Kubeconform for general manifest validation in CI/CD pipelines.

Both tools serve different purposes and can be complementary in a Kubernetes development workflow, with Kubeconform ensuring manifest validity and Kubesec providing security-specific insights.
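
A minimal sketch of chaining the two, assuming both binaries are installed and deployment.yaml is the manifest under review:

# Fail fast on schema errors, then run the security-focused scan
kubeconform -summary deployment.yaml && kubesec scan deployment.yaml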

kube-linter - KubeLinter is a static analysis tool that checks Kubernetes YAML files and Helm charts to ensure the applications represented in them adhere to best practices.

Pros of kube-linter

  • More comprehensive linting capabilities, covering a wider range of Kubernetes best practices and security checks
  • Customizable rules and the ability to create custom checks
  • Integrates well with CI/CD pipelines and provides detailed reports

Cons of kube-linter

  • Slower performance compared to Kubeconform, especially for large-scale validations
  • More complex setup and configuration process
  • Potentially overwhelming output for users new to Kubernetes linting

Code Comparison

Kubeconform (an example manifest it would validate):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx:latest

kube-linter (an example check configuration):

checks:
  - name: latest-tag
    description: Ensure container images are not using the 'latest' tag
    remediation: Specify a specific version tag for container images
    template: image-tag
    params:
      forbiddenTags:
        - latest

Both tools can validate Kubernetes manifests, but kube-linter offers more advanced linting capabilities with customizable rules. Kubeconform focuses on schema validation, while kube-linter provides a broader range of checks for best practices and security concerns. The choice between the two depends on the specific needs of the project and the desired level of validation complexity.
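
As a rough sketch of using them together on the same manifest (the file names are placeholders, and kube-linter's flags should be checked against its own documentation):

# Schema validation first, then best-practice linting with a custom config
kubeconform -summary deployment.yaml
kube-linter lint --config .kube-linter.yaml deployment.yaml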

README

Kubeconform is a Kubernetes manifest validation tool. Incorporate it into your CI, or use it locally to validate your Kubernetes configuration!

It is inspired by, contains code from and is designed to stay close to Kubeval, but with the following improvements:

  • high performance: validates manifests and downloads schemas over multiple goroutines, caching downloaded schema files in memory
  • configurable list of remote or local schema locations, enabling validation of Kubernetes custom resources (CRDs) and offline validation
  • uses, by default, a self-updating fork of the schema registry maintained by the kubernetes-json-schema project, which guarantees up-to-date schemas for all recent versions of Kubernetes

Speed comparison with Kubeval

Running on a fairly large set of Kubernetes manifests, on a laptop with 4 cores:

$ time kubeconform -ignore-missing-schemas -n 8 -summary  preview staging production
Summary: 50714 resources found in 35139 files - Valid: 27334, Invalid: 0, Errors: 0 Skipped: 23380
real	0m6,710s
user	0m38,701s
sys	0m1,161s
$ time kubeval -d preview,staging,production --ignore-missing-schemas --quiet
[... Skipping output]
real	0m35,336s
user	0m0,717s
sys	0m1,069s

A small overview of Kubernetes manifest validation

The Kubernetes API is described using the OpenAPI (formerly Swagger) specification, in a file checked into the main Kubernetes repository.

Because of the state of the tooling for validating against OpenAPI schemas, projects usually convert the OpenAPI schemas to JSON schemas first. Kubeval relies on instrumenta/openapi2jsonschema to convert Kubernetes' Swagger file and break it down into multiple JSON schemas, stored on GitHub at instrumenta/kubernetes-json-schema and published on kubernetesjsonschema.dev.

Kubeconform relies on a fork of kubernetes-json-schema that is more meticulously kept up-to-date, and contains schemas for all recent versions of Kubernetes.

Limits of Kubeconform validation

Kubeconform, similar to kubeval, only validates manifests against the official Kubernetes OpenAPI specifications. The Kubernetes controllers still perform additional server-side validations that are not part of the OpenAPI specifications; those are not covered by Kubeconform (examples: #65, #122, #142). You can use a third-party tool or kubectl's --dry-run=server option to close the remaining validation gap.
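
For example, a server-side dry run asks the API server to run its admission checks without persisting anything (this assumes you have access to a cluster and kubectl configured):

# Let the API server validate the manifest without applying it
kubectl apply --dry-run=server -f path/to/your-manifest.yaml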

Installation

If you are a Homebrew user, you can install by running:

$ brew install kubeconform

If you are a Windows user, you can install with winget by running:

winget install YannHamon.kubeconform

You can also download the latest version from the release page.

Another installation method is via Go's package manager:

# With a specific version tag
$ go install github.com/yannh/kubeconform/cmd/kubeconform@v0.4.13

# Latest version
$ go install github.com/yannh/kubeconform/cmd/kubeconform@latest

Usage

$ kubeconform -h
Usage: kubeconform [OPTION]... [FILE OR FOLDER]...
  -cache string
    	cache schemas downloaded via HTTP to this folder
  -debug
    	print debug information
  -exit-on-error
    	immediately stop execution when the first error is encountered
  -h	show help information
  -ignore-filename-pattern value
    	regular expression specifying paths to ignore (can be specified multiple times)
  -ignore-missing-schemas
    	skip files with missing schemas instead of failing
  -insecure-skip-tls-verify
    	disable verification of the server's SSL certificate. This will make your HTTPS connections insecure
  -kubernetes-version string
    	version of Kubernetes to validate against, e.g.: 1.18.0 (default "master")
  -n int
    	number of goroutines to run concurrently (default 4)
  -output string
    	output format - json, junit, pretty, tap, text (default "text")
  -reject string
    	comma-separated list of kinds or GVKs to reject
  -schema-location value
    	override schemas location search path (can be specified multiple times)
  -skip string
    	comma-separated list of kinds or GVKs to ignore
  -strict
    	disallow additional properties not in schema or duplicated keys
  -summary
    	print a summary at the end (ignored for junit output)
  -v	show version information
  -verbose
    	print results for all resources (ignored for tap and junit output)

Usage examples

  • Validating a single, valid file
$ kubeconform fixtures/valid.yaml
$ echo $?
0
  • Validating a single invalid file, setting output to json, and printing a summary
$ kubeconform -summary -output json fixtures/invalid.yaml
{
  "resources": [
    {
      "filename": "fixtures/invalid.yaml",
      "kind": "ReplicationController",
      "version": "v1",
      "status": "INVALID",
      "msg": "Additional property templates is not allowed - Invalid type. Expected: [integer,null], given: string"
    }
  ],
  "summary": {
    "valid": 0,
    "invalid": 1,
    "errors": 0,
    "skipped": 0
  }
}
$ echo $?
1
  • Passing manifests via Stdin
cat fixtures/valid.yaml  | ./bin/kubeconform -summary
Summary: 1 resource found parsing stdin - Valid: 1, Invalid: 0, Errors: 0 Skipped: 0
  • Validating a file while skipping a resource, using both Kind and GVK (Group, Version, Kind) notations
# This will ignore ReplicationController for all apiVersions
$ kubeconform -summary -skip ReplicationController fixtures/valid.yaml
Summary: 1 resource found in 1 file - Valid: 0, Invalid: 0, Errors: 0, Skipped: 1

# This will ignore ReplicationController only for apiVersion v1
$ kubeconform -summary -skip v1/ReplicationController fixtures/valid.yaml
Summary: 1 resource found in 1 file - Valid: 0, Invalid: 0, Errors: 0, Skipped: 1
  • Validating a folder, increasing the number of parallel workers
$ kubeconform -summary -n 16 fixtures
fixtures/crd_schema.yaml - CustomResourceDefinition trainingjobs.sagemaker.aws.amazon.com failed validation: could not find schema for CustomResourceDefinition
fixtures/invalid.yaml - ReplicationController bob is invalid: Invalid type. Expected: [integer,null], given: string
[...]
Summary: 65 resources found in 34 files - Valid: 55, Invalid: 2, Errors: 8 Skipped: 0

Proxy support

Kubeconform will respect the HTTPS_PROXY variable when downloading schema files.

$ HTTPS_PROXY=proxy.local bin/kubeconform fixtures/valid.yaml

Overriding schemas location

When the -schema-location parameter is not used, or is set to default, Kubeconform will default to downloading schemas from https://github.com/yannh/kubernetes-json-schema. Kubeconform also supports passing one or multiple schema locations - HTTP(S) URLs or local filesystem paths - in which case it will look up schema definitions in each of them, in order, stopping as soon as a matching file is found.

  • If the -schema-location value does not end with .json, Kubeconform will assume a filename / file structure identical to that of kubernetesjsonschema.dev or yannh/kubernetes-json-schema.
  • If the -schema-location value ends with .json, Kubeconform assumes the value is a Go templated string that indicates how to search for JSON schemas.
  • The -schema-location value default is an alias for https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{.NormalizedKubernetesVersion}}-standalone{{.StrictSuffix}}/{{.ResourceKind}}{{.KindSuffix}}.json.

The following command lines are equivalent:

$ kubeconform fixtures/valid.yaml
$ kubeconform -schema-location default fixtures/valid.yaml
$ kubeconform -schema-location 'https://raw.githubusercontent.com/yannh/kubernetes-json-schema/master/{{.NormalizedKubernetesVersion}}-standalone{{.StrictSuffix}}/{{.ResourceKind}}{{.KindSuffix}}.json' fixtures/valid.yaml

Here are the variables you can use in -schema-location:

  • NormalizedKubernetesVersion - Kubernetes Version, prefixed by v
  • StrictSuffix - "-strict" or "" depending on whether validation is running in strict mode or not
  • ResourceKind - Kind of the Kubernetes Resource
  • ResourceAPIVersion - Version of API used for the resource - "v1" in "apiVersion: monitoring.coreos.com/v1"
  • Group - the group name as stated in this resource's definition - "monitoring.coreos.com" in "apiVersion: monitoring.coreos.com/v1"
  • KindSuffix - suffix computed from apiVersion - for compatibility with Kubeval schema registries
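
Combining these variables, a local schema directory could for example be addressed as follows (the directory layout is an assumption for illustration):

# Look up schemas in a local folder laid out as <kind>-<apiVersion>.json
$ kubeconform -schema-location 'schemas/{{ .ResourceKind }}-{{ .ResourceAPIVersion }}.json' fixtures/valid.yaml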

CustomResourceDefinition (CRD) Support

Because Custom Resources (CRs) are not native Kubernetes objects, they are not included in the default schemas.
If your CRs are present in Datree's CRDs-catalog, you can specify that project as an additional registry to look up:

# Look in the CRDs-catalog for the desired schema/s
$ kubeconform -schema-location default -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' [MANIFEST]

If your CRs are not present in the CRDs-catalog, you will need to pull the CRD manifests from your cluster and convert their OpenAPI spec to JSON schema format.
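
One way to export a CRD from a running cluster before converting it (the CRD name below is the SageMaker example used elsewhere on this page; substitute your own):

# Export the CRD definition from the cluster; the next section shows how to convert it
$ kubectl get crd trainingjobs.sagemaker.aws.amazon.com -o yaml > trainingjobs-crd.yaml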

Converting an OpenAPI file to a JSON Schema

Kubeconform uses JSON schemas to validate Kubernetes resources. For Custom Resources, the CustomResourceDefinition first needs to be converted to JSON Schema. A script is provided to convert these CustomResourceDefinitions to JSON schemas. Here is an example of how to use it:

$ python ./scripts/openapi2jsonschema.py https://raw.githubusercontent.com/aws/amazon-sagemaker-operator-for-k8s/master/config/crd/bases/sagemaker.aws.amazon.com_trainingjobs.yaml
JSON schema written to trainingjob_v1.json

By default, the file name output format is {kind}_{version}. The FILENAME_FORMAT environment variable can be used to change the output file name (Available variables: kind, group, fullgroup, version):

$ export FILENAME_FORMAT='{kind}-{group}-{version}'
$ ./scripts/openapi2jsonschema.py https://raw.githubusercontent.com/aws/amazon-sagemaker-operator-for-k8s/master/config/crd/bases/sagemaker.aws.amazon.com_trainingjobs.yaml
JSON schema written to trainingjob-sagemaker-v1.json

$ export FILENAME_FORMAT='{kind}-{fullgroup}-{version}'
$ ./scripts/openapi2jsonschema.py https://raw.githubusercontent.com/aws/amazon-sagemaker-operator-for-k8s/master/config/crd/bases/sagemaker.aws.amazon.com_trainingjobs.yaml
JSON schema written to trainingjob-sagemaker.aws.amazon.com-v1.json

After converting your CRDs to JSON schema files, you can use kubeconform to validate your CRs against them:

# If the resource Kind is not found in default, also lookup in the schemas/ folder for a matching file
$ kubeconform -schema-location default -schema-location 'schemas/{{ .ResourceKind }}{{ .KindSuffix }}.json' fixtures/custom-resource.yaml

ℹ️ Datree's CRD Extractor is a utility that can be used instead of this manual process.

OpenShift schema Support

You can validate OpenShift manifests using a custom schema location. Set the OpenShift version (v3.10.0-4.1.0) to validate against using -kubernetes-version.

kubeconform -kubernetes-version 3.8.0  -schema-location 'https://raw.githubusercontent.com/garethr/openshift-json-schema/master/{{ .NormalizedKubernetesVersion }}-standalone{{ .StrictSuffix }}/{{ .ResourceKind }}.json'  -summary fixtures/valid.yaml
Summary: 1 resource found in 1 file - Valid: 1, Invalid: 0, Errors: 0 Skipped: 0

Integrating Kubeconform in the CI

Kubeconform publishes Docker images to GitHub's Container Registry (ghcr.io). These images can be used directly in a GitHub Action, once logged in using a GitHub token.

Github Workflow

Example:

name: kubeconform
on: push
jobs:
  kubeconform:
    runs-on: ubuntu-latest
    steps:
      - name: login to Github Packages
        run: echo "${{ github.token }}" | docker login https://ghcr.io -u ${GITHUB_ACTOR} --password-stdin
      - uses: actions/checkout@v2
      - uses: docker://ghcr.io/yannh/kubeconform:latest
        with:
          entrypoint: '/kubeconform'
          args: "-summary -output json kubeconfigs/"

Note on pricing: Kubeconform relies on the GitHub Container Registry, which is currently in beta. During that period, bandwidth is free. After that period, bandwidth costs might apply. Since bandwidth from GitHub Packages within GitHub Actions is free, I expect the GitHub Container Registry to also be usable for free within GitHub Actions in the future. If that were not the case, I might publish the Docker image to a different platform.

Gitlab-CI

The Kubeconform Docker image can be used in GitLab CI. Here is an example of a GitLab CI job:

lint-kubeconform:
  stage: validate
  image:
    name: ghcr.io/yannh/kubeconform:latest-alpine
    entrypoint: [""]
  script:
  - /kubeconform -summary -output json kubeconfigs/

See issue 106 for more details.

Helm charts

There is a third-party repository that lets you use kubeconform to test Helm charts, in the form of a Helm plugin and a pre-commit hook.
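
Independently of that plugin, rendered charts can also be piped to kubeconform via stdin, as in this small sketch (the release and chart names are placeholders):

# Render the chart locally and validate the resulting manifests from stdin
$ helm template my-release ./charts/my-chart | kubeconform -summary -strict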

Using kubeconform as a Go Module

Warning: This is a work-in-progress, the interface is not yet considered stable. Feedback is encouraged.

Kubeconform contains a package that can be used as a library. An example of usage can be found in examples/main.go

Additional documentation on pkg.go.dev

Credits