Top Related Projects
- Kubernetes - Production-Grade Container Scheduling and Management
- libnetwork - networking for containers
- Calico - Cloud native networking and network security
- Cilium - eBPF-based Networking, Security, and Observability
- Flannel - a network fabric for containers, designed for Kubernetes
- Weave - simple, resilient multi-host container networking and more
Quick Overview
The Container Network Interface (CNI) is a specification and set of libraries for configuring network interfaces in Linux containers. It provides a common interface between container runtimes and network plugins, allowing for flexible and interoperable networking solutions in containerized environments.
Pros
- Standardized interface for container networking across different runtimes and plugins
- Simplifies network configuration and management in container ecosystems
- Supports a wide range of network plugins and implementations
- Enables portability and interoperability between different container platforms
Cons
- Limited to Linux-based systems
- May require additional configuration for complex networking scenarios
- Learning curve for developers new to container networking concepts
- Some advanced features may not be supported by all plugins
Code Examples
- Basic CNI configuration file (JSON):
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/16",
    "gateway": "10.1.0.1"
  }
}
This example defines a simple bridge network configuration for containers.
- Go code to load and execute a CNI plugin:
package main

import (
    "context"
    "fmt"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    // Look for plugin binaries in the standard install location.
    cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

    // Load the "mynet" configuration from the default config directory.
    netConf, err := libcni.LoadConf("/etc/cni/net.d", "mynet")
    if err != nil {
        fmt.Println("Error loading CNI config:", err)
        return
    }

    // Describe the container and network namespace to attach.
    rt := &libcni.RuntimeConf{
        ContainerID: "example-container",
        NetNS:       "/var/run/netns/example-ns",
        IfName:      "eth0",
    }

    result, err := cniConfig.AddNetwork(context.TODO(), netConf, rt)
    if err != nil {
        fmt.Println("Error adding network:", err)
        return
    }
    fmt.Printf("Network added successfully: %+v\n", result)
}
This example demonstrates how to load a CNI configuration and add a network using the CNI library in Go; a teardown counterpart is sketched after these examples.
- Bash script to invoke a CNI plugin:
#!/bin/bash
export CNI_COMMAND=ADD
export CNI_CONTAINERID=example-container
export CNI_NETNS=/var/run/netns/example-ns
export CNI_IFNAME=eth0
export CNI_PATH=/opt/cni/bin
echo '{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge"
}' | /opt/cni/bin/bridge
This script shows how to manually invoke a CNI plugin (in this case, the bridge plugin) using environment variables and a JSON configuration.
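Because CNI's contract is that every successful ADD is eventually paired with a DEL that releases the allocated resources, a matching teardown call is usually needed. Here is a minimal sketch, reusing the same assumed plugin path, config directory, and placeholder container details as the Go example above, that removes the network with libcni's DelNetwork:
package main

import (
    "context"
    "fmt"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

    netConf, err := libcni.LoadConf("/etc/cni/net.d", "mynet")
    if err != nil {
        fmt.Println("Error loading CNI config:", err)
        return
    }

    rt := &libcni.RuntimeConf{
        ContainerID: "example-container",
        NetNS:       "/var/run/netns/example-ns",
        IfName:      "eth0",
    }

    // DelNetwork invokes the plugin with CNI_COMMAND=DEL and releases
    // whatever the earlier AddNetwork call allocated.
    if err := cni.DelNetwork(context.TODO(), netConf, rt); err != nil {
        fmt.Println("Error deleting network:", err)
        return
    }
    fmt.Println("Network removed")
}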
Getting Started
To get started with CNI:
- Install CNI plugins (a quick programmatic check is sketched after this list):

  git clone https://github.com/containernetworking/plugins.git
  cd plugins
  ./build_linux.sh
  sudo mkdir -p /opt/cni/bin
  sudo cp bin/* /opt/cni/bin/

- Create a network configuration file (e.g., /etc/cni/net.d/10-mynet.conf):

  {
    "cniVersion": "0.4.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "ipam": {
      "type": "host-local",
      "subnet": "10.1.0.0/16"
    }
  }

- Use CNI with your container runtime or orchestration tool (e.g., Kubernetes, containerd, or Docker with CNI plugins).
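Once the plugins are installed, a quick way to confirm that the Go library can see them is to ask a plugin for the spec versions it supports. This is only a sketch: it assumes the /opt/cni/bin install location used above, and the "bridge" plugin name is just an example:
package main

import (
    "context"
    "fmt"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

    // Ask the bridge plugin which CNI spec versions it implements.
    info, err := cni.GetVersionInfo(context.TODO(), "bridge")
    if err != nil {
        fmt.Println("plugin not found or not executable:", err)
        return
    }
    fmt.Println("bridge plugin supports CNI versions:", info.SupportedVersions())
}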
Competitor Comparisons
Kubernetes: Production-Grade Container Scheduling and Management
Pros of Kubernetes
- Comprehensive container orchestration platform with extensive features
- Large, active community and ecosystem with numerous tools and integrations
- Built-in scaling, load balancing, and self-healing capabilities
Cons of Kubernetes
- Steeper learning curve and more complex setup compared to CNI
- Heavier resource requirements for running a full cluster
- Potential overkill for simple container networking needs
Code Comparison
CNI (example plugin configuration):
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/16"
  }
}
Kubernetes (example Pod specification; the network attachment itself is delegated to the cluster's CNI plugin):
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myimage
CNI focuses on container network interface specifications, while Kubernetes provides a full-featured container orchestration platform. CNI is more lightweight and flexible for custom networking solutions, whereas Kubernetes offers a complete ecosystem for managing containerized applications at scale.
libnetwork: networking for containers
Pros of libnetwork
- Tightly integrated with Docker, providing seamless networking for Docker containers
- Supports multiple network drivers (bridge, overlay, macvlan) out of the box
- Offers built-in service discovery and load balancing features
Cons of libnetwork
- Less flexible for non-Docker container runtimes
- More complex architecture, potentially harder to extend or customize
- Limited support for third-party network plugins compared to CNI
Code Comparison
libnetwork:
network, err := controller.NewNetwork("bridge", "mynet", "",
    libnetwork.NetworkOptionEnableIPv6(false))
CNI:
netconf := &types.NetConf{
    CNIVersion: "0.4.0",
    Name:       "mynet",
    Type:       "bridge",
    IPAM:       types.IPAM{Type: "host-local"},
}
Summary
libnetwork is tailored for Docker environments, offering tight integration and built-in features. CNI, on the other hand, provides a more flexible and standardized approach to container networking across various runtimes. While libnetwork excels in Docker-specific scenarios, CNI's simplicity and broad ecosystem support make it a popular choice for diverse container environments, especially in Kubernetes deployments.
Calico: Cloud native networking and network security
Pros of Calico
- Provides a complete networking solution with advanced features like network policy enforcement and security controls
- Offers better performance and scalability for large clusters
- Includes built-in support for network isolation and microsegmentation
Cons of Calico
- More complex to set up and configure compared to CNI
- Requires additional components and resources to run
- May have a steeper learning curve for newcomers to container networking
Code Comparison
CNI (basic configuration):
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/16"
  }
}
Calico (basic configuration):
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: Always
  natOutgoing: true
Summary
While CNI provides a standardized interface for container networking, Calico offers a more comprehensive solution with advanced features and better performance for large-scale deployments. However, Calico's additional complexity may be overkill for simpler use cases where CNI's simplicity and ease of use are sufficient.
Cilium: eBPF-based Networking, Security, and Observability
Pros of Cilium
- Advanced network security features with eBPF-based filtering and policy enforcement
- Built-in observability and monitoring capabilities
- Supports multi-cluster networking and service mesh functionality
Cons of Cilium
- Steeper learning curve due to its complexity and advanced features
- Requires more resources to run compared to simpler CNI implementations
- May be overkill for basic container networking needs
Code Comparison
CNI (basic network configuration):
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/16"
  }
}
Cilium (network policy example):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "allow-frontend-backend"
spec:
endpointSelector:
matchLabels:
app: backend
ingress:
- fromEndpoints:
- matchLabels:
app: frontend
The CNI example shows a basic network configuration, while the Cilium example demonstrates its advanced policy capabilities.
Flannel: a network fabric for containers, designed for Kubernetes
Pros of Flannel
- Simple and easy to set up for basic networking needs
- Provides a flat network across multiple nodes
- Works well with Kubernetes out of the box
Cons of Flannel
- Limited advanced networking features compared to CNI
- Less flexibility in network configuration options
- May not be suitable for complex networking requirements
Code Comparison
Flannel configuration example:
{
  "Network": "10.0.0.0/8",
  "Backend": {
    "Type": "vxlan"
  }
}
CNI configuration example:
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.0.0.0/24"
  }
}
Flannel focuses on providing a simple overlay network, while CNI offers a more flexible and extensible approach to container networking. Flannel is easier to set up for basic use cases, but CNI provides more options for advanced networking configurations. The code examples show the difference in configuration complexity, with Flannel requiring less detailed setup compared to CNI's more granular approach.
Weave: simple, resilient multi-host container networking and more
Pros of Weave
- Provides a complete networking solution with built-in service discovery and DNS
- Offers encryption and network policy features out-of-the-box
- Supports multi-host networking without additional configuration
Cons of Weave
- Can be more complex to set up and manage compared to CNI
- May have higher resource overhead due to its comprehensive feature set
- Less flexibility for customization as it's a more opinionated solution
Code Comparison
Weave configuration example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: weave-net
  namespace: kube-system
data:
  network: "10.32.0.0/12"
CNI configuration example:
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
The Weave configuration is typically more concise, while CNI configurations offer more granular control over network settings.
README
CNI - the Container Network Interface
What is CNI?
CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
As well as the specification, this repository contains the Go source code of a library for integrating CNI into applications and an example command-line tool for executing CNI plugins. A separate repository contains reference plugins and a template for making new plugins.
The template code makes it straight-forward to create a CNI plugin for an existing container networking project. CNI also makes a good framework for creating a new container networking project from scratch.
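To make the previous paragraph concrete, here is a minimal sketch (not taken from the template repository) of what a plugin built on this repository's pkg/skel helper can look like; the hard-coded result and the plugin description string are placeholders only:
package main

import (
    "fmt"

    "github.com/containernetworking/cni/pkg/skel"
    "github.com/containernetworking/cni/pkg/version"
)

func cmdAdd(args *skel.CmdArgs) error {
    // args.StdinData carries the network configuration JSON; a real plugin
    // would parse it and configure args.Netns / args.IfName accordingly.
    // On ADD, a plugin prints a JSON result on stdout.
    fmt.Println(`{"cniVersion": "0.4.0", "interfaces": [], "ips": []}`)
    return nil
}

func cmdCheck(args *skel.CmdArgs) error { return nil }

func cmdDel(args *skel.CmdArgs) error {
    // DEL releases whatever ADD allocated; this no-op has nothing to clean up.
    return nil
}

func main() {
    skel.PluginMain(cmdAdd, cmdCheck, cmdDel,
        version.PluginSupports("0.3.0", "0.3.1", "0.4.0"),
        "example no-op CNI plugin")
}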
Recordings of two sessions that the CNI maintainers hosted at KubeCon/CloudNativeCon 2019 are available online.
Contributing to CNI
We welcome contributions, including bug reports, and code and documentation improvements. If you intend to contribute to code or documentation, please read CONTRIBUTING.md. Also see the contact section in this README.
The CNI project has a weekly meeting. It takes place Mondays at 11:00 US/Eastern. All are welcome to join.
Why develop CNI?
Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific. We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.
To avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.
Who is using CNI?
Container runtimes
- Kubernetes - a system to simplify container operations
- OpenShift - Kubernetes with additional enterprise features
- Cloud Foundry - a platform for cloud applications
- Apache Mesos - a distributed systems kernel
- Amazon ECS - a highly scalable, high performance container management service
- Singularity - container platform optimized for HPC, EPC, and AI
- OpenSVC - orchestrator for legacy and containerized application stacks
3rd party plugins
- Project Calico - a layer 3 virtual network
- Contiv Networking - policy networking for various use cases
- SR-IOV
- Cilium - eBPF & XDP for containers
- Multus - a Multi plugin
- Romana - Layer 3 CNI plugin supporting network policy for Kubernetes
- CNI-Genie - generic CNI network plugin
- Nuage CNI - Nuage Networks SDN plugin for network policy kubernetes support
- Linen - a CNI plugin designed for overlay networks with Open vSwitch, fitting into SDN/OpenFlow network environments
- Vhostuser - a Dataplane network plugin - Supports OVS-DPDK & VPP
- Amazon ECS CNI Plugins - a collection of CNI Plugins to configure containers with Amazon EC2 elastic network interfaces (ENIs)
- Bonding CNI - a Link aggregating plugin to address failover and high availability network
- ovn-kubernetes - a container network plugin built on Open vSwitch (OVS) and Open Virtual Networking (OVN) with support for both Linux and Windows
- Juniper Contrail / TungstenFabric - Provides overlay SDN solution, delivering multicloud networking, hybrid cloud networking, simultaneous overlay-underlay support, network policy enforcement, network isolation, service chaining and flexible load balancing
- Knitter - a CNI plugin supporting multiple networking for Kubernetes
- DANM - a CNI-compliant networking solution for TelCo workloads running on Kubernetes
- cni-route-override - a meta CNI plugin that overrides route information
- Terway - a collection of CNI plugins based on the Alibaba Cloud VPC/ECS network products
- Cisco ACI CNI - for on-prem and cloud container networking with consistent policy and security model.
- Kube-OVN - a CNI plugin based on OVN/OVS that provides advanced features like subnet, static IP, ACL, QoS, etc.
- Project Antrea - an Open vSwitch k8s CNI
- Azure CNI - a CNI plugin that natively extends Azure Virtual Networks to containers
- Hybridnet - a CNI plugin designed for hybrid clouds which provides both overlay and underlay networking for containers in one or more clusters. Overlay and underlay containers can run on the same node and have cluster-wide bidirectional network connectivity.
- Spiderpool - an IP Address Management (IPAM) CNI plugin for Kubernetes that manages static IPs for underlay networks
- AWS VPC CNI - Networking plugin for pod networking in Kubernetes using Elastic Network Interfaces on AWS
The CNI team also maintains some core plugins in a separate repository.
How do I use CNI?
Requirements
The CNI spec is language agnostic. To use the Go language libraries in this repository, you'll need a recent version of Go. You can find the Go versions covered by our automated tests in .travis.yml.
Reference Plugins
The CNI project maintains a set of reference plugins that implement the CNI specification. NOTE: the reference plugins used to live in this repository but have been split out into a separate repository as of May 2017.
Running the plugins
After building and installing the reference plugins, you can use the priv-net-run.sh and docker-run.sh scripts in the scripts/ directory to exercise the plugins.
Note: priv-net-run.sh depends on jq.
Start out by creating a netconf file to describe a network:
$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF
The directory /etc/cni/net.d is the default location in which the scripts will look for net configurations.
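For programmatic callers, the same directory can be scanned with the libcni helpers in this repository. A small sketch, assuming the default /etc/cni/net.d location described above:
package main

import (
    "fmt"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    // List both plain plugin configurations and configuration lists.
    files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist"})
    if err != nil {
        fmt.Println("error listing net configurations:", err)
        return
    }
    for _, f := range files {
        fmt.Println("found net configuration:", f)
    }
}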
Next, build the plugins:
$ cd $GOPATH/src/github.com/containernetworking/plugins
$ ./build_linux.sh # or build_windows.sh
Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The environment variable CNI_PATH tells the scripts and library where to look for plugin executables.
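A Go caller can resolve a plugin binary in the same way with pkg/invoke from this repository. The sketch below assumes CNI_PATH is set as in the shell session above, and uses the bridge plugin purely as an example:
package main

import (
    "fmt"
    "os"
    "path/filepath"

    "github.com/containernetworking/cni/pkg/invoke"
)

func main() {
    // CNI_PATH may hold one or more directories, e.g. /opt/cni/bin.
    paths := filepath.SplitList(os.Getenv("CNI_PATH"))

    pluginPath, err := invoke.FindInPath("bridge", paths)
    if err != nil {
        fmt.Println("bridge plugin not found on CNI_PATH:", err)
        return
    }
    fmt.Println("would execute:", pluginPath)
}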
Running a Docker container with network namespace set up by CNI plugins
Use the instructions in the previous section to define a netconf and build the plugins.
Next, the docker-run.sh script wraps docker run to execute the plugins prior to entering the container:
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
What might CNI do in the future?
CNI currently covers a wide range of needs for network configuration due to its simple model and API. However, in the future CNI might want to branch out into other directions:
- Dynamic updates to existing network configuration
- Dynamic policies for network bandwidth and firewall rules
If these topics are of interest, please contact the team via the mailing list or IRC and find some like-minded people in the community to put a proposal together.
Where are the binaries?
The plugins moved to a separate repo: https://github.com/containernetworking/plugins, and the releases there include binaries and checksums.
Prior to release 0.7.0 the cni release also included a cnitool binary; as this is a developer tool, we suggest you build it yourself.
Contact
For any questions about CNI, please reach out via:
- Email: cni-dev
- IRC: #containernetworking channel on freenode.net
- Slack: #cni on the CNCF slack. NOTE: the previous CNI Slack (containernetworking.slack.com) has been sunsetted.
Security
If you have a security issue to report, please do so privately to the email addresses listed in the MAINTAINERS file.