containernetworking/plugins

Some reference and example networking plugins, maintained by the CNI team.

Top Related Projects

  • Calico: Cloud native networking and network security
  • Cilium: eBPF-based Networking, Security, and Observability
  • Flannel: A network fabric for containers, designed for Kubernetes
  • Weave: Simple, resilient multi-host containers networking and more

Quick Overview

The containernetworking/plugins repository is a collection of standard networking plugins for CNI-compatible container runtimes and orchestrators such as containerd, CRI-O, and Kubernetes. These plugins provide a consistent and extensible way to manage network interfaces for containers, allowing for the creation of complex network topologies and the integration of various networking technologies.
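
Under the CNI specification, the contract between a runtime and a plugin is deliberately small: the runtime sets a handful of CNI_* environment variables, writes the network configuration as JSON to the plugin's stdin, and reads a JSON result from its stdout. Here is a toy sketch of the plugin side of that contract, for illustration only; real plugins build on the skel helper package shown later:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "os"
)

// Toy plugin for illustration only; real plugins should use the skel
// package from github.com/containernetworking/cni instead.
func main() {
    // The runtime selects the operation via CNI_COMMAND (ADD, DEL,
    // CHECK or VERSION) and passes context in CNI_CONTAINERID,
    // CNI_NETNS, CNI_IFNAME and CNI_PATH.
    cmd := os.Getenv("CNI_COMMAND")

    // The network configuration arrives as JSON on stdin.
    stdin, err := io.ReadAll(os.Stdin)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    var conf map[string]interface{}
    if err := json.Unmarshal(stdin, &conf); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }

    // A real plugin would do its work and write a JSON result to
    // stdout; this toy only logs what was requested.
    fmt.Fprintf(os.Stderr, "%s requested for network %v\n", cmd, conf["name"])
}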

Pros

  • Extensibility: The plugin architecture allows for the easy addition of new networking technologies and features, making the system highly adaptable to changing requirements.
  • Standardization: The plugins adhere to the Container Network Interface (CNI) specification, ensuring compatibility with a wide range of container runtimes and platforms; the sketch after this list shows the shared configuration format in action.
  • Performance: The plugins are designed to be lightweight and efficient, minimizing the overhead on container networking.
  • Community Support: The project has a large and active community of contributors, ensuring ongoing development and maintenance.
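
For instance, the standard configuration format lets independently developed plugins be chained in a single network definition. A minimal sketch parsing a chained list with the CNI library's libcni package (the plugin types are real, but the network name and subnet are illustrative values):

package main

import (
    "fmt"
    "log"

    "github.com/containernetworking/cni/libcni"
)

// A chained configuration: bridge creates the interface, then portmap
// adds port-forwarding rules on top of it.
const confList = `{
    "cniVersion": "0.4.0",
    "name": "mynet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"}
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true}
        }
    ]
}`

func main() {
    list, err := libcni.ConfListFromBytes([]byte(confList))
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range list.Plugins {
        fmt.Println("chained plugin:", p.Network.Type)
    }
}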

Cons

  • Complexity: The plugin system can be complex to set up and configure, especially for users new to container networking.
  • Limited Documentation: Core usage is well documented, but some plugin-specific options and edge cases are thinly covered, which can make it challenging for newcomers to get started.
  • Dependency on Container Runtimes: The plugins are tightly coupled with the container runtime, which can limit their portability across different platforms.
  • Potential Performance Issues: In some cases, the use of plugins may introduce additional overhead or performance bottlenecks, depending on the specific networking requirements.

Code Examples

Here are a few examples of how to use these plugins in a container networking setup:

  1. Creating a Bridge Network:

The plugins in this repository are standalone binaries rather than importable Go packages, so callers drive them through the CNI library's libcni package. A minimal sketch (the container ID, namespace path, and subnet are illustrative placeholders):

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containernetworking/cni/libcni"
)

const conf = `{"cniVersion": "0.4.0", "name": "mynet", "type": "bridge",
    "bridge": "cni0", "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"}}`

func main() {
    netConf, err := libcni.ConfFromBytes([]byte(conf))
    if err != nil {
        log.Fatal(err)
    }
    // Look up plugin binaries in the conventional install location.
    cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
    result, err := cni.AddNetwork(context.Background(), netConf, &libcni.RuntimeConf{
        ContainerID: "example",
        NetNS:       "/var/run/netns/example", // must be an existing netns
        IfName:      "eth0",
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result)
}

This runs the bridge plugin with ADD semantics: it creates the cni0 bridge if needed, connects the namespace to it with a veth pair, and returns the address allocated by the host-local IPAM plugin.

  2. Configuring a Loopback Interface:

The loopback plugin simply sets the container's lo interface up. It needs no Go code on the caller's side and no IPAM section, since it assigns no addresses; the runtime executes it with a minimal configuration, typically as the first network set up in every namespace:

{
    "cniVersion": "0.4.0",
    "name": "lo",
    "type": "loopback"
}

With this configuration in place, any CNI-aware runtime brings up the loopback interface before attaching other networks.

  3. Implementing a Custom Network Plugin:

New plugins are built on the skel package from the CNI library, which parses the environment and stdin and dispatches to your ADD/CHECK/DEL handlers. A skeleton sketch ("my-plugin" is a hypothetical name; signatures match the CNI library around v1.0, and the sample plugin in this repository shows the canonical layout):

package main

import (
    "github.com/containernetworking/cni/pkg/skel"
    "github.com/containernetworking/cni/pkg/types"
    current "github.com/containernetworking/cni/pkg/types/100"
    "github.com/containernetworking/cni/pkg/version"
    bv "github.com/containernetworking/plugins/pkg/utils/buildversion"
)

func cmdAdd(args *skel.CmdArgs) error {
    // Create interfaces and allocate addresses here, then report what
    // was done: ADD must print a result JSON to stdout.
    result := &current.Result{CNIVersion: current.ImplementedSpecVersion}
    return types.PrintResult(result, result.CNIVersion)
}

func cmdCheck(args *skel.CmdArgs) error { return nil }

func cmdDel(args *skel.CmdArgs) error { return nil }

func main() {
    skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, bv.BuildString("my-plugin"))
}

This skeleton compiles to a standalone binary; dropped into /opt/cni/bin and referenced by "type": "my-plugin" in a network configuration, it will be executed by any CNI-aware runtime.

Getting Started

To get started with the containernetworking/plugins repository, follow these steps:

  1. Clone the repository:
git clone https://github.com/containernetworking/plugins.git
  2. Build the plugins:
cd plugins
./build_linux.sh
  3. Install the plugins:
sudo mkdir -p /opt/cni/bin
sudo cp bin/* /opt/cni/bin/
  4. Configure your container runtime or orchestrator (e.g., containerd, CRI-O, or Kubernetes via the kubelet) to load network configurations from /etc/cni/net.d and to look for plugin binaries in /opt/cni/bin, as sketched below.
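
To sanity-check step 4, a small sketch using libcni (part of the CNI library, a separate repository) can list the network configurations a runtime would discover. The directory path is the conventional default and may differ on your system:

package main

import (
    "fmt"
    "log"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    // Runtimes conventionally read configs from /etc/cni/net.d and
    // look for plugin binaries in /opt/cni/bin.
    files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
    if err != nil {
        log.Fatal(err)
    }
    if len(files) == 0 {
        fmt.Println("no CNI network configurations found")
        return
    }
    for _, f := range files {
        fmt.Println("found network config:", f)
    }
}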

For more detailed instructions and configuration options, please refer to the repository's README and the CNI website.

Competitor Comparisons

Calico

Cloud native networking and network security

Pros of Calico

  • Advanced network policy features for fine-grained control
  • Better performance and scalability for large clusters
  • Integrated BGP routing for efficient traffic management

Cons of Calico

  • More complex setup and configuration
  • Steeper learning curve for administrators
  • Limited support for non-Kubernetes environments

Code Comparison

Calico (YAML configuration):

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
spec:
  selector: app == 'database'
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 6379

CNI Plugins (JSON configuration):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}

Calico offers more advanced network policies and routing capabilities, while CNI Plugins provide a simpler, more flexible approach for basic networking needs. Calico excels in large-scale Kubernetes deployments, whereas CNI Plugins are more versatile across different container runtimes and orchestrators.

Cilium

eBPF-based Networking, Security, and Observability

Pros of Cilium

  • Advanced network security features with eBPF-based filtering and policy enforcement
  • High-performance networking with XDP and eBPF optimizations
  • Integrated service mesh and load balancing capabilities

Cons of Cilium

  • Steeper learning curve due to more complex architecture
  • Requires newer kernel versions for full feature support
  • Potentially higher resource usage compared to simpler CNI plugins

Code Comparison

Cilium (eBPF-based packet processing):

static __always_inline int handle_ipv6(struct __ctx_buff *ctx, __u32 src_id)
{
    struct ipv6_ct_tuple tuple = {};
    void *data, *data_end;
    struct ipv6hdr *ip6;

    if (!revalidate_data(ctx, &data, &data_end, &ip6))
        return DROP_INVALID;

Plugins (traditional network processing):

func cmdAdd(args *skel.CmdArgs) error {
    n, cniVersion, err := loadNetConf(args.StdinData)
    if err != nil {
        return err
    }

    isLayer3 := n.IPAM.Type != ""

Cilium offers more advanced networking features and security capabilities, leveraging eBPF for high performance. However, it comes with increased complexity and resource requirements. The CNI plugins provide a simpler, more traditional approach to container networking, which may be sufficient for many use cases but lacks some of Cilium's advanced features.

Flannel

flannel is a network fabric for containers, designed for Kubernetes

Pros of Flannel

  • Simpler setup and configuration for basic networking needs
  • Built-in support for multiple backend types (e.g., VXLAN, host-gw)
  • Lightweight and focused specifically on container networking

Cons of Flannel

  • Less flexible than CNI plugins for advanced networking scenarios
  • Limited support for network policies and security features
  • Fewer options for customization and extensibility

Code Comparison

Flannel configuration example:

{
  "Network": "10.1.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

CNI plugins configuration example:

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/16"
  }
}

Both projects aim to provide networking solutions for container environments, but they differ in their approach and scope. Flannel offers a more straightforward solution for basic container networking needs, while CNI plugins provide a more flexible and extensible framework for various networking scenarios. The code examples demonstrate the difference in configuration complexity, with Flannel having a simpler setup compared to the more detailed CNI plugin configuration.

Weave

Simple, resilient multi-host containers networking and more.

Pros of Weave

  • Provides a virtual network that connects Docker containers across multiple hosts
  • Offers automatic IP address allocation and service discovery
  • Includes built-in encryption and network policy features

Cons of Weave

  • More complex setup compared to simpler CNI plugins
  • Can have performance overhead due to its overlay network approach
  • May require more resources and management compared to lightweight alternatives

Code Comparison

Weave:

func (nw *netWrapper) Attach(id string) (*network.ScopeInfo, error) {
    if err := nw.watcher.PauseCheck(); err != nil {
        return nil, err
    }
    defer nw.watcher.UnpauseCheck()
    return nw.Network.Attach(id)
}

CNI Plugins:

func (t *SimpleBridge) cmdAdd(args *skel.CmdArgs) error {
    netConf, cniVersion, err := loadNetConf(args.StdinData)
    if err != nil {
        return err
    }
    return t.add(args, netConf, cniVersion)
}

Both repositories provide networking solutions for containerized environments, but they differ in scope and implementation. Weave offers a more comprehensive networking solution with advanced features, while CNI Plugins focuses on providing a set of reference implementations for the Container Network Interface specification.


README

Plugins

Some CNI network plugins, maintained by the containernetworking team. For more information, see the CNI website.

Read CONTRIBUTING for build and test instructions.

Plugins supplied:

Main: interface-creating

  • bridge: Creates a bridge, adds the host and the container to it.
  • ipvlan: Adds an ipvlan interface in the container.
  • loopback: Sets the state of the loopback interface to up.
  • macvlan: Creates a new MAC address and forwards all traffic addressed to it into the container.
  • ptp: Creates a veth pair.
  • vlan: Allocates a vlan device.
  • host-device: Moves an already-existing device into a container.
  • dummy: Creates a new dummy device in the container.

Windows: Windows specific

  • win-bridge: Creates a bridge, adds the host and the container to it.
  • win-overlay: Creates an overlay interface to the container.

IPAM: IP address allocation

  • dhcp: Runs a daemon on the host to make DHCP requests on behalf of the container.
  • host-local: Maintains a local database of allocated IPs.
  • static: Allocates a static IPv4/IPv6 address to a container. Useful for debugging.

Meta: other plugins

  • tuning: Tweaks sysctl parameters of an existing interface
  • portmap: An iptables-based portmapping plugin. Maps ports from the host's address space to the container (see the sketch after this list).
  • bandwidth: Allows bandwidth-limiting through use of traffic control tbf (ingress/egress).
  • sbr: A plugin that configures source based routing for an interface (from which it is chained).
  • firewall: A firewall plugin which uses iptables or firewalld to add rules to allow traffic to/from the container.
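
As noted for portmap above, port mappings are a runtime capability: the concrete ports are supplied per container by the runtime rather than hard-coded in the network configuration. A hedged sketch of how a caller passes them through the CNI library's libcni package (all names, paths, and port values are illustrative):

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containernetworking/cni/libcni"
)

const confList = `{
    "cniVersion": "0.4.0",
    "name": "mynet",
    "plugins": [
        {"type": "bridge", "bridge": "cni0",
         "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
    ]
}`

func main() {
    list, err := libcni.ConfListFromBytes([]byte(confList))
    if err != nil {
        log.Fatal(err)
    }
    cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
    // The runtime injects per-container mappings as capability args;
    // portmap receives them under runtimeConfig.portMappings.
    rt := &libcni.RuntimeConf{
        ContainerID: "example",
        NetNS:       "/var/run/netns/example", // placeholder namespace path
        IfName:      "eth0",
        CapabilityArgs: map[string]interface{}{
            "portMappings": []map[string]interface{}{
                {"hostPort": 8080, "containerPort": 80, "protocol": "tcp"},
            },
        },
    }
    result, err := cni.AddNetworkList(context.Background(), list, rt)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(result)
}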

Sample

The sample plugin provides an example for building your own plugin.

Contact

For any questions about CNI, please reach out via the contact channels listed on the CNI website.

If you have a security issue to report, please do so privately to the email addresses listed in the OWNERS file.