
moby/libnetwork

networking for containers

2,189 stars · 880 forks

Top Related Projects

  • libnetwork — networking for containers
  • CNI — Container Network Interface - networking for Linux containers (5,800 stars)
  • Cilium — eBPF-based Networking, Security, and Observability (22,159 stars)
  • Calico — Cloud native networking and network security (6,557 stars)
  • Weave — Simple, resilient multi-host containers networking and more (6,626 stars)
  • flannel — a network fabric for containers, designed for Kubernetes (9,168 stars)

Quick Overview

libnetwork is a Go-based networking library that provides a unified networking API and runtime to manage network connectivity of containers and applications. It is a core component of the Moby (Docker) project, responsible for managing the networking aspects of containers.

Pros

  • Unified Networking API: libnetwork provides a consistent and extensible networking API, allowing developers to manage network connectivity of their applications and containers.
  • Modular Design: The library is designed with a modular architecture, making it easy to integrate with different networking backends and plugins.
  • Cross-platform Compatibility: libnetwork supports multiple operating systems and networking backends, ensuring compatibility across different environments.
  • Active Development and Community: The project is actively maintained by the Moby (Docker) team and has a strong community of contributors.

Cons

  • Complexity: As a core networking component, libnetwork can be complex to understand and integrate, especially for developers new to container networking.
  • Limited Documentation: The project's documentation, while improving, could be more comprehensive and user-friendly for developers.
  • Performance Overhead: Depending on the networking backend and use case, libnetwork may introduce some performance overhead compared to direct network management.
  • Dependency on Moby (Docker): libnetwork is tightly coupled with the Moby (Docker) project, which may limit its adoption in non-Docker environments.

Code Examples

Here are a few examples of how to use libnetwork in Go:

  1. Creating a Network:
package main

import (
    "fmt"
    "log"

    "github.com/docker/libnetwork"
)

func main() {
    // Create a controller, the entry point into libnetwork
    controller, err := libnetwork.New()
    if err != nil {
        log.Fatalf("Error creating controller: %s", err)
    }

    // Create a new network on the bridge driver
    network, err := controller.NewNetwork("bridge", "mynetwork", "")
    if err != nil {
        log.Fatalf("Error creating network: %s", err)
    }

    fmt.Println("Network created:", network.Name())
}
  2. Creating a Container Endpoint:
package main

import (
    "fmt"
    "log"

    "github.com/docker/libnetwork"
)

func main() {
    controller, err := libnetwork.New()
    if err != nil {
        log.Fatalf("Error creating controller: %s", err)
    }

    network, err := controller.NewNetwork("bridge", "mynetwork", "")
    if err != nil {
        log.Fatalf("Error creating network: %s", err)
    }

    // Create an endpoint on the network; it holds the container's
    // interface and IP allocation
    endpoint, err := network.CreateEndpoint("mycontainer")
    if err != nil {
        log.Fatalf("Error creating endpoint: %s", err)
    }

    fmt.Println("Endpoint created:", endpoint.Name())
}
  3. Connecting a Container to a Network:
package main

import (
    "fmt"
    "log"

    "github.com/docker/libnetwork"
)

func main() {
    controller, err := libnetwork.New()
    if err != nil {
        log.Fatalf("Error creating controller: %s", err)
    }

    network, err := controller.NewNetwork("bridge", "mynetwork", "")
    if err != nil {
        log.Fatalf("Error creating network: %s", err)
    }

    endpoint, err := network.CreateEndpoint("mycontainer")
    if err != nil {
        log.Fatalf("Error creating endpoint: %s", err)
    }

    // A sandbox represents the container's network namespace;
    // joining an endpoint attaches its interface to the sandbox
    sandbox, err := controller.NewSandbox("mycontainer")
    if err != nil {
        log.Fatalf("Error creating sandbox: %s", err)
    }

    if err := endpoint.Join(sandbox); err != nil {
        log.Fatalf("Error joining endpoint: %s", err)
    }

    fmt.Println("Container connected to network")
}
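  4. Tearing Down:
A hedged sketch of the reverse path, under the same assumptions as the examples above: detach the endpoint from its sandbox, then delete each object. Leave and the various Delete calls exist on libnetwork's Endpoint, Sandbox, and Network interfaces, though their exact signatures have varied across revisions.
package main

import (
    "log"

    "github.com/docker/libnetwork"
)

func main() {
    controller, err := libnetwork.New()
    if err != nil {
        log.Fatalf("Error creating controller: %s", err)
    }

    network, err := controller.NewNetwork("bridge", "mynetwork", "")
    if err != nil {
        log.Fatalf("Error creating network: %s", err)
    }

    endpoint, err := network.CreateEndpoint("mycontainer")
    if err != nil {
        log.Fatalf("Error creating endpoint: %s", err)
    }

    sandbox, err := controller.NewSandbox("mycontainer")
    if err != nil {
        log.Fatalf("Error creating sandbox: %s", err)
    }

    if err := endpoint.Join(sandbox); err != nil {
        log.Fatalf("Error joining endpoint: %s", err)
    }

    // Teardown in reverse order: detach first, then delete each object
    if err := endpoint.Leave(sandbox); err != nil {
        log.Fatalf("Error leaving endpoint: %s", err)
    }
    if err := endpoint.Delete(false); err != nil {
        log.Fatalf("Error deleting endpoint: %s", err)
    }
    if err := sandbox.Delete(); err != nil {
        log.Fatalf("Error deleting sandbox: %s", err)
    }
    if err := network.Delete(); err != nil {
        log.Fatalf("Error deleting network: %s", err)
    }
}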

Getting Started

To get started with libnetwork, you can follow these steps:

  1. Install the Go programming language on your system.
  2. Create a new Go project and add the libnetwork dependency to your go.mod file (a complete go.mod sketch follows this list):
require github.com/docker/libnetwork v0.8.0-dev.2.0.20210927162939-3c4f71f1cabd
  3. Import the libnetwork package in your Go code and start using the provided APIs to manage network connectivity for your applications and containers.
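For reference, a minimal complete go.mod sketch; the module path example.com/libnetwork-demo is a placeholder:

module example.com/libnetwork-demo

go 1.16

require github.com/docker/libnetwork v0.8.0-dev.2.0.20210927162939-3c4f71f1cabd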

For more detailed information on using libnetwork, please refer to the project's documentation.

Competitor Comparisons

libnetwork — networking for containers

Pros of libnetwork

  • Active development and maintenance
  • Extensive documentation and community support
  • Well-integrated with Docker ecosystem

Cons of libnetwork

  • Complexity may be overwhelming for simple use cases
  • Potential performance overhead in certain scenarios
  • Learning curve for newcomers to container networking

Code Comparison

libnetwork:

func (c *controller) NewNetwork(networkType, name string, id string, options ...NetworkOption) (Network, error) {
    if !config.IsValidName(name) {
        return nil, ErrInvalidName(name)
    }
    // ... (additional code)
}

Since both repositories are the same, there is no difference in the code structure or implementation. The code snippet above is an example of how network creation is handled in libnetwork.

Summary

As the comparison is between the same repository (moby/libnetwork), there are no distinct differences to highlight. libnetwork is a crucial component of the Docker ecosystem, providing networking capabilities for containers. It offers robust features and integration but may have a steeper learning curve for beginners. The repository is actively maintained and benefits from community support, making it a reliable choice for container networking needs.

CNI — Container Network Interface - networking for Linux containers (5,800 stars)

Pros of CNI

  • More flexible and extensible architecture
  • Broader ecosystem support beyond Docker
  • Standardized interface for multiple container runtimes

Cons of CNI

  • Steeper learning curve for beginners
  • Less tightly integrated with Docker ecosystem
  • May require additional configuration for some use cases

Code Comparison

libnetwork

func (c *controller) NewNetwork(networkType, name string, id string, options ...NetworkOption) (Network, error) {
    // Network creation logic
}

CNI

func (c *CNIConfig) AddNetwork(net *NetworkConfig, rt *RuntimeConf) (types.Result, error) {
    // Network addition logic
}

Key Differences

  • libnetwork is Docker-specific, while CNI is container runtime-agnostic
  • CNI focuses on network connectivity, while libnetwork includes more features like service discovery
  • libnetwork uses a monolithic approach, whereas CNI employs a plugin-based architecture (sketched below)
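To make the plugin model concrete, here is a hedged sketch of driving a CNI plugin chain from Go with the libcni helper package. The network name "mynet", the config and plugin directories, and the netns path are illustrative; the signatures follow recent CNI releases, which (unlike the older AddNetwork snippet above) take a context argument.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    // Load the "mynet" network configuration from the standard CNI config directory
    netconf, err := libcni.LoadConfList("/etc/cni/net.d", "mynet")
    if err != nil {
        log.Fatalf("loading CNI config: %s", err)
    }

    // Point libcni at the directory holding the plugin binaries
    cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

    // Describe the container: its ID, network namespace path, and interface name
    rt := &libcni.RuntimeConf{
        ContainerID: "example",
        NetNS:       "/var/run/netns/example",
        IfName:      "eth0",
    }

    // ADD executes each plugin in the chain to wire the container into the network
    result, err := cni.AddNetworkList(context.Background(), netconf, rt)
    if err != nil {
        log.Fatalf("adding network: %s", err)
    }
    fmt.Println("CNI result:", result)
}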

Use Cases

  • libnetwork: Ideal for Docker-centric environments and simpler setups
  • CNI: Better suited for multi-runtime environments, Kubernetes, and complex networking scenarios

Community and Adoption

  • libnetwork: Primarily supported by Docker community
  • CNI: Wider adoption across container ecosystems, including Kubernetes

Cilium — eBPF-based Networking, Security, and Observability (22,159 stars)

Pros of Cilium

  • Advanced network security features with eBPF-based filtering and policy enforcement
  • Better performance and scalability for large Kubernetes clusters
  • Integrated service mesh capabilities and Envoy integration

Cons of Cilium

  • Steeper learning curve due to more complex architecture
  • Requires newer kernel versions for full feature support
  • May be overkill for simpler container networking use cases

Code Comparison

Cilium (Go):

func (d *Daemon) compileBase() error {
    opts := bpf.CommonOptions{
        Debug:         option.Config.BPFDebug,
        TargetArch:    option.Config.BPFTargetArch,
        BPFRoot:       option.Config.BPFRoot,
        StateDir:      option.Config.StateDir,
    }
    // ... (additional code)
}

Libnetwork (Go):

func (c *controller) NewNetwork(networkType, name string, id string, options ...NetworkOption) (Network, error) {
    if !config.IsValidName(name) {
        return nil, ErrInvalidName(name)
    }
    // ... (additional code)
}

Both projects use Go, but Cilium's code reflects its focus on eBPF and advanced networking features, while Libnetwork's code is more centered on basic network management for Docker containers.

Calico — Cloud native networking and network security (6,557 stars)

Pros of Calico

  • Advanced network policy capabilities, offering fine-grained control over network traffic
  • Better scalability for large clusters, particularly in cloud environments
  • Supports multiple data planes (e.g., Linux eBPF, standard Linux networking)

Cons of Calico

  • More complex setup and configuration compared to libnetwork
  • Requires additional components and resources to run effectively
  • May have a steeper learning curve for newcomers to container networking

Code Comparison

Calico (network policy example):

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tcp-6379
spec:
  selector: app == 'database'
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 6379

libnetwork, driven through Docker's Go API client (network creation example):

bridgeNetwork, err := client.NetworkCreate(context.Background(), "my-bridge-network", types.NetworkCreate{
    Driver: "bridge",
})

Calico offers more advanced network policy definitions, while libnetwork provides simpler network creation and management within Docker. Calico's approach is more suitable for complex, multi-node environments, whereas libnetwork is often sufficient for single-host Docker setups or simpler network configurations.

Weave — Simple, resilient multi-host containers networking and more (6,626 stars)

Pros of Weave

  • Provides a virtual network that connects Docker containers across multiple hosts
  • Offers automatic IP address allocation and service discovery
  • Supports encryption for secure communication between containers

Cons of Weave

  • Can be more complex to set up and manage compared to libnetwork
  • May introduce additional overhead due to its overlay network approach
  • Requires separate installation and configuration outside of Docker

Code Comparison

Weave network creation:

weave launch
eval $(weave env)
docker run --net=weave ...

libnetwork network creation:

docker network create -d bridge mynetwork
docker run --network=mynetwork ...

Both Weave and libnetwork aim to provide networking solutions for containerized environments, but they differ in their approach and integration with Docker. Weave offers a more feature-rich solution with multi-host networking capabilities, while libnetwork is more tightly integrated with Docker and provides simpler setup for basic networking needs. The choice between the two depends on specific requirements, such as multi-host networking, ease of use, and desired level of integration with Docker.

flannel — a network fabric for containers, designed for Kubernetes (9,168 stars)

Pros of flannel

  • Simpler setup and configuration for Kubernetes networking
  • Better support for multi-host overlay networks
  • More lightweight and focused specifically on container networking

Cons of flannel

  • Less flexible for non-Kubernetes environments
  • Limited advanced networking features compared to libnetwork
  • Smaller community and ecosystem

Code Comparison

flannel configuration example:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

libnetwork, driven through Docker's Go API client (overlay network creation example):

resp, err := client.NetworkCreate(context.Background(), "mynet", types.NetworkCreate{
    Driver: "overlay",
    IPAM: &network.IPAM{
        Config: []network.IPAMConfig{{Subnet: "10.0.0.0/24"}},
    },
})

Both projects aim to provide networking solutions for containerized environments, but they have different focuses. flannel is designed specifically for Kubernetes networking, offering a simpler setup and better support for multi-host overlay networks. libnetwork, on the other hand, is more flexible and feature-rich, making it suitable for a wider range of container networking scenarios beyond Kubernetes. The code examples demonstrate the difference in configuration complexity between the two projects.


README

Warning: libnetwork was moved to https://github.com/moby/moby/tree/master/libnetwork

libnetwork has been merged into the main Moby repository since Docker 22.06.

The old libnetwork repo (https://github.com/moby/libnetwork) now only accepts PRs for Docker 20.10, and will be archived after the EOL of Docker 20.10.


libnetwork - networking for containers


Libnetwork provides a native Go implementation for connecting containers.

The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.

Design

Please refer to the design document for more information.

Using libnetwork

There are many networking solutions available to suit a broad range of use-cases. libnetwork uses a driver / plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent Network Model to users.

import (
	"fmt"
	"log"

	"github.com/docker/docker/pkg/reexec"
	"github.com/docker/libnetwork"
	"github.com/docker/libnetwork/config"
	"github.com/docker/libnetwork/netlabel"
	"github.com/docker/libnetwork/options"
)

func main() {
	if reexec.Init() {
		return
	}

	// Select and configure the network driver
	networkType := "bridge"

	// Create a new controller instance
	driverOptions := options.Generic{}
	genericOption := make(map[string]interface{})
	genericOption[netlabel.GenericData] = driverOptions
	controller, err := libnetwork.New(config.OptionDriverConfig(networkType, genericOption))
	if err != nil {
		log.Fatalf("libnetwork.New: %s", err)
	}

	// Create a network for containers to join.
	// NewNetwork accepts Variadic optional arguments that libnetwork and Drivers can use.
	network, err := controller.NewNetwork(networkType, "network1", "")
	if err != nil {
		log.Fatalf("controller.NewNetwork: %s", err)
	}

	// For each new container: allocate IP and interfaces. The returned network
	// settings will be used for container infos (inspect and such), as well as
	// iptables rules for port publishing. This info is contained or accessible
	// from the returned endpoint.
	ep, err := network.CreateEndpoint("Endpoint1")
	if err != nil {
		log.Fatalf("network.CreateEndpoint: %s", err)
	}

	// Create the sandbox for the container.
	// NewSandbox accepts Variadic optional arguments which libnetwork can use.
	sbx, err := controller.NewSandbox("container1",
		libnetwork.OptionHostname("test"),
		libnetwork.OptionDomainname("docker.io"))
	if err != nil {
		log.Fatalf("controller.NewSandbox: %s", err)
	}

	// A sandbox can join the endpoint via the join api.
	err = ep.Join(sbx)
	if err != nil {
		log.Fatalf("ep.Join: %s", err)
	}

	// libnetwork client can check the endpoint's operational data via the Info() API
	epInfo, err := ep.DriverInfo()
	if err != nil {
		log.Fatalf("ep.DriverInfo: %s", err)
	}

	macAddress, ok := epInfo[netlabel.MacAddress]
	if !ok {
		log.Fatalf("failed to get mac address from endpoint info")
	}

	fmt.Printf("Joined endpoint %s (%s) to sandbox %s (%s)\n", ep.Name(), macAddress, sbx.ContainerID(), sbx.Key())
}

Contributing

Want to hack on libnetwork? Docker's contribution guidelines apply.

Copyright and license

Code and documentation copyright 2015 Docker, Inc. Code released under the Apache 2.0 license. Docs released under Creative Commons.