envoyproxy/envoy

Cloud-native high-performance edge/middle/service proxy

Top Related Projects

  • istio/istio (35,688 stars): Connect, secure, control, and observe services.
  • traefik/traefik (50,061 stars): The Cloud Native Application Proxy
  • nginx/nginx (22,139 stars): The official NGINX Open Source repository.
  • haproxy/haproxy (4,812 stars): HAProxy Load Balancer's development branch (mirror of git.haproxy.org)
  • Kong/kong (38,788 stars): 🦍 The Cloud-Native API Gateway and AI Gateway.
  • linkerd/linkerd2 (10,567 stars): Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.

Quick Overview

Envoy is a high-performance, open-source edge and service proxy designed for cloud-native applications. It provides a robust set of features for traffic management, observability, and security, making it an essential component in modern microservices architectures.

Pros

  • Highly extensible and configurable
  • Excellent performance and low latency
  • Strong support for modern protocols (HTTP/2, gRPC)
  • Comprehensive observability features (metrics, tracing, logging)

Cons

  • Steep learning curve for beginners
  • Complex configuration for advanced use cases
  • Limited built-in UI for management and visualization
  • Resource-intensive for small-scale deployments

Code Examples

  1. Basic HTTP proxy configuration:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service_backend
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend-service
                port_value: 8080

This example configures Envoy as a basic HTTP proxy, routing traffic to a backend service.

  2. Circuit breaker configuration:
clusters:
- name: service_backend
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  circuit_breakers:
    thresholds:
      - priority: DEFAULT
        max_connections: 1000
        max_pending_requests: 1000
        max_requests: 1000
        max_retries: 3
  load_assignment:
    cluster_name: service_backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: backend-service
              port_value: 8080

This example adds circuit breaker configuration to the backend service cluster.
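
Circuit breakers cap concurrent load; a common companion in Envoy clusters is outlier detection, which temporarily ejects hosts that keep failing. A minimal sketch of the extra cluster fields (the threshold values here are illustrative, not recommendations):

```yaml
  outlier_detection:
    consecutive_5xx: 5          # eject a host after 5 consecutive 5xx responses
    interval: 10s               # how often the ejection sweep runs
    base_ejection_time: 30s     # ejection duration, multiplied by the ejection count
    max_ejection_percent: 50    # never eject more than half of the cluster's hosts
```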

  3. Rate limiting configuration:
rate_limit_service:
  grpc_service:
    envoy_grpc:
      cluster_name: rate_limit_cluster
  transport_api_version: V3

This snippet configures Envoy to use a gRPC-based rate limiting service.
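
On its own, that block only names the backing service; limits are enforced by also adding the envoy.filters.http.ratelimit HTTP filter ahead of the router. A sketch, assuming the rate_limit_cluster from the snippet above and a hypothetical domain name ingress_http:

```yaml
http_filters:
- name: envoy.filters.http.ratelimit
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
    domain: ingress_http        # must match a domain known to the rate limit service
    rate_limit_service:
      grpc_service:
        envoy_grpc:
          cluster_name: rate_limit_cluster
      transport_api_version: V3
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```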

Getting Started

To get started with Envoy:

  1. Install Envoy (e.g., using Docker):

    docker pull envoyproxy/envoy:v1.24-latest
    
  2. Create a basic configuration file (e.g., envoy.yaml) with the desired settings.

  3. Run Envoy with the configuration:

    docker run -d --name envoy -p 9901:9901 -p 8080:8080 -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy:v1.24-latest
    
  4. Access the Envoy admin interface at http://localhost:9901 for monitoring and management.
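
Step 4 assumes the admin interface is enabled in envoy.yaml; a minimal admin block looks like the following (binding to 0.0.0.0 exposes the admin port to the network, so restrict it in production):

```yaml
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
```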

Competitor Comparisons

istio/istio

Connect, secure, control, and observe services.

Pros of Istio

  • Provides a complete service mesh solution with advanced traffic management, security, and observability features
  • Offers a higher-level abstraction for managing microservices, simplifying complex deployments
  • Includes built-in support for multi-cluster and multi-cloud environments

Cons of Istio

  • Higher complexity and steeper learning curve compared to Envoy
  • Requires more resources and can introduce additional overhead
  • May be overkill for smaller or less complex applications

Code Comparison

Envoy configuration example:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080

Istio configuration example:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service

Envoy focuses on low-level proxy configuration, while Istio provides higher-level abstractions for service mesh management. Istio uses Envoy as its data plane proxy but adds control plane functionality and additional features on top of it. Envoy is more lightweight and flexible, suitable for various use cases, while Istio is specifically designed for complex microservices architectures in Kubernetes environments.

traefik/traefik

The Cloud Native Application Proxy

Pros of Traefik

  • Easier to configure and use, especially for beginners
  • Built-in automatic HTTPS with Let's Encrypt integration
  • Dynamic configuration updates without restarts

Cons of Traefik

  • Less performant than Envoy for high-traffic scenarios
  • More limited in terms of advanced traffic management features
  • Smaller ecosystem and community compared to Envoy

Code Comparison

Traefik configuration (YAML):

http:
  routers:
    my-router:
      rule: "Host(`example.com`)"
      service: my-service
  services:
    my-service:
      loadBalancer:
        servers:
          - url: "http://backend1:8080"
          - url: "http://backend2:8080"

Envoy configuration (YAML):

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: backend_service
  clusters:
  - name: backend_service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: backend_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend1
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: backend2
                port_value: 8080

nginx/nginx

The official NGINX Open Source repository.

Pros of Nginx

  • Lightweight and efficient, consuming less memory and resources
  • Excellent static content serving and caching capabilities
  • Simpler configuration for basic use cases

Cons of Nginx

  • Less feature-rich for complex service mesh scenarios
  • Limited built-in observability and tracing capabilities
  • Not designed primarily for dynamic request handling and routing

Code Comparison

Nginx configuration example:

http {
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}

Envoy configuration example:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: backend_service

This comparison highlights the simplicity of Nginx configuration for basic proxying tasks, while showcasing Envoy's more detailed and flexible configuration options for advanced service mesh scenarios.

haproxy/haproxy

HAProxy Load Balancer's development branch (mirror of git.haproxy.org)

Pros of HAProxy

  • Mature and battle-tested, with a long history of production use
  • Excellent performance for Layer 4 (TCP) load balancing
  • Simpler configuration and lower resource usage for basic scenarios

Cons of HAProxy

  • Limited extensibility compared to Envoy's filter chain model
  • Less robust support for modern protocols like gRPC and HTTP/3
  • Fewer advanced traffic management features out-of-the-box

Code Comparison

HAProxy configuration example:

frontend http
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server server1 192.168.1.10:80 check
    server server2 192.168.1.11:80 check

Envoy configuration example:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: web_servers
  clusters:
  - name: web_servers
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: web_servers
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 192.168.1.10
                port_value: 80
        - endpoint:
            address:
              socket_address:
                address: 192.168.1.11
                port_value: 80

Kong/kong

🦍 The Cloud-Native API Gateway and AI Gateway.

Pros of Kong

  • More extensive plugin ecosystem with 100+ plugins available
  • Easier setup and configuration for basic use cases
  • Better suited for API management and gateway functionalities

Cons of Kong

  • Less performant than Envoy for high-throughput scenarios
  • More limited in terms of protocol support and advanced traffic management features
  • Less flexible for complex service mesh architectures

Code Comparison

Kong declarative configuration example (kong.yml); the original snippet here was an nginx.conf fragment, so this is replaced with Kong's own format for the same two-backend setup:

_format_version: "3.0"
services:
- name: backend-service
  host: backend-upstream
  routes:
  - name: backend-route
    paths:
    - /
upstreams:
- name: backend-upstream
  targets:
  - target: backend1.example.com:80
  - target: backend2.example.com:80

Envoy configuration example:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: backend_service
  clusters:
  - name: backend_service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: backend_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend1.example.com
                port_value: 80
        - endpoint:
            address:
              socket_address:
                address: backend2.example.com
                port_value: 80

linkerd/linkerd2

Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.

Pros of Linkerd2

  • Simpler to install and use, with a focus on ease of adoption
  • Lightweight and optimized for Kubernetes environments
  • Automatic mTLS encryption and identity-based security

Cons of Linkerd2

  • Less feature-rich compared to Envoy's extensive capabilities
  • Limited to Kubernetes environments, while Envoy is more versatile
  • Smaller community and ecosystem compared to Envoy

Code Comparison

Linkerd2 (Rust):

pub fn new_inbound(
    config: Config,
    local_identity: tls::Conditional<identity::Local>,
) -> impl svc::NewService<
    Target,
    Service = impl tower::Service<
        http::Request<Body>,
        Response = http::Response<Body>,
        Error = Error,
        Future = impl Send,
    > + Clone
    + Send,
> + Clone {
    // ...
}

Envoy (C++):

void Filter::onData(Buffer::Instance& data, bool end_stream) {
  ENVOY_CONN_LOG(trace, "processing {} bytes", read_callbacks_->connection(), data.length());
  upstream_request_->encodeData(data, end_stream);
}

The code snippets showcase different language choices and architectural approaches between the two projects.

README

Cloud-native high-performance edge/middle/service proxy

Envoy is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Envoy plays a role, read the CNCF announcement.

Contact

  • envoy-announce: Low frequency mailing list where we will email announcements only.
  • envoy-security-announce: Low frequency mailing list where we will email security related announcements only.
  • envoy-users: General user discussion.
  • envoy-dev: Envoy developer discussion (APIs, feature design, etc.).
  • envoy-maintainers: Use this list to reach all core Envoy maintainers.
  • Twitter: Follow along on Twitter!
  • Slack: to get invited, go here.
    • NOTE: Responses to user questions on Slack are best effort. For a "guaranteed" response, please email envoy-users@ per the guidance in the linked thread.

Please see this email thread for information on email list usage.

Contributing

Contributing to Envoy is fun, and modern C++ is a lot less scary than you might think if you don't have prior experience. To get started, see the contributing guide.

Community Meeting

The Envoy team has a scheduled meeting time twice per month on Tuesday at 9am PT. The public Google calendar is here. The meeting will only be held if there are agenda items listed in the meeting minutes. Any member of the community should be able to propose agenda items by adding to the minutes. The maintainers will either confirm the additions to the agenda, or will cancel the meeting within 24 hours of the scheduled date if there is no confirmed agenda.

Security

Security Audit

There have been several third-party engagements focused on Envoy security:

  • In 2018 Cure53 performed a security audit, full report.
  • In 2021 Ada Logics performed an audit on our fuzzing infrastructure with recommendations for improvements, full report.

Reporting security vulnerabilities

If you've found a vulnerability or a potential vulnerability in Envoy, please let us know at envoy-security. We'll send a confirmation email to acknowledge your report, and we'll send an additional email once we've determined whether the issue is valid.

For further details please see our complete security release process.

Releases

For further details please see our release process.