Top Related Projects
Ingress NGINX Controller for Kubernetes
The Cloud Native Application Proxy
Connect, secure, control, and observe services.
:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.
Quick Overview
The nginx/kubernetes-ingress repository is an NGINX Ingress Controller for Kubernetes. It allows you to use NGINX as a load balancer and ingress controller in Kubernetes environments, providing advanced traffic management, security features, and integration with the Kubernetes ecosystem.
Pros
- Highly performant and scalable, leveraging NGINX's efficient architecture
- Supports advanced traffic routing, SSL/TLS termination, and content-based routing
- Integrates well with Kubernetes ecosystem and supports various annotations for fine-grained control
- Offers both open-source and commercial versions with additional features
Cons
- Configuration can be complex for advanced use cases
- May require deeper understanding of NGINX concepts for optimal usage
- Limited built-in monitoring capabilities compared to some alternatives
- Documentation could be more comprehensive for some advanced features
Code Examples
- Basic Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
This example defines a basic Ingress resource that routes traffic for example.com to the example-service Service.
- HTTPS redirect:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.org/redirect-to-https: "true"
  name: https-redirect
spec:
  # ... (rest of the Ingress spec)
This annotation makes the controller redirect all HTTP traffic for the hosts in this Ingress to HTTPS. (Note that this project uses nginx.org annotations rather than the nginx.ingress.kubernetes.io annotations of the community ingress-nginx controller.)
- Rate limiting:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.org/limit-req-rate: "10r/s"
  name: rate-limit
spec:
  # ... (rest of the Ingress spec)
This example limits requests to 10 per second for the services referenced by the Ingress resource.
Getting Started
- Install NGINX Ingress Controller using the Helm chart or the Kubernetes manifests from the nginx/kubernetes-ingress repository; see the installation docs for the latest stable release (a Helm sketch is shown after this list).
- Create an Ingress resource (save it as example-ingress.yaml):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
- Apply the Ingress resource:
kubectl apply -f example-ingress.yaml
- Access your application using the Ingress IP or hostname.
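For the install step, a minimal Helm-based sketch is shown below. The OCI chart path and the nginx-ingress namespace are assumptions based on recent releases; confirm the chart location and version against the installation docs for the release you use.
# Minimal Helm install sketch.
# The chart path and namespace below are assumptions based on recent releases;
# verify them against the installation docs for your version.
helm install nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress \
  --namespace nginx-ingress \
  --create-namespace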
Competitor Comparisons
Ingress NGINX Controller for Kubernetes
Pros of ingress-nginx
- More widely adopted and community-supported
- Extensive documentation and active development
- Better integration with Kubernetes ecosystem
Cons of ingress-nginx
- Higher resource consumption
- Steeper learning curve for advanced configurations
- Less optimized for NGINX-specific features
Code Comparison
ingress-nginx:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
kubernetes-ingress:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: example-virtualserver
spec:
  host: example.com
  upstreams:
  - name: example-app
    service: example-service
    port: 80
  routes:
  - path: /
    action:
      pass: example-app
The code examples show different approaches to configuring ingress. ingress-nginx uses the standard Kubernetes Ingress resource, while kubernetes-ingress introduces custom resources like VirtualServer for more NGINX-specific configurations.
The Cloud Native Application Proxy
Pros of Traefik
- More user-friendly configuration with automatic service discovery
- Built-in support for multiple providers (Docker, Kubernetes, etc.)
- Dynamic configuration updates without restarts
Cons of Traefik
- Less mature and battle-tested compared to NGINX
- Potentially higher resource usage in some scenarios
- Steeper learning curve for those familiar with NGINX
Code Comparison
Traefik configuration (YAML):
http:
  routers:
    my-router:
      rule: "Host(`example.com`)"
      service: my-service
  services:
    my-service:
      loadBalancer:
        servers:
        - url: "http://backend1:80"
        - url: "http://backend2:80"
NGINX Ingress configuration (YAML):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
Both Traefik and NGINX Ingress are popular choices for Kubernetes ingress controllers. Traefik offers easier configuration and built-in service discovery, while NGINX Ingress benefits from its maturity and widespread adoption. The choice between them often depends on specific project requirements and team expertise.
Connect, secure, control, and observe services.
Pros of Istio
- Offers a comprehensive service mesh solution with advanced traffic management, security, and observability features
- Provides automatic sidecar injection for easier deployment and management
- Supports multi-cluster and multi-cloud environments out of the box
Cons of Istio
- Higher complexity and steeper learning curve compared to simpler ingress solutions
- Requires more resources and can introduce additional latency due to its sidecar proxy architecture
- May be overkill for smaller applications or simpler use cases
Code Comparison
Istio (Virtual Service configuration):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service.example.com
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
Kubernetes-ingress (Nginx Ingress configuration):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.
Pros of kubernetes-ingress-controller
- More extensive API management capabilities, including rate limiting and authentication
- Built-in analytics and monitoring features
- Supports multiple protocols beyond HTTP, such as TCP and gRPC
Cons of kubernetes-ingress-controller
- Steeper learning curve due to additional features and complexity
- Potentially higher resource consumption compared to the NGINX ingress controller
- May require additional configuration for advanced features
Code Comparison
kubernetes-ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.org/rewrites: "serviceName=example-service rewrite=/"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /prefix
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
kubernetes-ingress-controller:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: example-ingress
route:
  protocols:
  - http
  - https
  methods:
  - GET
  - POST
  strip_path: true
  preserve_host: false
upstream:
  hash_on: none
  hash_fallback: none
  healthchecks:
    active:
      healthy:
        http_statuses:
        - 200
        interval: 5
        successes: 5
Both ingress controllers offer robust solutions for managing ingress traffic in Kubernetes clusters. The NGINX ingress controller is known for its simplicity and performance, while the Kong ingress controller provides more advanced features for API management and monitoring. The choice between the two depends on specific project requirements and the level of complexity needed in the ingress solution.
README
NGINX Ingress Controller
This repo provides an implementation of an Ingress Controller for NGINX and NGINX Plus from the people behind NGINX.
Join The Next Community Call
We value community input and would love to see you at the next community call. At these calls, we discuss PRs by community members as well as issues, discussions and feature requests.
Microsoft Teams Link: NIC - GitHub Issues Triage
Meeting ID: 298 140 979 789
Passcode: jpx5TM
Slack: Join our channel #nginx-ingress-controller on the NGINX Community Slack for updates and discussions.
When: 16:00 GMT / Convert to your timezone, every other Monday.
| Community Call Dates |
|---|
| 2025-01-13 |
| 2025-01-27 |
| 2025-02-10 |
| 2025-02-24 |
| 2025-03-11 |
| 2025-03-24 |
NGINX Ingress Controller works with both NGINX and NGINX Plus and supports the standard Ingress features - content-based routing and TLS/SSL termination.
Additionally, several NGINX and NGINX Plus features are available as extensions to the Ingress resource via annotations and the ConfigMap resource. In addition to HTTP, NGINX Ingress Controller supports load balancing WebSocket, gRPC, TCP and UDP applications. See the ConfigMap and Annotations docs to learn more about the supported features and customization options.
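As a sketch of the ConfigMap-based customization, the snippet below sets a few documented keys (proxy-connect-timeout, proxy-read-timeout, client-max-body-size). The ConfigMap name nginx-config and namespace nginx-ingress match the default manifests-based install and are assumptions here; they may differ for a Helm-based install.
# Sketch: global NGINX settings applied through the controller's ConfigMap.
# The name and namespace below match the default manifests-based install (assumption);
# they may differ for a Helm-based install.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "30s"
  proxy-read-timeout: "30s"
  client-max-body-size: "8m"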
As an alternative to the Ingress, NGINX Ingress Controller supports the VirtualServer and VirtualServerRoute resources. They enable use cases not supported with the Ingress resource, such as traffic splitting and advanced content-based routing. See VirtualServer and VirtualServerRoute resources doc.
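For example, a VirtualServer that splits traffic between two versions of a backend might look like the sketch below; the host, upstream, and Service names are illustrative.
# Sketch: weighted traffic splitting with the VirtualServer resource.
# Hostnames and Service names are illustrative.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee-v1
    service: coffee-v1-svc
    port: 80
  - name: coffee-v2
    service: coffee-v2-svc
    port: 80
  routes:
  - path: /coffee
    splits:
    - weight: 90
      action:
        pass: coffee-v1
    - weight: 10
      action:
        pass: coffee-v2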
TCP, UDP and TLS Passthrough load balancing is also supported. See the TransportServer resource doc.
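As a rough sketch, a TLS Passthrough TransportServer might look like the following; the host, Service name, and port are illustrative, and the apiVersion and built-in tls-passthrough listener name should be checked against the TransportServer docs for your release (TLS Passthrough also has to be enabled on the controller).
# Sketch: TLS Passthrough with the TransportServer resource.
# apiVersion and listener name follow the documented TLS Passthrough example,
# but verify them for your release; Service name and port are illustrative.
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: secure-app
spec:
  listener:
    name: tls-passthrough
    protocol: TLS_PASSTHROUGH
  host: app.example.com
  upstreams:
  - name: secure-app
    service: secure-app
    port: 8443
  action:
    pass: secure-app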
Read this doc to learn more about NGINX Ingress Controller with NGINX Plus.
Note
This project is different from the NGINX Ingress Controller in kubernetes/ingress-nginx repo. See this doc to find out about the key differences.
Ingress and Ingress Controller
What is the Ingress?
The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.
The Ingress resource supports the following features:
- Content-based routing:
  - Host-based routing. For example, routing requests with the host header foo.example.com to one group of services and the host header bar.example.com to another group.
  - Path-based routing. For example, routing requests with the URI that starts with /serviceA to service A and requests with the URI that starts with /serviceB to service B.
- TLS/SSL termination for each hostname, such as foo.example.com.
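A single Ingress combining these features (host-based routing, path-based routing, and TLS termination) might look like the sketch below; the hostnames, the TLS Secret, and the Service names are illustrative.
# Sketch: host- and path-based routing with TLS termination in one Ingress.
# Hostnames, the TLS Secret, and Service names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routing
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - foo.example.com
    secretName: foo-tls-secret
  rules:
  - host: foo.example.com        # host-based routing
    http:
      paths:
      - path: /serviceA          # path-based routing
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 80
      - path: /serviceB
        pathType: Prefix
        backend:
          service:
            name: service-b
            port:
              number: 80
  - host: bar.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bar-service
            port:
              number: 80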
See the Ingress User Guide to learn more about the Ingress resource.
What is the Ingress Controller?
The Ingress Controller is an application that runs in a cluster and configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress Controller implementations.
In the case of NGINX, the Ingress Controller is deployed in a pod along with the load balancer.
Getting Started
Note
All documentation should only be used with the latest stable release, indicated on the releases page of the GitHub repository.
- Install NGINX Ingress Controller using the Helm chart or the Kubernetes manifests (a quick verification sketch follows this list).
- Configure load balancing for a simple web application:
- Use the Ingress resource. See the Cafe example.
- Or the VirtualServer resource. See the Basic configuration example.
- See additional configuration examples.
- Learn more about all available configuration and customization in the docs.
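After installing, a quick way to confirm the controller is running is to check its pods and service. The nginx-ingress namespace below is the default for the manifests-based install and is an assumption here; it may differ depending on how you installed.
# Check that the controller pod is running and find the external address.
# The nginx-ingress namespace is the manifests-based default (assumption).
kubectl get pods --namespace=nginx-ingress
kubectl get svc --namespace=nginx-ingress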
NGINX Ingress Controller Releases
We publish NGINX Ingress Controller releases on GitHub. See our releases page.
The latest stable release is 4.0.0. For production use, we recommend that you choose the latest stable release.
The edge version is useful for experimenting with new features that are not yet published in a stable release. To use it, choose the edge version built from the latest commit from the main branch.
To use NGINX Ingress Controller, you need to have access to:
- An NGINX Ingress Controller image.
- Installation manifests or a Helm chart.
- Documentation and examples.
It is important that the versions of these components match.
The table below summarizes the options for the images, Helm chart, manifests, documentation and examples, and gives you links to the correct versions:
| Version | Description | Image for NGINX | Image for NGINX Plus | Installation Manifests and Helm Chart | Documentation and Examples |
|---|---|---|---|---|---|
| Latest stable release | For production use | Use the 4.0.0 images from DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io, or build your own image. | Use the 4.0.0 images from the F5 Container Registry, or build your own image. | Manifests. Helm chart. | Documentation. Examples. |
| Edge/Nightly | For testing and experimenting | Use the edge or nightly images from DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io, or build your own image. | Build your own image. | Manifests. Helm chart. | Documentation. Examples. |
SBOM (Software Bill of Materials)
We generate SBOMs for the binaries and the Docker images.
Binaries
The SBOMs for the binaries are available in the releases page. The SBOMs are generated using syft and are available in SPDX format.
Docker Images
The SBOMs for the Docker images are available in the DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io repositories. The SBOMs are generated using syft and stored as an attestation in the image manifest.
For example, to retrieve the SBOM for linux/amd64 from Docker Hub and analyze it using grype, you can run the following command:
docker buildx imagetools inspect nginx/nginx-ingress:edge --format '{{ json (index .SBOM "linux/amd64").SPDX }}' | grype
Contacts
We'd like to hear your feedback! If you have any suggestions or experience issues with our Ingress Controller, please create an issue or send a pull request on GitHub. You can contact us directly via NGINX Community Slack.
Contributing
If you'd like to contribute to the project, please read our Contributing guide.
Support
For NGINX Plus customers, NGINX Ingress Controller (when used with NGINX Plus) is covered by the support contract.