Quick Overview
Hubble is an observability and network troubleshooting tool for Cilium-managed Kubernetes clusters. It provides deep visibility into the network and security layer of Kubernetes, allowing users to observe service dependencies and network flows in real time.
Pros
- Provides detailed network flow visibility and service dependency mapping
- Integrates seamlessly with Cilium for enhanced Kubernetes network observability
- Offers a user-friendly UI and CLI for easy access to network data
- Supports eBPF-based monitoring for high-performance, low-overhead observation
Cons
- Requires Cilium to be installed and configured in the Kubernetes cluster
- May have a learning curve for users unfamiliar with eBPF and Cilium concepts
- Limited functionality in non-Cilium environments
- Resource overhead may be noticeable in very large clusters
Getting Started
To get started with Hubble, follow these steps:
- Ensure Cilium is installed and configured in your Kubernetes cluster.
- Enable Hubble in your Cilium installation:
helm upgrade cilium cilium/cilium --version 1.13.0 \
--namespace kube-system \
--reuse-values \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
- Install the Hubble CLI:
export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin
rm hubble-linux-amd64.tar.gz{,.sha256sum}
- Access Hubble UI:
kubectl port-forward -n kube-system svc/hubble-ui 12000:80
Then open http://localhost:12000 in your browser.
- Use Hubble CLI to observe network flows:
hubble observe
This will provide a real-time view of the network flows in your cluster.
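Once flows are streaming, the output can be narrowed with filters. A minimal sketch using flags that also appear later in this README:
# flows from the last five minutes in a single namespace
hubble observe --namespace kube-system --since 5m
# only traffic that was dropped
hubble observe --verdict DROPPED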
Competitor Comparisons
Cloud Native Runtime Security
Pros of Falco
- Broader security focus, covering system-wide behavior analysis
- Extensive rule set for detecting various security threats
- Flexible output options for alerts (Syslog, files, programs)
Cons of Falco
- Higher resource consumption due to system-wide monitoring
- Steeper learning curve for creating custom rules
- Less integrated with network-specific observability
Code Comparison
Falco rule example:
- rule: Unauthorized Process
  desc: Detect unauthorized process execution
  condition: spawned_process and not proc.name in (allowed_processes)
  output: "Unauthorized process %proc.name started"
  priority: WARNING
Cilium network policy example (the kind of policy whose enforcement Hubble observes):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-policy"
spec:
  endpointSelector:
    matchLabels:
      app: myapp
  ingress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"
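As a brief usage note: assuming the manifest above is saved as l7-policy.yaml (filename chosen for illustration), it is applied like any other Kubernetes resource, and Hubble can then be used to watch the verdicts it produces:
# apply the Cilium L7 policy (filename assumed)
kubectl apply -f l7-policy.yaml
# observe traffic dropped as a result of policy enforcement
hubble observe --verdict DROPPED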
While Falco focuses on system-wide security rules, the Cilium network policies that Hubble observes are network-centric, reflecting the two projects' different scopes in Kubernetes environments.
Monitoring, visualisation & management for Docker & Kubernetes
Pros of Scope
- More comprehensive visualization of the entire application topology, including containers, processes, and network connections
- Easier to set up and use for general-purpose monitoring and troubleshooting
- Provides a more user-friendly web interface for exploring and interacting with the visualized data
Cons of Scope
- Less focused on network-specific observability and security features
- May have higher resource overhead due to its broader scope of monitoring
- Not as tightly integrated with Kubernetes network policies and eBPF technology
Code Comparison
Hubble (Go):
func (s *Server) GetFlows(req *observerpb.GetFlowsRequest, server observerpb.Observer_GetFlowsServer) error {
// Implementation for retrieving and streaming flow data
}
Scope (JavaScript):
App.prototype.getTopologies = function() {
return this.props.topologies;
};
The code snippets highlight the different focus areas of the two projects. Hubble's code is centered around observing network flows, while Scope's code deals with managing and presenting topology data for various components of the application stack.
Linux Runtime Security and Forensics using eBPF
Pros of Tracee
- Broader scope: Tracee offers system-wide tracing and security observability, not limited to network traffic
- More flexible deployment: Can be used in various environments, not just Kubernetes clusters
- Richer event data: Captures detailed system calls and events, providing deeper insights into system behavior
Cons of Tracee
- Higher resource overhead: System-wide tracing can be more resource-intensive than network-focused monitoring
- Steeper learning curve: Requires more in-depth knowledge of system internals to effectively use and interpret data
- Less network-centric: May not provide as detailed network flow analysis as Hubble
Code Comparison
Tracee (eBPF program):
SEC("tracepoint/syscalls/sys_enter_execve")
int tracepoint__syscalls__sys_enter_execve(struct trace_event_raw_sys_enter* ctx)
{
// Event handling code
}
Hubble (Go code for flow processing):
func (p *Parser) ProcessFlow(flow *pb.Flow) *v1.Event {
// Flow processing logic
}
Both projects use eBPF technology, but Tracee focuses on system-wide tracing while Hubble specializes in network flow monitoring. The code snippets reflect their different focuses, with Tracee handling system call tracepoints and Hubble processing network flows.
The fastest path to AI-powered full stack observability, even for lean teams.
Pros of Netdata
- Comprehensive system monitoring with a wide range of metrics
- User-friendly web interface for real-time visualization
- Extensive plugin system for easy extensibility
Cons of Netdata
- Higher resource consumption due to its comprehensive monitoring approach
- Less focused on network-specific observability compared to Hubble
Code Comparison
Netdata configuration example:
[global]
update every = 1
memory mode = ram
history = 3600
Hubble configuration example:
metrics:
  enabled: true
  port: 9091
server:
  listen-address: ":4244"
Key Differences
- Netdata is a general-purpose system monitoring tool, while Hubble focuses on network observability for Kubernetes environments
- Netdata provides a rich web interface out-of-the-box, whereas Hubble relies on external visualization tools
- Hubble is tightly integrated with Cilium for advanced network policy enforcement, while Netdata offers broader system-wide monitoring capabilities
Use Cases
- Choose Netdata for comprehensive system monitoring across various metrics
- Opt for Hubble when deep network observability in Kubernetes environments is required, especially when using Cilium for network policy enforcement
Like Prometheus, but for logs.
Pros of Loki
- Designed for large-scale log aggregation and storage
- Integrates seamlessly with other Grafana ecosystem tools
- Supports multi-tenancy for better resource isolation
Cons of Loki
- Limited to log data, unlike Hubble's network-centric observability
- May require more complex setup for advanced querying capabilities
- Less focus on real-time network visibility compared to Hubble
Code Comparison
Loki query example:
rate({job="mysql"} |= "error" | json [5m]) > 0.5
Hubble query example:
hubble observe --type drop --verdict DROPPED
Key Differences
Loki is primarily focused on log aggregation and analysis, while Hubble specializes in network observability for Kubernetes environments. Loki offers broader log management capabilities, whereas Hubble provides deep insights into network flows and security policies.
Loki's querying language (LogQL) is more versatile for log analysis, while Hubble's CLI commands are tailored for network-specific observations. Hubble is tightly integrated with Cilium for enhanced Kubernetes networking features, while Loki is part of the broader Grafana observability stack.
README

Network, Service & Security Observability for Kubernetes
What is Hubble?
Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.
Hubble can answer questions such as:
Service dependencies & communication map:
- What services are communicating with each other? How frequently? What does the service dependency graph look like?
- What HTTP calls are being made? What Kafka topics does a service consume from or produce to?
Operational monitoring & alerting:
- Is any network communication failing? Why is communication failing? Is it DNS? Is it an application or network problem? Is the communication broken on layer 4 (TCP) or layer 7 (HTTP)?
- Which services have experienced DNS resolution problems in the last 5 minutes? Which services have experienced an interrupted TCP connection recently or have seen connections timing out? What is the rate of unanswered TCP SYN requests?
Application monitoring:
- What is the rate of 5xx or 4xx HTTP response codes for a particular service or across all clusters?
- What is the 95th and 99th percentile latency between HTTP requests and responses in my cluster? Which services are performing the worst? What is the latency between two services?
Security observability:
- Which services had connections blocked due to network policy? What services have been accessed from outside the cluster? Which services have resolved a particular DNS name?
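The network policy question above, for example, maps directly onto Hubble CLI filters; a minimal sketch using flags shown later in this README:
# connections blocked by network policy in the last five minutes
hubble observe --since 5m --type drop --verdict DROPPED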
Why Hubble?
The Linux kernel technology eBPF enables visibility into systems and applications at a granularity and efficiency that was not possible before. It does so in a completely transparent way, without requiring the application to change and without the application being able to hide information. By building on top of Cilium, Hubble can leverage eBPF for visibility. Because that visibility is programmable, it allows a dynamic approach that minimizes overhead while providing deep and detailed insight where required. Hubble has been created and specifically designed to make the best use of these eBPF capabilities.
Releases
The Hubble CLI is backward compatible with all supported Cilium releases. For this reason, only the latest Hubble CLI version is maintained.
Version | Release Date | Maintained | Supported Cilium Version | Artifacts |
---|---|---|---|---|
v1.17 | 2025-06-23 (v1.17.5) | Yes | Cilium 1.17 and older | GitHub Release |
Component Stability
The Hubble project consists of several components (see the Architecture section).
While the core Hubble components have been running in production in multiple environments, new components continue to emerge as the project grows and expands in scope.
Some components, due to their relatively young age, are still considered beta and have to be used with caution in critical production workloads.
Component | Area | State |
---|---|---|
Hubble CLI | Core | Stable |
Hubble Server | Core | Stable |
Hubble Metrics | Core | Stable |
Hubble Relay | Multinode | Stable |
Hubble UI | UI | Beta |
Architecture
Getting Started
Features
Service Dependency Graph
Troubleshooting connectivity in a microservices application is a challenging task. Simply looking at "kubectl get pods" does not reveal the dependencies between services, or on external APIs and databases.
Hubble enables zero-effort, automatic discovery of the service dependency graph for Kubernetes clusters at L3/L4 and even L7, allowing user-friendly visualization and filtering of those data flows as a Service Map.
See Hubble Service Map Tutorial for more examples.
Metrics & Monitoring
The metrics and monitoring functionality provides an overview of the state of your systems and allows you to recognize patterns that indicate failure and other scenarios requiring action. The following is a short list of example metrics; for a more detailed list, see the Metrics Documentation. A sketch of how these metrics are typically enabled follows the list.
Networking Behavior
Network Policy Observation
HTTP Request/Response Rate & Latency
DNS Request/Response Monitoring
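These metrics are exposed by the Hubble server built into the Cilium agent and are typically switched on through the Cilium Helm chart. A hedged sketch, assuming the hubble.metrics.enabled Helm value and the metric names below match your Cilium version (verify against the Cilium documentation):
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}"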
Flow Visibility
Flow visibility provides visibility into flow information on the network and application protocol level. This enables visibility into individual TCP connections, DNS queries, HTTP requests, Kafka communication, and much more.
DNS Resolution
Identifying pods which have received a DNS response indicating failure:
hubble observe --since=1m -t l7 -o json \
| jq 'select(.l7.dns.rcode==3) | .destination.namespace + "/" + .destination.pod_name' \
| sort | uniq -c | sort -r
42 "starwars/jar-jar-binks-6f5847c97c-qmggv"
Successful query & response:
starwars/x-wing-bd86d75c5-njv8k kube-system/coredns-5c98db65d4-twwdg DNS Query deathstar.starwars.svc.cluster.local. A
kube-system/coredns-5c98db65d4-twwdg starwars/x-wing-bd86d75c5-njv8k DNS Answer "10.110.126.213" TTL: 3 (Query deathstar.starwars.svc.cluster.local. A)
Non-existent domain:
starwars/jar-jar-binks-789c4b695d-ltrzm kube-system/coredns-5c98db65d4-f4m8n DNS Query unknown-galaxy.svc.cluster.local. A
starwars/jar-jar-binks-789c4b695d-ltrzm kube-system/coredns-5c98db65d4-f4m8n DNS Query unknown-galaxy.svc.cluster.local. AAAA
kube-system/coredns-5c98db65d4-twwdg starwars/jar-jar-binks-789c4b695d-ltrzm DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. A)
kube-system/coredns-5c98db65d4-twwdg starwars/jar-jar-binks-789c4b695d-ltrzm DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. AAAA)
HTTP Protocol
Successful request & response with latency information:
starwars/x-wing-bd86d75c5-njv8k:53410 starwars/deathstar-695d8f7ddc-lvj84:80 HTTP/1.1 GET http://deathstar/
starwars/deathstar-695d8f7ddc-lvj84:80 starwars/x-wing-bd86d75c5-njv8k:53410 HTTP/1.1 200 1ms (GET http://deathstar/)
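The HTTP lines above are layer 7 flow events. A hedged way to request only this kind of traffic from the CLI, assuming the --protocol flag is available in your Hubble CLI version:
# show only L7 HTTP flows
hubble observe -t l7 --protocol http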
TCP/UDP Packets
Successful TCP connection:
starwars/x-wing-bd86d75c5-njv8k:53410 starwars/deathstar-695d8f7ddc-lvj84:80 TCP Flags: SYN
deathstar.starwars.svc.cluster.local:80 starwars/x-wing-bd86d75c5-njv8k:53410 TCP Flags: SYN, ACK
starwars/x-wing-bd86d75c5-njv8k:53410 starwars/deathstar-695d8f7ddc-lvj84:80 TCP Flags: ACK, FIN
deathstar.starwars.svc.cluster.local:80 starwars/x-wing-bd86d75c5-njv8k:53410 TCP Flags: ACK, FIN
Connection timeout:
starwars/r2d2-6694d57947-xwhtz:60948 deathstar.starwars.svc.cluster.local:8080 TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948 deathstar.starwars.svc.cluster.local:8080 TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948 deathstar.starwars.svc.cluster.local:8080 TCP Flags: SYN
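Repeated SYNs with no answer, as above, typically point to an unreachable backend or silently dropped packets. A hedged sketch for isolating such traffic, assuming the --protocol and --port flags are available in your Hubble CLI version (the port value comes from the example above):
# TCP flows involving port 8080 over the last five minutes
hubble observe --since 5m --protocol tcp --port 8080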
Network Policy Behavior
Denied connection attempt:
starwars/enterprise-5775b56c4b-thtwl:37800 starwars/deathstar-695d8f7ddc-lvj84:80(http) Policy denied (L3) TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800 starwars/deathstar-695d8f7ddc-lvj84:80(http) Policy denied (L3) TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800 starwars/deathstar-695d8f7ddc-lvj84:80(http) Policy denied (L3) TCP Flags: SYN
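Policy verdicts like these can be filtered directly; a minimal sketch reusing flags from the raw-filter example later in this section (the namespace comes from the example above):
# dropped flows in the starwars namespace
hubble observe -t drop -n starwars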
Specifying Raw Flow Filters
Hubble supports an extensive set of filtering options that can be specified as a combination of an allowlist and a denylist. Hubble applies these filters as follows:
for each flow:
    if flow does not match any of the allowlist filters:
        continue
    if flow matches any of the denylist filters:
        continue
    send flow to client
You can pass these filters to the hubble observe command as JSON-encoded FlowFilters. For example, to observe flows that match the following conditions:
- Either the source or destination identity contains the k8s:io.kubernetes.pod.namespace=kube-system or reserved:host label, AND
- Neither the source nor destination identity contains the k8s:k8s-app=kube-dns label:
hubble observe \
  --allowlist '{"source_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}' \
  --allowlist '{"destination_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}' \
  --denylist '{"source_label":["k8s:k8s-app=kube-dns"]}' \
  --denylist '{"destination_label":["k8s:k8s-app=kube-dns"]}'
Alternatively, you can specify these filters via the HUBBLE_ALLOWLIST and HUBBLE_DENYLIST environment variables:
cat > allowlist.txt <<EOF
{"source_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}
{"destination_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}
EOF
cat > denylist.txt <<EOF
{"source_label":["k8s:k8s-app=kube-dns"]}
{"destination_label":["k8s:k8s-app=kube-dns"]}
EOF
HUBBLE_ALLOWLIST=$(cat allowlist.txt)
HUBBLE_DENYLIST=$(cat denylist.txt)
export HUBBLE_ALLOWLIST
export HUBBLE_DENYLIST
hubble observe
Note that --allowlist and --denylist filters get included in the request in addition to regular flow filters like --label or --namespace. Use the --print-raw-filters flag to verify the exact filters that the Hubble CLI generates. For example:
% hubble observe --print-raw-filters \
-t drop \
-n kube-system \
--not --label "k8s:k8s-app=kube-dns" \
--allowlist '{"source_label":["k8s:k8s-app=my-app"]}'
allowlist:
- '{"source_pod":["kube-system/"],"event_type":[{"type":1}]}'
- '{"destination_pod":["kube-system/"],"event_type":[{"type":1}]}'
- '{"source_label":["k8s:k8s-app=my-app"]}'
denylist:
- '{"source_label":["k8s:k8s-app=kube-dns"]}'
- '{"destination_label":["k8s:k8s-app=kube-dns"]}'
The output YAML can be saved to a file and passed to the hubble observe command with the --config flag. For example:
% hubble observe --print-raw-filters --allowlist '{"source_label":["k8s:k8s-app=my-app"]}' > filters.yaml
% hubble observe --config ./filters.yaml
Community
Join the Cilium Slack #hubble channel to chat with Cilium Hubble developers and other Cilium / Hubble users. This is a good place to learn about Hubble and Cilium, ask questions, and share your experiences.
Learn more about Cilium.
Authors
Hubble is an open source project licensed under the Apache License. Everybody is welcome to contribute. The project is following the Governance Rules of the Cilium project. See CONTRIBUTING for instructions on how to contribute and details of the Code of Conduct.