bcc
BCC - Tools for BPF-based Linux IO analysis, networking, monitoring, and more
Top Related Projects
- Cilium: eBPF-based Networking, Security, and Observability
- Falco: Cloud Native Runtime Security
- Tracee: Linux Runtime Security and Forensics using eBPF
- bpftrace: High-level tracing language for Linux
- Inspektor Gadget: a set of tools and framework for data collection and system inspection on Kubernetes clusters and Linux hosts using eBPF
Quick Overview
BCC (BPF Compiler Collection) is a toolkit for creating efficient kernel tracing and manipulation programs. It utilizes extended Berkeley Packet Filter (eBPF) technology to provide a powerful and flexible way to analyze system performance and behavior. BCC makes it easier for developers and system administrators to write eBPF programs in Python and other high-level languages.
Pros
- Provides high-performance, low-overhead system tracing and analysis
- Supports multiple programming languages, including Python, C++, and Lua
- Offers a rich set of tools and examples for various use cases
- Enables real-time insights into kernel and application behavior
Cons
- Requires root access or CAP_SYS_ADMIN capability to run most tools
- Has a steep learning curve, especially for those unfamiliar with eBPF
- May require kernel updates or patches for full functionality on older systems
- Documentation can be inconsistent or outdated for some features
Code Examples
- Tracing new processes:
from bcc import BPF

# BPF program
bpf_text = """
#include <uapi/linux/ptrace.h>
#include <linux/sched.h>

int hello(struct pt_regs *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
"""

# Load BPF program
b = BPF(text=bpf_text)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="hello")

# Print trace output
print("Tracing new processes... Ctrl+C to exit")
b.trace_print()
- Counting syscalls by process:
from bcc import BPF
from time import sleep

# BPF program
bpf_text = """
BPF_HASH(syscall_count, u32);

int count_syscalls(struct bpf_raw_tracepoint_args *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;  // upper 32 bits = process (tgid)
    syscall_count.increment(pid);
    return 0;
}
"""

# Load BPF program
b = BPF(text=bpf_text)
b.attach_raw_tracepoint(tp="sys_enter", fn_name="count_syscalls")

# Print results
try:
    while True:
        sleep(1)
        for k, v in b["syscall_count"].items():
            print(f"PID {k.value}: {v.value} syscalls")
        b["syscall_count"].clear()
except KeyboardInterrupt:
    pass
- Tracing TCP connections:
from bcc import BPF

# BPF program
bpf_text = """
#include <uapi/linux/ptrace.h>
#include <linux/in.h>
#include <net/sock.h>
#include <bcc/proto.h>

int trace_connect(struct pt_regs *ctx, struct sock *sk, struct sockaddr *uaddr) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    // At tcp_v4_connect() entry the socket's address fields are not yet
    // populated, so read the destination from the sockaddr argument.
    struct sockaddr_in *addr = (struct sockaddr_in *)uaddr;
    u32 daddr = addr->sin_addr.s_addr;
    u16 dport = addr->sin_port;
    bpf_trace_printk("PID %d connecting to %x:%d\\n", pid, ntohl(daddr), ntohs(dport));
    return 0;
}
"""

# Load BPF program
b = BPF(text=bpf_text)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")

# Print trace output
print("Tracing TCP connections... Ctrl+C to exit")
b.trace_print()
Getting Started
- Install BCC:
sudo apt-get install bpfcc-tools linux-headers-$(uname -r)
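A quick way to verify the installation is a minimal sketch of the hello-world pattern used throughout this page (run as root; it attaches a trivial program to the clone syscall and streams trace output):

from bcc import BPF

prog = """
int hello(void *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
"""
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()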
Competitor Comparisons
Cilium: eBPF-based Networking, Security, and Observability
Pros of Cilium
- Provides comprehensive network security and visibility for cloud-native environments
- Offers advanced load balancing and service mesh capabilities
- Integrates well with Kubernetes and other container orchestration platforms
Cons of Cilium
- Steeper learning curve due to its complexity and wide range of features
- May require more resources to run compared to simpler networking solutions
- Less flexible for general-purpose eBPF development outside of networking
Code Comparison
BCC example (Python):
from bcc import BPF

prog = """
int hello(void *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
"""
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()
Cilium example (Go):
package main

import (
    "log"

    "github.com/cilium/cilium/pkg/bpf"
)

func main() {
    bpffs := "/sys/fs/bpf"
    if err := bpf.MountFS(bpffs); err != nil {
        log.Fatal(err)
    }
}
While BCC focuses on eBPF programming and tracing, Cilium uses eBPF for networking and security in container environments. BCC provides a more general-purpose toolkit for eBPF development, while Cilium offers a specialized solution for cloud-native networking.
Falco: Cloud Native Runtime Security
Pros of Falco
- Focused on security monitoring and threat detection
- Provides out-of-the-box rules for common security scenarios
- Easier to set up and use for security-specific tasks
Cons of Falco
- Less flexible for general-purpose system tracing and analysis
- More limited in terms of customization and extensibility
- Smaller community and ecosystem compared to BCC
Code Comparison
Falco rule example:
- rule: Unauthorized Process
  desc: Detect unauthorized process execution
  condition: spawned_process and not proc.name in (allowed_processes)
  output: "Unauthorized process started (user=%user.name command=%proc.cmdline)"
  priority: WARNING
BCC Python script example:
from bcc import BPF

program = """
int hello(void *ctx) {
    bpf_trace_printk("Hello, World!\\n");
    return 0;
}
"""
b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()
Both Falco and BCC are powerful tools for system monitoring and analysis, but they serve different purposes. Falco is more specialized for security monitoring, while BCC offers greater flexibility for general system tracing and performance analysis.
Tracee: Linux Runtime Security and Forensics using eBPF
Pros of Tracee
- Focused on runtime security and threat detection in containers and cloud-native environments
- Provides out-of-the-box security rules and policies
- Easier to use for security-specific tasks without extensive programming knowledge
Cons of Tracee
- More limited in scope compared to BCC's general-purpose tracing capabilities
- Less flexibility for custom tracing and performance analysis tasks
- Smaller community and ecosystem compared to BCC
Code Comparison
Tracee example (using Tracee-Rules):
apiVersion: tracee.aquasec.com/v1beta1
kind: Policy
metadata:
  name: detect-suspicious-file-access
spec:
  rules:
    - name: suspicious-file-access
BCC example (using Python frontend):
from bcc import BPF

b = BPF(text="""
int trace_open(struct pt_regs *ctx, const char __user *filename)
{
    bpf_trace_printk("open file: %s\\n", filename);
    return 0;
}
""")
b.attach_kprobe(event=b.get_syscall_fnname("open"), fn_name="trace_open")
b.trace_print()
Both tools use eBPF for tracing, but Tracee focuses on predefined security rules, while BCC offers more flexibility for custom tracing scenarios across various use cases.
bpftrace: High-level tracing language for Linux
Pros of bpftrace
- Simpler, more concise syntax for quick one-liners and short scripts
- Built-in functions for common tasks, reducing boilerplate code
- Easier to learn and use for beginners in eBPF programming
Cons of bpftrace
- Less flexible for complex, large-scale programs
- Limited support for some advanced eBPF features
- Slower execution compared to compiled BCC programs
Code Comparison
bpftrace example:
bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }'
BCC example:
from bcc import BPF

prog = """
int trace_open(struct pt_regs *ctx, const char __user *filename) {
    // bpf_trace_printk() supports only one %s conversion; the process
    // name is already shown in the trace_pipe output from trace_print().
    bpf_trace_printk("File: %s\\n", filename);
    return 0;
}
"""
b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("open"), fn_name="trace_open")
b.trace_print()
Both bpftrace and BCC are powerful tools for eBPF programming, with bpftrace focusing on simplicity and ease of use, while BCC offers more flexibility and control for complex scenarios. The choice between them depends on the specific use case and the user's familiarity with eBPF concepts.
Inspektor Gadget: tools and a framework for data collection and system inspection on Kubernetes clusters and Linux hosts using eBPF
Pros of Inspektor Gadget
- Kubernetes-native design, making it easier to deploy and manage in containerized environments
- Provides a higher-level abstraction for eBPF-based tools, simplifying usage for Kubernetes operators
- Offers a unified interface for various debugging and observability tools
Cons of Inspektor Gadget
- More limited scope compared to BCC, focusing primarily on Kubernetes use cases
- Less flexibility for custom eBPF program development
- Newer project with a smaller community and fewer available tools
Code Comparison
BCC example (Python):
from bcc import BPF
b = BPF(text='int kprobe__sys_clone(void *ctx) { bpf_trace_printk("Hello, World!\\n"); return 0; }')
b.trace_print()
Inspektor Gadget example (YAML):
apiVersion: gadget.kinvolk.io/v1alpha1
kind: Trace
metadata:
  name: syscalls
spec:
  node: worker-1
  gadget: syscalls
Both projects leverage eBPF for system observability, but Inspektor Gadget provides a more Kubernetes-centric approach, while BCC offers lower-level access and greater flexibility for general Linux systems.
README
BPF Compiler Collection (BCC)
BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and includes several useful tools and examples. It makes use of extended BPF (Berkeley Packet Filters), formally known as eBPF, a new feature that was first added to Linux 3.15. Much of what BCC uses requires Linux 4.1 and above.
eBPF was described by Ingo Molnár as:
One of the more interesting features in this cycle is the ability to attach eBPF programs (user-defined, sandboxed bytecode executed by the kernel) to kprobes. This allows user-defined instrumentation on a live kernel image that can never crash, hang or interfere with the kernel negatively.
BCC makes BPF programs easier to write, with kernel instrumentation in C (and includes a C wrapper around LLVM), and front-ends in Python and lua. It is suited for many tasks, including performance analysis and network traffic control.
Screenshot
This example traces a disk I/O kernel function, and populates an in-kernel power-of-2 histogram of the I/O size. For efficiency, only the histogram summary is returned to user-level.
# ./bitehist.py
Tracing... Hit Ctrl-C to end.
^C
     kbytes          : count     distribution
       0 -> 1        : 3        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 211      |**********                            |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 1        |                                      |
     128 -> 255      : 800      |**************************************|

The above output shows a bimodal distribution, where the largest mode of 800 I/Os was between 128 and 255 Kbytes in size.
See the source: bitehist.py. What this traces, what this stores, and how the data is presented, can be entirely customized. This shows only some of many possible capabilities.
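As a rough sketch of the pattern (not the actual bitehist.py source, which instruments the block I/O layer; this uses vfs_read() as a stand-in attach point), the in-kernel histogram approach looks like this:

from bcc import BPF
from time import sleep

# Aggregate a power-of-2 histogram entirely in the kernel; only the
# bucketed summary is copied to user space when we print it.
b = BPF(text="""
#include <uapi/linux/ptrace.h>
BPF_HISTOGRAM(dist);

int trace_read(struct pt_regs *ctx, void *file, void *buf, size_t count) {
    dist.increment(bpf_log2l(count));  // log2 bucket of the request size
    return 0;
}
""")
b.attach_kprobe(event="vfs_read", fn_name="trace_read")

print("Tracing... Hit Ctrl-C to end.")
try:
    sleep(99999999)
except KeyboardInterrupt:
    pass
b["dist"].print_log2_hist("bytes")  # only the summary crosses to user level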
Installing
See INSTALL.md for installation steps on your platform.
FAQ
See FAQ.txt for the most common troubleshoot questions.
Reference guide
See docs/reference_guide.md for the reference guide to the bcc and bcc/BPF APIs.
Contents
Some of these are single files that contain both C and Python, others have a pair of .c and .py files, and some are directories of files.
Tracing
Examples
- examples/tracing/bitehist.py: Block I/O size histogram. Examples.
- examples/tracing/disksnoop.py: Trace block device I/O latency. Examples.
- examples/hello_world.py: Prints "Hello, World!" for new processes.
- examples/tracing/mysqld_query.py: Trace MySQL server queries using USDT probes. Examples.
- examples/tracing/nodejs_http_server.py: Trace Node.js HTTP server requests using USDT probes. Examples.
- examples/tracing/stacksnoop: Trace a kernel function and print all kernel stack traces. Examples.
- tools/statsnoop: Trace stat() syscalls. Examples.
- examples/tracing/task_switch.py: Count task switches with from and to PIDs.
- examples/tracing/tcpv4connect.py: Trace TCP IPv4 active connections. Examples.
- examples/tracing/trace_fields.py: Simple example of printing fields from traced events.
- examples/tracing/undump.py: Dump UNIX socket packets. Examples.
- examples/tracing/urandomread.py: A kernel tracepoint example, which traces random:urandom_read. Examples.
- examples/tracing/vfsreadlat.py examples/tracing/vfsreadlat.c: VFS read latency distribution. Examples.
- examples/tracing/kvm_hypercall.py: Conditional static kernel tracepoints for KVM entry, exit, and hypercall. Examples.
Tools
- tools/argdist: Display function parameter values as a histogram or frequency count. Examples.
- tools/bashreadline: Print entered bash commands system wide. Examples.
- tools/bpflist: Display processes with active BPF programs and maps. Examples.
- tools/capable: Trace security capability checks. Examples.
- tools/compactsnoop: Trace compact zone events with PID and latency. Examples.
- tools/criticalstat: Trace and report long atomic critical sections in the kernel. Examples.
- tools/deadlock: Detect potential deadlocks on a running process. Examples.
- tools/drsnoop: Trace direct reclaim events with PID and latency. Examples.
- tools/funccount: Count kernel function calls. Examples.
- tools/inject: Targeted error injection with call chain and predicates. Examples.
- tools/klockstat: Trace kernel mutex lock events and display lock statistics. Examples.
- tools/opensnoop: Trace open() syscalls. Examples.
- tools/readahead: Show performance of the read-ahead cache. Examples.
- tools/reset-trace: Reset the state of tracing. Maintenance tool only. Examples.
- tools/stackcount: Count kernel function calls and their stack traces. Examples.
- tools/syncsnoop: Trace sync() syscall. Examples.
- tools/threadsnoop: List new thread creation. Examples.
- tools/tplist: Display kernel tracepoints or USDT probes and their formats. Examples.
- tools/trace: Trace arbitrary functions, with filters. Examples.
- tools/ttysnoop: Watch live output from a tty or pts device. Examples.
- tools/ucalls: Summarize method calls or Linux syscalls in high-level languages. Examples.
- tools/uflow: Print a method flow graph in high-level languages. Examples.
- tools/ugc: Trace garbage collection events in high-level languages. Examples.
- tools/uobjnew: Summarize object allocation events by object type and number of bytes allocated. Examples.
- tools/ustat: Collect events such as GCs, thread creations, object allocations, exceptions and more in high-level languages. Examples.
- tools/uthreads: Trace thread creation events in Java and raw pthreads. Examples.
Memory and Process Tools
- tools/execsnoop: Trace new processes via exec() syscalls. Examples.
- tools/exitsnoop: Trace process termination (exit and fatal signals). Examples.
- tools/killsnoop: Trace signals issued by the kill() syscall. Examples.
- tools/kvmexit: Display the exit reason and statistics for each VM exit. Examples.
- tools/memleak: Display outstanding memory allocations to find memory leaks. Examples.
- tools/oomkill: Trace the out-of-memory (OOM) killer. Examples.
- tools/pidpersec: Count new processes (via fork). Examples.
- tools/rdmaucma: Trace RDMA Userspace Connection Manager Access events. Examples.
- tools/shmsnoop: Trace System V shared memory syscalls. Examples.
- tools/slabratetop: Kernel SLAB/SLUB memory cache allocation rate top. Examples.
Performance and Time Tools
- tools/dbslower: Trace MySQL/PostgreSQL queries slower than a threshold. Examples.
- tools/dbstat: Summarize MySQL/PostgreSQL query latency as a histogram. Examples.
- tools/funcinterval: Show the time interval between consecutive invocations of the same function as a histogram. Examples.
- tools/funclatency: Time functions and show their latency distribution. Examples.
- tools/funcslower: Trace slow kernel or user function calls. Examples.
- tools/hardirqs: Measure hard IRQ (hard interrupt) event time. Examples.
- tools/mysqld_qslower: Trace MySQL server queries slower than a threshold. Examples.
- tools/ppchcalls: Summarize ppc hcall counts and latencies. Examples.
- tools/softirqs: Measure soft IRQ (soft interrupt) event time. Examples.
- tools/syscount: Summarize syscall counts and latencies. Examples.
CPU and Scheduler Tools
- tools/cpudist: Summarize on- and off-CPU time per task as a histogram. Examples.
- tools/cpuunclaimed: Sample CPU run queues and calculate unclaimed idle CPU. Examples.
- tools/llcstat: Summarize CPU cache references and misses by process. Examples.
- tools/offcputime: Summarize off-CPU time by kernel stack trace. Examples.
- tools/offwaketime: Summarize blocked time by kernel off-CPU stack and waker stack. Examples.
- tools/profile: Profile CPU usage by sampling stack traces at a timed interval. Examples.
- tools/runqlat: Run queue (scheduler) latency as a histogram. Examples.
- tools/runqlen: Run queue length as a histogram. Examples.
- tools/runqslower: Trace long process scheduling delays. Examples.
- tools/wakeuptime: Summarize sleep to wakeup time by waker kernel stack. Examples.
- tools/wqlat: Summarize work waiting latency on workqueue. Examples.
Network and Sockets Tools
- tools/gethostlatency: Show latency for getaddrinfo/gethostbyname[2] calls. Examples.
- tools/bindsnoop: Trace IPv4 and IPv6 bind() system calls. Examples.
- tools/netqtop tools/netqtop.c: Trace and display packet distribution across NIC queues. Examples.
- tools/sofdsnoop: Trace FDs passed through unix sockets. Examples.
- tools/solisten: Trace TCP socket listen. Examples.
- tools/sslsniff: Sniff data written and read by OpenSSL. Examples.
- tools/tcpaccept: Trace TCP passive connections (accept()). Examples.
- tools/tcpconnect: Trace TCP active connections (connect()). Examples.
- tools/tcpconnlat: Trace TCP active connection latency (connect()). Examples.
- tools/tcpdrop: Trace kernel-based TCP packet drops with details. Examples.
- tools/tcplife: Trace TCP sessions and summarize lifespan. Examples.
- tools/tcpretrans: Trace TCP retransmits and TLPs. Examples.
- tools/tcprtt: Trace TCP round trip time. Examples.
- tools/tcpstates: Trace TCP session state changes with durations. Examples.
- tools/tcpsubnet: Summarize and aggregate TCP send by subnet. Examples.
- tools/tcpsynbl: Show TCP SYN backlog. Examples.
- tools/tcptop: Summarize TCP send/recv throughput by host. Top for TCP. Examples.
- tools/tcptracer: Trace TCP established connections (connect(), accept(), close()). Examples.
- tools/tcpcong: Trace TCP socket congestion control status duration. Examples.
Storage and Filesystems Tools
- tools/bitesize: Show per process I/O size histogram. Examples.
- tools/cachestat: Trace page cache hit/miss ratio. Examples.
- tools/cachetop: Trace page cache hit/miss ratio by processes. Examples.
- tools/dcsnoop: Trace directory entry cache (dcache) lookups. Examples.
- tools/dcstat: Directory entry cache (dcache) stats. Examples.
- tools/biolatency: Summarize block device I/O latency as a histogram. Examples.
- tools/biotop: Top for disks: Summarize block device I/O by process. Examples.
- tools/biopattern: Identify random/sequential disk access patterns. Examples.
- tools/biosnoop: Trace block device I/O with PID and latency. Examples.
- tools/dirtop: File reads and writes by directory. Top for directories. Examples.
- tools/filelife: Trace the lifespan of short-lived files. Examples.
- tools/filegone: Trace why files are gone (deleted or renamed). Examples.
- tools/fileslower: Trace slow synchronous file reads and writes. Examples.
- tools/filetop: File reads and writes by filename and process. Top for files. Examples.
- tools/mdflush: Trace md flush events. Examples.
- tools/mountsnoop: Trace mount and umount syscalls system-wide. Examples.
- tools/virtiostat: Show VIRTIO device IO statistics. Examples.
Filesystems Tools
- tools/btrfsdist: Summarize btrfs operation latency distribution as a histogram. Examples.
- tools/btrfsslower: Trace slow btrfs operations. Examples.
- tools/ext4dist: Summarize ext4 operation latency distribution as a histogram. Examples.
- tools/ext4slower: Trace slow ext4 operations. Examples.
- tools/nfsslower: Trace slow NFS operations. Examples.
- tools/nfsdist: Summarize NFS operation latency distribution as a histogram. Examples.
- tools/vfscount: Count VFS calls. Examples.
- tools/vfsstat: Count some VFS calls, with column output. Examples.
- tools/xfsdist: Summarize XFS operation latency distribution as a histogram. Examples.
- tools/xfsslower: Trace slow XFS operations. Examples.
- tools/zfsdist: Summarize ZFS operation latency distribution as a histogram. Examples.
- tools/zfsslower: Trace slow ZFS operations. Examples.
Networking
Examples:
- examples/networking/distributed_bridge/: Distributed bridge example.
- examples/networking/http_filter/: Simple HTTP filter example.
- examples/networking/simple_tc.py: Simple traffic control example.
- examples/networking/simulation.py: Simulation helper.
- examples/networking/neighbor_sharing/tc_neighbor_sharing.py examples/networking/neighbor_sharing/tc_neighbor_sharing.c: Per-IP classification and rate limiting.
- examples/networking/tunnel_monitor/: Efficiently monitor traffic flows.
- examples/networking/vlan_learning/vlan_learning.py examples/vlan_learning.c: Demux Ethernet traffic into worker veth+namespaces.
BPF Introspection
Tools that help to introspect BPF programs.
Motivation
BPF guarantees that the programs loaded into the kernel cannot crash and cannot run forever, yet BPF is general purpose enough to perform many arbitrary types of computation. Currently, it is possible to write a program in C that will compile into a valid BPF program, yet it is vastly easier to write a C program that will compile into invalid BPF (C is like that). The user won't know until trying to run the program whether it was valid or not.
With a BPF-specific frontend, one should be able to write in a language and receive feedback from the compiler on the validity as it pertains to a BPF backend. This toolkit aims to provide a frontend that can only create valid BPF programs while still harnessing its full flexibility.
Furthermore, current integrations with BPF have a kludgy workflow, sometimes involving compiling directly in a linux kernel source tree. This toolchain aims to minimize the time that a developer spends getting BPF compiled, and instead focus on the applications that can be written and the problems that can be solved with BPF.
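For example (an illustrative snippet, not from the repository): a program that is valid C but invalid BPF is rejected before it ever runs, either by the LLVM BPF backend at compile time or by the in-kernel verifier at load time:

from bcc import BPF

invalid = """
int spin(struct pt_regs *ctx) {
    for (;;) {}   // valid C, but BPF forbids unbounded loops
    return 0;
}
"""
try:
    b = BPF(text=invalid)            # clang/LLVM may reject here...
    b.load_func("spin", BPF.KPROBE)  # ...or the kernel verifier here
except Exception as e:
    print("rejected:", e)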
The features of this toolkit include:
- End-to-end BPF workflow in a shared library
- A modified C language for BPF backends
- Integration with llvm-bpf backend for JIT
- Dynamic (un)loading of JITed programs
- Support for BPF kernel hooks: socket filters, tc classifiers, tc actions, and kprobes
- Bindings for Python
- Examples for socket filters, tc classifiers, and kprobes
- Self-contained tools for tracing a running system
In the future, more bindings besides python will likely be supported. Feel free to add support for the language of your choice and send a pull request!
Tutorials
- docs/tutorial.md: Using bcc tools to solve performance, troubleshooting, and networking issues.
- docs/tutorial_bcc_python_developer.md: Developing new bcc programs using the Python interface.
Networking
At Red Hat Summit 2015, BCC was presented as part of a session on BPF. A multi-host vxlan environment is simulated and a BPF program used to monitor one of the physical interfaces. The BPF program keeps statistics on the inner and outer IP addresses traversing the interface, and the userspace component turns those statistics into a graph showing the traffic distribution at multiple granularities. See the code here.
Contributing
Already pumped up to commit some code? Here are some resources to join the discussions in the IOVisor community and see what you want to work on.
- Mailing List: https://lists.iovisor.org/mailman/listinfo/iovisor-dev
- IRC: #iovisor at irc.oftc.net
- BCC Issue Tracker: Github Issues
- A guide for contributing scripts: CONTRIBUTING-SCRIPTS.md
External links
Looking for more information on BCC and how it's being used? You can find links to other BCC content on the web in LINKS.md.