BoomFilters
Probabilistic data structures for processing continuous, unbounded streams.
Top Related Projects
Go package implementing Bloom filters, used by Milvus and Beego.
C network daemon for bloom filters
Cuckoo Filter: Practically Better Than Bloom
Quick Overview
BoomFilters is a Go library that provides probabilistic data structures for processing continuous, unbounded streams. It implements various types of Bloom filters and other similar structures, offering efficient solutions for membership testing, cardinality estimation, and frequency counting in large datasets.
Pros
- Offers a variety of probabilistic data structures, including Stable Bloom Filter, Scalable Bloom Filter, Counting Bloom Filter, and more
- Provides memory-efficient solutions for processing large data streams
- Well-documented with clear examples and benchmarks
- Implements thread-safe versions of the data structures
Cons
- Limited to Go programming language
- May require understanding of probabilistic data structures for optimal usage
- Some structures may have false positive rates that need to be considered in certain applications
- Not actively maintained (last commit was in 2019 as of writing)
Code Examples
- Creating and using a standard Bloom filter:
import "github.com/tylertreat/BoomFilters"
filter := boom.NewBloomFilter(1000, 0.01)
filter.Add([]byte("item"))
exists := filter.Test([]byte("item"))
fmt.Println(exists) // Output: true
- Using a Counting Bloom filter (supports removing elements; constructed via the default constructor, matching the library's README):
cbf := boom.NewDefaultCountingBloomFilter(1000, 0.01)
cbf.Add([]byte("item"))
if cbf.TestAndRemove([]byte("item")) {
	fmt.Println("removed item")
}
- Estimating cardinality with HyperLogLog (the constructor returns an error, so it must be checked):
hll, err := boom.NewDefaultHyperLogLog(0.1)
if err != nil {
	panic(err)
}
hll.Add([]byte("item1"))
hll.Add([]byte("item2"))
hll.Add([]byte("item3"))
cardinality := hll.Count()
fmt.Println(cardinality) // Output: ~3 (approximate)
Getting Started
To use BoomFilters in your Go project, first install the package:
go get github.com/tylertreat/BoomFilters
Then, import and use the desired data structure in your code:
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters" // imported under the package name "boom"
)

func main() {
	filter := boom.NewBloomFilter(1000, 0.01)
	filter.Add([]byte("example"))
	exists := filter.Test([]byte("example"))
	fmt.Printf("Item exists: %v\n", exists)
}
This example creates a Bloom filter, adds an item, and tests for its existence. You can similarly use other data structures provided by the library based on your specific needs.
Competitor Comparisons
Go package implementing Bloom filters, used by Milvus and Beego.
Pros of bloom
- More actively maintained with recent updates
- Hardened, widely adopted implementation (used by projects such as Milvus and Beego)
- Better documentation and examples
Cons of bloom
- Limited to Bloom filters only, while BoomFilters offers additional probabilistic data structures
- Slightly more involved API than BoomFilters' interface
Code Comparison
bloom:
filter := bloom.NewWithEstimates(1000000, 0.01)
filter.Add([]byte("test"))
if filter.Test([]byte("test")) {
	fmt.Println("Element may be in set")
}
BoomFilters:
filter := boom.NewBloomFilter(1000000, 0.01)
filter.Add([]byte("test"))
if filter.Test([]byte("test")) {
	fmt.Println("Element may be in set")
}
Both libraries offer similar basic functionality for Bloom filters, with slight differences in method names and initialization. BoomFilters additionally includes probabilistic data structures beyond Bloom filters, such as Count-Min Sketch and HyperLogLog.
C network daemon for bloom filters
Pros of bloomd
- Written in C, offering potentially better performance for high-throughput scenarios
- Provides a network server, allowing for easy integration with various programming languages and distributed systems
- Supports multiple filters in a single instance, enabling efficient management of multiple datasets
Cons of bloomd
- Limited to basic Bloom filter functionality, lacking advanced probabilistic data structures
- Less flexibility in terms of customization and extensibility compared to BoomFilters
- Requires separate installation and management of a server process
Code Comparison
bloomd (C):
int bloom_add(bloom_filter *filter, char *key) {
	uint64_t hash = hash_function(key);
	int add = 1;
	for (int i = 0; i < filter->k_num; i++) {
		add &= set_bit(filter, (hash + i * filter->second_hash) % filter->bitmap_size);
	}
	return add;
}
BoomFilters (Go):
func (f *BloomFilter) Add(data []byte) Filter {
	for i := uint(0); i < f.k; i++ {
		f.setBit(f.location(data, i))
	}
	return f
}
Both snippets take the same approach to adding an element: derive k bit positions from the element's hash and set each one. The practical difference is the deployment model, since bloomd runs as a standalone C server while BoomFilters is an in-process Go library.
Cuckoo Filter: Practically Better Than Bloom
Pros of cuckoofilter
- Simpler implementation focused solely on Cuckoo filters
- Potentially faster for specific use cases involving Cuckoo filters
- More lightweight, with fewer dependencies
Cons of cuckoofilter
- Limited to Cuckoo filters, while BoomFilters offers multiple filter types
- Less actively maintained, with fewer recent updates
- Fewer additional features and optimizations compared to BoomFilters
Code Comparison
cuckoofilter:
cf := cuckoo.NewFilter(1000)
cf.Insert([]byte("key"))
cf.Lookup([]byte("key"))
BoomFilters:
cf := boom.NewCuckooFilter(1000, 0.01)
cf.Add([]byte("key"))
cf.Test([]byte("key"))
Both implementations offer similar basic functionality for Cuckoo filters. cuckoofilter focuses solely on Cuckoo filters, which may be preferable for projects that want a small, focused dependency. BoomFilters provides a wider range of probabilistic data structures (Bloom filters, Counting Bloom filters, and others) along with more configuration options, making it the better fit when multiple filter types or advanced features are needed.
README
Boom Filters
Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, Cuckoo Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash.
Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.
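To get a feel for that sizing requirement, the standard formulas give the optimal bit count m and hash count k for n expected items at a target false-positive rate p: m = -n·ln(p)/(ln 2)^2 and k = (m/n)·ln 2. A quick sketch of the arithmetic (ordinary Bloom filter math, not code from this library):

package main

import (
	"fmt"
	"math"
)

// optimalParams returns the standard Bloom filter sizing for n expected
// items and target false-positive rate p.
func optimalParams(n uint, p float64) (m, k uint) {
	mf := -float64(n) * math.Log(p) / (math.Ln2 * math.Ln2)
	kf := mf / float64(n) * math.Ln2
	return uint(math.Ceil(mf)), uint(math.Ceil(kf))
}

func main() {
	m, k := optimalParams(1000000, 0.01)
	fmt.Printf("1M items @ 1%% FP: %d bits (~%d KB), %d hashes\n", m, m/8/1024, k)
}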
Boom Filters are useful for situations where the size of the data set isn't known ahead of time. For example, a Stable Bloom Filter can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives. Alternatively, an Inverse Bloom Filter is ideal for deduplicating a stream where duplicate events are relatively close together. This results in no false positives and, depending on how close together duplicates are, a small probability of false negatives. Scalable Bloom Filters place a tight upper bound on false positives while avoiding false negatives but require allocating memory proportional to the size of the data set. Counting Bloom Filters and Cuckoo Filters are useful for cases which require adding and removing elements to and from a set.
For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation. Similarly, Count-Min Sketch provides an efficient way to estimate event frequency for data streams, while Top-K tracks the top-k most frequent elements.
MinHash is a probabilistic algorithm to approximate the similarity between two sets. This can be used to cluster or compare documents by splitting the corpus into a bag of words.
Installation
$ go get github.com/tylertreat/BoomFilters
Stable Bloom Filter
This is an implementation of Stable Bloom Filters as described by Deng and Rafiei in Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters.
A Stable Bloom Filter (SBF) continuously evicts stale information so that it has room for more recent elements. Like traditional Bloom filters, an SBF has a non-zero probability of false positives, which is controlled by several parameters. Unlike the classic Bloom filter, an SBF has a tight upper bound on the rate of false positives while introducing a non-zero rate of false negatives. The false-positive rate of a classic Bloom filter eventually reaches 1, after which all queries result in a false positive. The stable-point property of an SBF means the false-positive rate asymptotically approaches a configurable fixed constant. A classic Bloom filter is actually a special case of SBF where the eviction rate is zero and the cell size is one, so this provides support for them as well (in addition to bitset-based Bloom filters).
Stable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory is bounded. For example, an SBF can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	sbf := boom.NewDefaultStableBloomFilter(10000, 0.01)
	fmt.Println("stable point", sbf.StablePoint())

	sbf.Add([]byte(`a`))
	if sbf.Test([]byte(`a`)) {
		fmt.Println("contains a")
	}

	if !sbf.TestAndAdd([]byte(`b`)) {
		fmt.Println("doesn't contain b")
	}

	if sbf.Test([]byte(`b`)) {
		fmt.Println("now it contains b!")
	}

	// Restore to initial state.
	sbf.Reset()
}
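For intuition, the eviction-then-insert step described above can be sketched in a few lines. This is a simplified illustration with FNV-derived indices and hard-coded parameters, not the library's implementation:

package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// sbf is a toy Stable Bloom Filter: m cells of a few bits each; stale
// entries fade out as random cells are decremented on every Add.
type sbf struct {
	cells []uint8 // d-bit cells, capped at max
	k     uint    // hash functions per element
	p     uint    // cells decremented (evicted) per Add
	max   uint8   // 2^d - 1
}

func (s *sbf) index(data []byte, i uint) uint {
	h := fnv.New64a()
	h.Write(data)
	h.Write([]byte{byte(i)}) // cheap per-hash salt for illustration
	return uint(h.Sum64() % uint64(len(s.cells)))
}

func (s *sbf) Add(data []byte) {
	// Evict: decrement p random cells so old information decays.
	for i := uint(0); i < s.p; i++ {
		j := rand.Intn(len(s.cells))
		if s.cells[j] > 0 {
			s.cells[j]--
		}
	}
	// Insert: set the element's k cells to the maximum value.
	for i := uint(0); i < s.k; i++ {
		s.cells[s.index(data, i)] = s.max
	}
}

func (s *sbf) Test(data []byte) bool {
	for i := uint(0); i < s.k; i++ {
		if s.cells[s.index(data, i)] == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := &sbf{cells: make([]uint8, 1024), k: 3, p: 4, max: 7}
	f.Add([]byte("a"))
	fmt.Println(f.Test([]byte("a"))) // true, until later adds evict it
}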
Scalable Bloom Filter
This is an implementation of a Scalable Bloom Filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters.
A Scalable Bloom Filter (SBF) dynamically adapts to the size of the data set while enforcing a tight upper bound on the rate of false positives and a false-negative probability of zero. It works by adding Bloom filters with geometrically decreasing false-positive rates as existing filters fill up. A tightening ratio, r, controls each new filter's error budget, so the compounded false-positive probability converges to a fixed target even as the series of filters grows without bound.
Scalable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory constraints aren't of particular concern. For situations where memory is bounded, consider using Inverse or Stable Bloom Filters.
The core parts of this implementation were originally written by Jian Zhen as discussed in Benchmarking Bloom Filters and Hash Functions in Go.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	sbf := boom.NewDefaultScalableBloomFilter(0.01)

	sbf.Add([]byte(`a`))
	if sbf.Test([]byte(`a`)) {
		fmt.Println("contains a")
	}

	if !sbf.TestAndAdd([]byte(`b`)) {
		fmt.Println("doesn't contain b")
	}

	if sbf.Test([]byte(`b`)) {
		fmt.Println("now it contains b!")
	}

	// Restore to initial state.
	sbf.Reset()
}
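The effect of the tightening ratio is easy to check numerically. This small sketch (plain arithmetic, not library code) shows the union bound on the compounded false-positive rate converging to p0/(1 - r):

package main

import "fmt"

func main() {
	p0, r := 0.01, 0.5 // initial false-positive rate and tightening ratio
	bound := 0.0
	pi := p0
	for i := 0; i < 20; i++ { // 20 filters is effectively the infinite series
		bound += pi // union bound: add each filter's false-positive rate
		pi *= r     // each new filter gets a geometrically tighter rate
	}
	fmt.Printf("union bound after 20 filters: %.6f\n", bound)
	fmt.Printf("closed form p0/(1-r):         %.6f\n", p0/(1-r))
}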
Inverse Bloom Filter
An Inverse Bloom Filter, or "the opposite of a Bloom filter", is a concurrent, probabilistic data structure used to test whether an item has been observed or not. This implementation, originally described and written by Jeff Hodges, replaces the use of MD5 hashing with a non-cryptographic FNV-1 function.
The Inverse Bloom Filter may report a false negative but can never report a false positive. That is, it may report that an item has not been seen when it actually has, but it will never report an item as seen which it hasn't come across. This behaves in a similar manner to a fixed-size hashmap which does not handle conflicts.
This structure is particularly well-suited to streams in which duplicates are relatively close together. It uses a CAS-style approach, which makes it thread-safe.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	ibf := boom.NewInverseBloomFilter(10000)

	ibf.Add([]byte(`a`))
	if ibf.Test([]byte(`a`)) {
		fmt.Println("contains a")
	}

	if !ibf.TestAndAdd([]byte(`b`)) {
		fmt.Println("doesn't contain b")
	}

	if ibf.Test([]byte(`b`)) {
		fmt.Println("now it contains b!")
	}
}
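The replace-on-write behavior described above can be sketched with atomic pointer operations. This illustration uses Go's atomic.Pointer (Go 1.19+) in the spirit of the CAS approach; it is a simplification, not the library's code:

package main

import (
	"bytes"
	"fmt"
	"hash/fnv"
	"sync/atomic"
)

// inverseFilter is a fixed-size array of slots where a newer item simply
// overwrites whatever hashed there before. Collisions cause false
// negatives, never false positives.
type inverseFilter struct {
	slots []atomic.Pointer[[]byte]
}

func newInverseFilter(capacity int) *inverseFilter {
	return &inverseFilter{slots: make([]atomic.Pointer[[]byte], capacity)}
}

// observe records data and reports whether it already occupied its slot.
func (f *inverseFilter) observe(data []byte) bool {
	h := fnv.New64a()
	h.Write(data)
	i := h.Sum64() % uint64(len(f.slots))
	d := append([]byte(nil), data...) // private copy
	prev := f.slots[i].Swap(&d)       // atomically replace the slot
	return prev != nil && bytes.Equal(*prev, data)
}

func main() {
	f := newInverseFilter(1 << 16)
	fmt.Println(f.observe([]byte("event-1"))) // false: first sighting
	fmt.Println(f.observe([]byte("event-1"))) // true: duplicate
}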
Counting Bloom Filter
This is an implementation of a Counting Bloom Filter as described by Fan, Cao, Almeida, and Broder in Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol.
A Counting Bloom Filter (CBF) provides a way to remove elements by using an array of n-bit buckets. When an element is added, the respective buckets are incremented. To remove an element, the respective buckets are decremented. A query checks that each of the respective buckets are non-zero. Because CBFs allow elements to be removed, they introduce a non-zero probability of false negatives in addition to the possibility of false positives.
Counting Bloom Filters are useful for cases where elements are both added and removed from the data set. Since they use n-bit buckets, CBFs use roughly n-times more memory than traditional Bloom filters.
See Deletable Bloom Filter for an alternative which avoids false negatives.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	bf := boom.NewDefaultCountingBloomFilter(1000, 0.01)

	bf.Add([]byte(`a`))
	if bf.Test([]byte(`a`)) {
		fmt.Println("contains a")
	}

	if !bf.TestAndAdd([]byte(`b`)) {
		fmt.Println("doesn't contain b")
	}

	if bf.TestAndRemove([]byte(`b`)) {
		fmt.Println("removed b")
	}

	// Restore to initial state.
	bf.Reset()
}
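For intuition, the increment/decrement mechanics described above can be sketched as follows (FNV-derived indices, no overflow handling; not the library's implementation):

package main

import (
	"fmt"
	"hash/fnv"
)

// countingBF replaces the Bloom filter's bits with small counters so
// elements can be removed as well as added.
type countingBF struct {
	buckets []uint8
	k       uint
}

func (c *countingBF) index(data []byte, i uint) uint {
	h := fnv.New64a()
	h.Write(data)
	h.Write([]byte{byte(i)})
	return uint(h.Sum64() % uint64(len(c.buckets)))
}

func (c *countingBF) Add(data []byte) {
	for i := uint(0); i < c.k; i++ {
		c.buckets[c.index(data, i)]++ // overflow handling omitted
	}
}

func (c *countingBF) Remove(data []byte) {
	for i := uint(0); i < c.k; i++ {
		if j := c.index(data, i); c.buckets[j] > 0 {
			c.buckets[j]--
		}
	}
}

func (c *countingBF) Test(data []byte) bool {
	for i := uint(0); i < c.k; i++ {
		if c.buckets[c.index(data, i)] == 0 {
			return false
		}
	}
	return true
}

func main() {
	c := &countingBF{buckets: make([]uint8, 1024), k: 3}
	c.Add([]byte("x"))
	c.Remove([]byte("x"))
	fmt.Println(c.Test([]byte("x"))) // false after removal
}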
Cuckoo Filter
This is an implementation of a Cuckoo Filter as described by Fan, Andersen, Kaminsky, and Mitzenmacher in Cuckoo Filter: Practically Better Than Bloom. The Cuckoo Filter is similar to the Counting Bloom Filter in that it supports adding and removing elements, but it does so in a way that doesn't significantly degrade space and performance.
It works by using a cuckoo hashing scheme for inserting items. Instead of storing the elements themselves, it stores their fingerprints which also allows for item removal without false negatives (if you don't attempt to remove an item not contained in the filter).
For applications that store many items and target moderately low false-positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	cf := boom.NewCuckooFilter(1000, 0.01)

	cf.Add([]byte(`a`))
	if cf.Test([]byte(`a`)) {
		fmt.Println("contains a")
	}

	if contains, _ := cf.TestAndAdd([]byte(`b`)); !contains {
		fmt.Println("doesn't contain b")
	}

	if cf.TestAndRemove([]byte(`b`)) {
		fmt.Println("removed b")
	}

	// Restore to initial state.
	cf.Reset()
}
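To make the fingerprint-and-kick scheme concrete, here is a heavily simplified sketch with 8-bit fingerprints, FNV hashing, and a power-of-two table size (required for the XOR trick below); it is not the library's implementation:

package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

const bucketSize = 4 // fingerprint slots per bucket
const maxKicks = 500 // relocation attempts before declaring the filter full

// cuckoo is a toy filter; a fingerprint of 0 marks an empty slot.
type cuckoo struct {
	buckets [][bucketSize]uint8 // length must be a power of two
}

func hash64(b []byte) uint64 {
	h := fnv.New64a()
	h.Write(b)
	return h.Sum64()
}

// indices derives the two candidate buckets and the fingerprint. Because
// i2 = i1 XOR hash(fp), either bucket can recover the other from the
// fingerprint alone, which is what makes eviction possible.
func (c *cuckoo) indices(data []byte) (uint, uint, uint8) {
	h := hash64(data)
	fp := uint8(h>>56) | 1 // non-zero 8-bit fingerprint
	i1 := uint(h) % uint(len(c.buckets))
	i2 := (i1 ^ uint(hash64([]byte{fp}))) % uint(len(c.buckets))
	return i1, i2, fp
}

func (c *cuckoo) insert(data []byte) bool {
	i1, i2, fp := c.indices(data)
	for _, i := range []uint{i1, i2} {
		for s := range c.buckets[i] {
			if c.buckets[i][s] == 0 {
				c.buckets[i][s] = fp
				return true
			}
		}
	}
	// Both buckets are full: kick a random resident to its alternate bucket.
	i := i1
	for n := 0; n < maxKicks; n++ {
		s := rand.Intn(bucketSize)
		fp, c.buckets[i][s] = c.buckets[i][s], fp
		i = (i ^ uint(hash64([]byte{fp}))) % uint(len(c.buckets))
		for s := range c.buckets[i] {
			if c.buckets[i][s] == 0 {
				c.buckets[i][s] = fp
				return true
			}
		}
	}
	return false // filter is considered full
}

// lookup checks both candidate buckets for the fingerprint.
func (c *cuckoo) lookup(data []byte) bool {
	i1, i2, fp := c.indices(data)
	for _, i := range []uint{i1, i2} {
		for s := range c.buckets[i] {
			if c.buckets[i][s] == fp {
				return true
			}
		}
	}
	return false
}

func main() {
	c := &cuckoo{buckets: make([][bucketSize]uint8, 1024)}
	c.insert([]byte("a"))
	fmt.Println(c.lookup([]byte("a"))) // true
}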
Classic Bloom Filter
A classic Bloom filter is a special case of a Stable Bloom Filter whose eviction rate is zero and cell size is one. We call this special case an Unstable Bloom Filter. Because cells require more memory overhead, this package also provides two bitset-based Bloom filter variations. The first variation is the traditional implementation consisting of a single bit array. The second implementation is a partitioned approach which uniformly distributes the probability of false positives across all elements.
Bloom filters have a limited capacity, depending on the configured size. Once all bits are set, the probability of a false positive is 1. However, traditional Bloom filters cannot return a false negative.
A Bloom filter is ideal for cases where the data set is known a priori because the false-positive rate can be configured by the size and number of hash functions.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	// We could also use boom.NewUnstableBloomFilter or boom.NewPartitionedBloomFilter.
	bf := boom.NewBloomFilter(1000, 0.01)

	bf.Add([]byte(`a`))
	if bf.Test([]byte(`a`)) {
		fmt.Println("contains a")
	}

	if !bf.TestAndAdd([]byte(`b`)) {
		fmt.Println("doesn't contain b")
	}

	if bf.Test([]byte(`b`)) {
		fmt.Println("now it contains b!")
	}

	// Restore to initial state.
	bf.Reset()
}
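The partitioned variant mentioned above is straightforward to sketch: the bit array is split into k equal slices, and hash function i sets a bit only within slice i. A simplified illustration, not the library's code:

package main

import (
	"fmt"
	"hash/fnv"
)

// partitionedBF views one flat bit array as k slices of size m/k, with
// hash i confined to slice i, spreading the false-positive probability
// uniformly across elements.
type partitionedBF struct {
	bits []bool
	k    uint
}

func (p *partitionedBF) bit(data []byte, i uint) uint {
	h := fnv.New64a()
	h.Write(data)
	h.Write([]byte{byte(i)})
	s := uint(len(p.bits)) / p.k           // slice size
	return i*s + uint(h.Sum64()%uint64(s)) // offset into slice i
}

func (p *partitionedBF) Add(data []byte) {
	for i := uint(0); i < p.k; i++ {
		p.bits[p.bit(data, i)] = true
	}
}

func (p *partitionedBF) Test(data []byte) bool {
	for i := uint(0); i < p.k; i++ {
		if !p.bits[p.bit(data, i)] {
			return false
		}
	}
	return true
}

func main() {
	p := &partitionedBF{bits: make([]bool, 1024), k: 4}
	p.Add([]byte("a"))
	fmt.Println(p.Test([]byte("a"))) // true
}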
Count-Min Sketch
This is an implementation of a Count-Min Sketch as described by Cormode and Muthukrishnan in An Improved Data Stream Summary: The Count-Min Sketch and its Applications.
A Count-Min Sketch (CMS) is a probabilistic data structure which approximates the frequency of events in a data stream. Unlike a hash map, a CMS uses sub-linear space at the expense of a configurable error factor. Similar to Counting Bloom filters, each item is hashed to one counter in each of several rows, and those counters are incremented on every occurrence. The frequency of an item is estimated by taking the minimum of the item's counter values.
Count-Min Sketches are useful for counting the frequency of events in massive data sets or unbounded streams online. In these situations, storing the entire data set or allocating counters for every event in memory is impractical. It may be possible for offline processing, but real-time processing requires fast, space-efficient solutions like the CMS. For approximating set cardinality, refer to the HyperLogLog.
Usage
package main

import (
	"bytes"
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	cms := boom.NewCountMinSketch(0.001, 0.99)

	cms.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
	fmt.Println("frequency of alice", cms.Count([]byte(`alice`)))
	fmt.Println("frequency of bob", cms.Count([]byte(`bob`)))
	fmt.Println("frequency of frank", cms.Count([]byte(`frank`)))

	// Serialization example
	buf := new(bytes.Buffer)
	n, err := cms.WriteDataTo(buf)
	if err != nil {
		fmt.Println(err, n)
	}

	// Restore to initial state.
	cms.Reset()

	newCMS := boom.NewCountMinSketch(0.001, 0.99)
	n, err = newCMS.ReadDataFrom(buf)
	if err != nil {
		fmt.Println(err, n)
	}

	fmt.Println("frequency of frank", newCMS.Count([]byte(`frank`)))
}
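For intuition, the update and query mechanics described above can be sketched as a d x w counter matrix (a simplified illustration, not the library's implementation):

package main

import (
	"fmt"
	"hash/fnv"
)

// cms is a toy Count-Min Sketch: d rows of w counters, one increment per
// row on Add, and the minimum across rows as the estimate.
type cms struct {
	counters [][]uint64
}

func newCMS(d, w int) *cms {
	rows := make([][]uint64, d)
	for i := range rows {
		rows[i] = make([]uint64, w)
	}
	return &cms{counters: rows}
}

func (c *cms) col(data []byte, row int) int {
	h := fnv.New64a()
	h.Write(data)
	h.Write([]byte{byte(row)}) // per-row salt for illustration
	return int(h.Sum64() % uint64(len(c.counters[row])))
}

func (c *cms) Add(data []byte) {
	for row := range c.counters {
		c.counters[row][c.col(data, row)]++
	}
}

func (c *cms) Count(data []byte) uint64 {
	min := ^uint64(0)
	for row := range c.counters {
		if v := c.counters[row][c.col(data, row)]; v < min {
			min = v
		}
	}
	return min // overestimates under collisions, never underestimates
}

func main() {
	c := newCMS(4, 1024)
	c.Add([]byte("bob"))
	c.Add([]byte("bob"))
	fmt.Println(c.Count([]byte("bob"))) // 2 (possibly more under collisions)
}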
Top-K
Top-K uses a Count-Min Sketch and min-heap to track the top-k most frequent elements in a stream.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	topk := boom.NewTopK(0.001, 0.99, 5)

	topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
	topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
	topk.Add([]byte(`james`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`sara`)).Add([]byte(`sara`))
	topk.Add([]byte(`bill`))

	for i, element := range topk.Elements() {
		fmt.Println(i, string(element.Data), element.Freq)
	}

	// Restore to initial state.
	topk.Reset()
}
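The mechanism described above, a frequency sketch feeding a size-k min-heap, can be sketched as follows. For brevity an exact map stands in for the Count-Min Sketch; this is an illustration, not the library's code:

package main

import (
	"container/heap"
	"fmt"
)

type entry struct {
	data string
	freq uint64
}

// minHeap implements heap.Interface ordered by ascending frequency, so the
// least frequent of the tracked elements is always on top.
type minHeap []entry

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].freq < h[j].freq }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(entry)) }
func (h *minHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

func main() {
	const k = 2
	counts := map[string]uint64{} // stand-in for a Count-Min Sketch
	h := &minHeap{}
	for _, s := range []string{"bob", "bob", "tyler", "bob", "tyler", "sara"} {
		counts[s]++
		// Drop any stale entry for s before re-inserting its new count.
		for i, e := range *h {
			if e.data == s {
				heap.Remove(h, i)
				break
			}
		}
		heap.Push(h, entry{s, counts[s]})
		if h.Len() > k {
			heap.Pop(h) // evict the least frequent element
		}
	}
	for h.Len() > 0 {
		e := heap.Pop(h).(entry)
		fmt.Println(e.data, e.freq)
	}
}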
HyperLogLog
This is an implementation of HyperLogLog as described by Flajolet, Fusy, Gandouet, and Meunier in HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm.
HyperLogLog is a probabilistic algorithm which approximates the number of distinct elements in a multiset. It works by hashing values and tracking the maximum number of leading zeros in the binary representation of each hash. If the maximum number of leading zeros observed is n, a rough estimate of the number of distinct elements is 2^n. To minimize variance, the multiset is split into a configurable number of registers, the maximum number of leading zeros is tracked separately in each register, and a harmonic mean combines the per-register estimates.
For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation.
This implementation was originally written by Eric Lesh. Some small changes and additions have been made, including a way to construct a HyperLogLog optimized for a particular relative accuracy and adding FNV hashing. For counting element frequency, refer to the Count-Min Sketch.
Usage
package main

import (
	"bytes"
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	hll, err := boom.NewDefaultHyperLogLog(0.1)
	if err != nil {
		panic(err)
	}

	hll.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
	fmt.Println("count", hll.Count())

	// Serialization example
	buf := new(bytes.Buffer)
	_, err = hll.WriteDataTo(buf)
	if err != nil {
		fmt.Println(err)
	}

	// Restore to initial state.
	hll.Reset()

	newHll, err := boom.NewDefaultHyperLogLog(0.1)
	if err != nil {
		fmt.Println(err)
	}
	_, err = newHll.ReadDataFrom(buf)
	if err != nil {
		fmt.Println(err)
	}

	fmt.Println("count", newHll.Count())
}
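For intuition, the register-and-harmonic-mean estimate described above can be sketched directly (no small- or large-range corrections; not the library's implementation):

package main

import (
	"fmt"
	"hash/fnv"
	"math"
	"math/bits"
)

// A toy HyperLogLog: the top p bits of each hash pick a register, each
// register tracks the maximum "rank" (leading zeros + 1) of the remaining
// bits, and a bias-corrected harmonic mean combines the registers.
func main() {
	const p = 10
	const m = 1 << p // 1024 registers
	regs := make([]uint8, m)

	add := func(data []byte) {
		h := fnv.New64a()
		h.Write(data)
		x := h.Sum64()
		j := x >> (64 - p)                          // register index: top p bits
		r := uint8(bits.LeadingZeros64(x<<p|1)) + 1 // rank of the remaining bits
		if r > regs[j] {
			regs[j] = r
		}
	}

	for i := 0; i < 10000; i++ {
		add([]byte(fmt.Sprintf("item-%d", i)))
	}

	sum := 0.0
	for _, r := range regs {
		sum += math.Pow(2, -float64(r))
	}
	alpha := 0.7213 / (1 + 1.079/float64(m)) // bias correction for m >= 128
	fmt.Printf("estimate: %.0f (exact: 10000)\n", alpha*m*m/sum)
}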
MinHash
This is a variation of the technique for estimating similarity between two sets as presented by Broder in On the resemblance and containment of documents.
MinHash is a probabilistic algorithm which can be used to cluster or compare documents by splitting the corpus into a bag of words. MinHash returns the approximated similarity ratio of the two bags. The similarity is less accurate for very small bags of words.
Usage
package main

import (
	"fmt"
	"github.com/tylertreat/BoomFilters"
)

func main() {
	bag1 := []string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}
	bag2 := []string{"bill", "alice", "frank", "bob", "sara"}

	fmt.Println("similarity", boom.MinHash(bag1, bag2))
}
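The underlying idea is easy to sketch: for each of k salted hash functions, keep the minimum hash value over each bag, then compare the two signatures position by position. A simplified illustration with salted FNV hashes standing in for a hash family, not the library's code:

package main

import (
	"fmt"
	"hash/fnv"
)

// minSignature computes a k-element MinHash signature: position i holds
// the minimum of the i-th salted hash over all words in the bag.
func minSignature(bag []string, k int) []uint64 {
	sig := make([]uint64, k)
	for i := range sig {
		sig[i] = ^uint64(0)
		for _, s := range bag {
			h := fnv.New64a()
			h.Write([]byte{byte(i)}) // salt per hash function
			h.Write([]byte(s))
			if v := h.Sum64(); v < sig[i] {
				sig[i] = v
			}
		}
	}
	return sig
}

func main() {
	k := 128
	a := minSignature([]string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}, k)
	b := minSignature([]string{"bill", "alice", "frank", "bob", "sara"}, k)

	// The fraction of agreeing positions approximates the Jaccard
	// similarity of the two bags (true value here: 5/7).
	matches := 0
	for i := range a {
		if a[i] == b[i] {
			matches++
		}
	}
	fmt.Printf("similarity ~ %.2f\n", float64(matches)/float64(k))
}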
References
- Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters
- Scalable Bloom Filters
- The Opposite of a Bloom Filter
- Benchmarking Bloom Filters and Hash Functions in Go
- Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol
- An Improved Data Stream Summary: The Count-Min Sketch and its Applications
- HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm
- Package hyperloglog
- On the resemblance and containment of documents
- Cuckoo Filter: Practically Better Than Bloom