
dgraph-io/ristretto

A high performance memory-bound Go cache


Top Related Projects

A cache library for Go with zero GC overhead.

Efficient cache for gigabytes of data written in Go.

Fast thread-safe inmemory cache for big number of entries in Go. Minimizes GC overhead

An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.


☔️ A complete Go cache library that brings you multiple ways of managing your caches

Quick Overview

Ristretto is a high-performance memory-bound Go cache library. It provides a concurrent, thread-safe cache implementation with automatic eviction of less frequently used items. Ristretto is designed to be fast, efficient, and easy to use in Go applications.

Pros

  • High performance and low latency, optimized for concurrent access
  • Automatic item eviction based on frequency and recency
  • Thread-safe implementation, suitable for use in concurrent Go programs
  • Customizable cache size and eviction policies

Cons

  • Limited to in-memory caching, not suitable for distributed caching scenarios
  • May require fine-tuning of parameters for optimal performance in specific use cases
  • Lacks some advanced features found in more complex caching solutions

Code Examples

  1. Creating a new cache:
import "github.com/dgraph-io/ristretto"

cache, err := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,     // number of keys to track frequency of (10M).
    MaxCost:     1 << 30, // maximum cost of cache (1GB).
    BufferItems: 64,      // number of keys per Get buffer.
})
if err != nil {
    panic(err)
}
  2. Setting and getting values:
// Set a value in the cache with a cost of 1
cache.Set("key", "value", 1)

// Wait for the value to pass through buffers,
// otherwise the Get below may miss
cache.Wait()

// Get a value from the cache
value, found := cache.Get("key")
if found {
    fmt.Println(value)
}
  3. Using the cache with expiration:
import "time"

// Set a value with a 5-minute TTL
cache.SetWithTTL("key", "value", 1, 5*time.Minute)

// Wait for the value to pass through buffers
cache.Wait()

// Get the value before expiration
value, found := cache.Get("key")
if found {
    fmt.Println(value)
}

// Wait for expiration
time.Sleep(6 * time.Minute)

// Try to get the expired value
value, found = cache.Get("key")
if !found {
    fmt.Println("Key expired")
}

Getting Started

To use Ristretto in your Go project, follow these steps:

  1. Install the library:

    go get github.com/dgraph-io/ristretto
    
  2. Import the library in your Go code:

    import "github.com/dgraph-io/ristretto"
    
  3. Create a new cache instance:

    cache, err := ristretto.NewCache(&ristretto.Config{
        NumCounters: 1e7,
        MaxCost:     1 << 30,
        BufferItems: 64,
    })
    if err != nil {
        panic(err)
    }
    
  4. Use the cache in your application:

    cache.Set("key", "value", 1)
    cache.Wait() // wait for buffered Sets to be applied
    value, found := cache.Get("key")
    

Competitor Comparisons

A cache library for Go with zero GC overhead.

Pros of Freecache

  • Simpler implementation, making it easier to understand and maintain
  • Lower memory usage for small caches (< 100MB)
  • Faster for read-heavy workloads with infrequent writes

Cons of Freecache

  • Less efficient for larger caches (> 100MB)
  • No support for automatic eviction based on item cost or value
  • Limited configurability compared to Ristretto

Code Comparison

Freecache:

cache := freecache.NewCache(100 * 1024 * 1024)
cache.Set([]byte("key"), []byte("value"), 60)
value, err := cache.Get([]byte("key"))

Ristretto:

cache, _ := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,
    MaxCost:     1 << 30,
    BufferItems: 64,
})
cache.Set("key", "value", 1)
value, found := cache.Get("key")

Both Freecache and Ristretto are high-performance Go caching libraries, but they have different strengths. Freecache is simpler and performs well for smaller caches, while Ristretto offers more advanced features and better scalability for larger caches. The choice between them depends on the specific requirements of your project, such as cache size, read/write patterns, and desired configurability.

Efficient cache for gigabytes of data written in Go.

Pros of Bigcache

  • Simpler implementation, easier to understand and integrate
  • Better performance for small payloads (< 1KB)
  • Lower memory overhead for small caches

Cons of Bigcache

  • Less efficient for larger payloads
  • Lacks advanced features like automatic eviction policies
  • May have higher CPU usage due to frequent garbage collection

Code Comparison

Bigcache:

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
cache.Set("key", []byte("value"))
entry, _ := cache.Get("key")

Ristretto:

cache, _ := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,     // number of keys to track frequency of (10M).
    MaxCost:     1 << 30, // maximum cost of cache (1GB).
    BufferItems: 64,      // number of keys per Get buffer.
})
cache.Set("key", "value", 1)
value, found := cache.Get("key")

Both Bigcache and Ristretto are Go-based caching libraries, but they have different strengths and use cases. Bigcache is simpler and performs well for small payloads, while Ristretto offers more advanced features and better efficiency for larger payloads. The choice between them depends on the specific requirements of your project, such as payload size, cache size, and desired features.

Fast thread-safe inmemory cache for big number of entries in Go. Minimizes GC overhead

Pros of fastcache

  • Optimized for high-performance scenarios with large cache sizes
  • Efficient memory usage through byte slices instead of interface{}
  • Designed for concurrent access without locks

Cons of fastcache

  • Limited feature set compared to Ristretto
  • Less flexible in terms of eviction policies
  • May not be as suitable for smaller cache sizes or general-purpose use cases

Code Comparison

fastcache:

cache := fastcache.New(100 * 1024 * 1024) // 100MB cache
cache.Set([]byte("key"), []byte("value"))
value := cache.Get(nil, []byte("key"))

Ristretto:

cache, _ := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,     // number of keys to track frequency of (10M)
    MaxCost:     1 << 30, // maximum cost of cache (1GB)
    BufferItems: 64,      // number of keys per Get buffer
})
cache.Set("key", "value", 1)
value, found := cache.Get("key")

Summary

fastcache is optimized for high-performance scenarios with large cache sizes, while Ristretto offers more features and flexibility. fastcache uses byte slices for efficient memory usage, whereas Ristretto supports interface{} values. Ristretto provides more advanced eviction policies and is generally more suitable for a wider range of use cases, while fastcache excels in specific high-performance scenarios.

An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

Pros of go-cache

  • Simpler API and easier to use for basic caching needs
  • Built-in support for expiration and cleanup of expired items
  • Lightweight with minimal dependencies

Cons of go-cache

  • Less performant for high-concurrency scenarios
  • Lacks advanced features like automatic eviction policies and size-based limits
  • No built-in support for metrics or observability

Code Comparison

go-cache:

c := cache.New(5*time.Minute, 10*time.Minute)
c.Set("foo", "bar", cache.DefaultExpiration)
foo, found := c.Get("foo")

Ristretto:

cache, _ := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,     // number of keys to track frequency of (10M).
    MaxCost:     1 << 30, // maximum cost of cache (1GB).
    BufferItems: 64,      // number of keys per Get buffer.
})
cache.Set("foo", "bar", 1)
value, found := cache.Get("foo")

Summary

go-cache is simpler and easier to use for basic caching needs, while Ristretto offers more advanced features and better performance for high-concurrency scenarios. go-cache provides built-in expiration and cleanup, whereas Ristretto focuses on efficient memory usage and advanced eviction policies. The choice between the two depends on the specific requirements of your project, such as simplicity vs. performance and advanced features.

groupcache is a caching and cache-filling library, intended as a replacement for memcached in many cases.

Pros of groupcache

  • Designed for distributed caching across multiple nodes
  • Supports automatic replication and load balancing
  • Includes a simple, single-file implementation

Cons of groupcache

  • Less actively maintained (last commit in 2020)
  • Limited configuration options and customization
  • Lacks advanced features like TTL and memory management

Code Comparison

groupcache:

getter := func(ctx Context, key string, dest Sink) error {
    // Fetch data and populate dest
    return nil
}
group := groupcache.NewGroup("myCache", 64<<20, groupcache.GetterFunc(getter))
var data []byte
group.Get(ctx, "key", groupcache.AllocatingByteSliceSink(&data))

Ristretto:

cache, _ := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,
    MaxCost:     1 << 30,
    BufferItems: 64,
})
cache.Set("key", "value", 1)
value, found := cache.Get("key")

Summary

groupcache is better suited for distributed caching scenarios with multiple nodes, while Ristretto offers more advanced features and fine-grained control over cache behavior. groupcache has a simpler API but lacks active maintenance, whereas Ristretto is actively developed and provides more flexibility in configuration. Choose groupcache for distributed setups with minimal configuration, and Ristretto for high-performance, customizable caching within a single application.


☔️ A complete Go cache library that brings you multiple ways of managing your caches

Pros of gocache

  • Supports multiple cache stores (in-memory, Redis, Memcache)
  • Offers a simpler API for basic caching needs
  • Provides built-in marshaling/unmarshaling of complex types

Cons of gocache

  • Less optimized for high-performance scenarios
  • Lacks advanced features like automatic item cost calculation
  • May have higher memory usage for large datasets

Code Comparison

gocache:

cache := store.NewGoCache(gocache.New(5*time.Minute, 10*time.Minute))
err := cache.Set("key", "value", &store.Options{Expiration: 5 * time.Minute})
value, err := cache.Get("key")

Ristretto:

cache, err := ristretto.NewCache(&ristretto.Config{NumCounters: 1e7, MaxCost: 1<<30, BufferItems: 64})
cache.Set("key", "value", 1)
value, found := cache.Get("key")

Key Differences

  • Ristretto focuses on high performance and memory efficiency
  • gocache offers more flexibility with multiple backend options
  • Ristretto provides automatic cost calculation and admission policy
  • gocache has a simpler API for basic caching needs
  • Ristretto is better suited for large-scale, high-throughput applications

Both libraries serve different use cases, with gocache being more versatile for general-purpose caching and Ristretto excelling in performance-critical scenarios.


README

Ristretto


Ristretto is a fast, concurrent cache library built with a focus on performance and correctness.

The motivation to build Ristretto comes from the need for a contention-free cache in Dgraph.

Features

  • High Hit Ratios - with our unique admission/eviction policy pairing, Ristretto's performance is best in class.
    • Eviction: SampledLFU - on par with exact LRU and better performance on Search and Database traces.
    • Admission: TinyLFU - extra performance with little memory overhead (12 bits per counter).
  • Fast Throughput - we use a variety of techniques for managing contention and the result is excellent throughput.
  • Cost-Based Eviction - any large new item deemed valuable can evict multiple smaller items (cost could be anything).
  • Fully Concurrent - you can use as many goroutines as you want with little throughput degradation.
  • Metrics - optional performance metrics for throughput, hit ratios, and other stats.
  • Simple API - just figure out your ideal Config values and you're off and running.

Status

Ristretto is production-ready. See Projects using Ristretto.

Usage

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/v2"
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config[string, string]{
		NumCounters: 1e7,     // number of keys to track frequency of (10M).
		MaxCost:     1 << 30, // maximum cost of cache (1GB).
		BufferItems: 64,      // number of keys per Get buffer.
	})
	if err != nil {
		panic(err)
	}
	defer cache.Close()

	// set a value with a cost of 1
	cache.Set("key", "value", 1)

	// wait for value to pass through buffers
	cache.Wait()

	// get value from cache
	value, found := cache.Get("key")
	if !found {
		panic("missing value")
	}
	fmt.Println(value)

	// del value from cache
	cache.Del("key")
}

Benchmarks

The benchmarks can be found in https://github.com/dgraph-io/benchmarks/tree/master/cachebench/ristretto.

Hit Ratios for Search

This trace is described as "disk read accesses initiated by a large commercial search engine in response to various web search requests."

Hit Ratio for Database

This trace is described as "a database server running at a commercial site running an ERP application on top of a commercial database."

Hit Ratio for Looping

This trace demonstrates a looping access pattern.

Hit Ratio for CODASYL

This trace is described as "references to a CODASYL database for a one hour period."

Throughput for Mixed Workload

Throughput for Read Workload

Throughput for Write Workload

Projects Using Ristretto

Below is a list of known projects that use Ristretto:

  • Badger - Embeddable key-value DB in Go
  • Dgraph - Horizontally scalable and distributed GraphQL database with a graph backend

FAQ

How are you achieving this performance? What shortcuts are you taking?

We go into detail in the Ristretto blog post, but in short: our throughput performance can be attributed to a mix of batching and eventual consistency. Our hit ratio performance is mostly due to an excellent admission policy and SampledLFU eviction policy.

As for "shortcuts," the only thing Ristretto does that could be construed as one is dropping some Set calls. That means a Set call for a new item (updates are guaranteed) isn't guaranteed to make it into the cache. The new item can be dropped at two points: when passing through the Set buffer or when passing through the admission policy. However, this barely affects hit ratios, because we expect the most popular items to be Set multiple times and to eventually make it into the cache.

Is Ristretto distributed?

No, it's just like any other Go library that you can import into your project and use in a single process.