
coocood/freecache

A cache library for Go with zero GC overhead.


Top Related Projects

  • concurrent-map: a thread-safe concurrent map for Go
  • BigCache: efficient cache for gigabytes of data, written in Go
  • go-cache: an in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications
  • gcache: an in-memory cache library for Go that supports multiple eviction policies: LRU, LFU, ARC
  • Ristretto: a high performance memory-bound Go cache

Quick Overview

The freecache project is a fast, concurrent, in-memory cache library written in Go. It is designed to provide a simple and efficient way to cache data in memory, with a focus on performance and low memory usage.

Pros

  • High Performance: The library is designed to be highly performant, with low latency and high throughput.
  • Concurrency: The cache supports concurrent access, allowing multiple goroutines to access and modify the cache simultaneously (see the sketch after this list).
  • Low Memory Usage: The cache is optimized for low memory usage, making it suitable for use in resource-constrained environments.
  • Simple API: The library provides a simple and intuitive API, making it easy to integrate into existing Go projects.
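
As a minimal sketch of concurrent use (not taken from the project's docs; the goroutine count and cache size are arbitrary assumptions), multiple goroutines can safely share one cache:

package main

import (
    "fmt"
    "sync"

    "github.com/coocood/freecache"
)

func main() {
    cache := freecache.NewCache(10 * 1024 * 1024) // 10MB, preallocated
    var wg sync.WaitGroup
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := []byte(fmt.Sprintf("key-%d", n))
            cache.Set(key, []byte("value"), 0) // 0 means no expiration
            if got, err := cache.Get(key); err == nil {
                fmt.Printf("%s = %s\n", key, got)
            }
        }(i)
    }
    wg.Wait()
}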

Cons

  • Limited Features: The library is focused on providing a basic in-memory cache, and may lack some of the more advanced features found in other cache libraries.
  • Lack of Persistence: The cache is in-memory only, and does not provide any built-in persistence mechanism. Data stored in the cache will be lost when the application is restarted.
  • Limited Eviction Policies: The library only supports an approximate ("nearly") LRU eviction policy, which may not be suitable for all use cases.
  • Limited Documentation: The project's documentation could be more comprehensive, making it harder for new users to get started.

Code Examples

Here are a few examples of how to use the freecache library:

// Initialize a new cache with a maximum size of 100MB
cache := freecache.NewCache(100 * 1024 * 1024)

// Set a key-value pair in the cache
cache.Set([]byte("key"), []byte("value"), 60) // 60 seconds expiration time

// Get a value from the cache
value, err := cache.Get([]byte("key"))
if err != nil {
    fmt.Println("Error getting value:", err)
} else {
    fmt.Println("Value:", string(value))
}

// Delete a key-value pair from the cache
cache.Del([]byte("key"))
// Iterate over all entries in the cache
it := cache.NewIterator()
for entry := it.Next(); entry != nil; entry = it.Next() {
    fmt.Println("Key:", string(entry.Key), "Value:", string(entry.Value))
}

Getting Started

To get started with the freecache library, follow these steps:

  1. Install the library using Go's package manager:
go get github.com/coocood/freecache
  2. Import the library in your Go code:
import "github.com/coocood/freecache"
  3. Create a new cache instance with a specified maximum size:
cache := freecache.NewCache(100 * 1024 * 1024) // 100MB cache size
  4. Use the cache's methods to store, retrieve, and manage data:
cache.Set([]byte("key"), []byte("value"), 60) // 60 seconds expiration time
value, err := cache.Get([]byte("key"))
cache.Del([]byte("key"))

That's the basic setup to get started with the freecache library. You can refer to the project's documentation for more advanced usage and configuration options.

Competitor Comparisons

concurrent-map: a thread-safe concurrent map for Go

Pros of concurrent-map

  • Provides a thread-safe implementation of a hash map, which can be useful in concurrent environments.
  • Supports common map operations like Get, Set, Delete, and Iterate.
  • Allows a custom sharding (hash) function to be supplied.

Cons of concurrent-map

  • May have higher overhead compared to a simple in-memory map due to the concurrency mechanisms.
  • Doesn't provide the same level of performance optimization as a specialized cache like FreeCache.

Code Comparison

concurrent-map:

// import cmap "github.com/orcaman/concurrent-map"
m := cmap.New()
m.Set("key", "value")
value, ok := m.Get("key")

FreeCache:

cache := freecache.NewCache(100 * 1024 * 1024) // 100MB
cache.Set([]byte("key"), []byte("value"), 60) // 60 seconds TTL
value, err := cache.Get([]byte("key"))

BigCache: efficient cache for gigabytes of data, written in Go

Pros of BigCache

  • BigCache supports expiration of cache entries, which can be useful for caching data that has a limited lifespan.
  • BigCache provides a more comprehensive set of features, including support for batch operations and custom eviction policies.
  • BigCache has a larger community and more active development compared to FreeCache.

Cons of BigCache

  • BigCache has a larger memory footprint compared to FreeCache, which may be a concern for applications with tight memory constraints.
  • BigCache has a more complex API compared to FreeCache, which may make it more difficult to integrate into some projects.
  • BigCache may have a higher performance overhead compared to FreeCache, depending on the specific use case.

Code Comparison

FreeCache:

cache := freecache.NewCache(100 * 1024 * 1024) // 100MB
cache.Set([]byte("key1"), []byte("value1"), 60) // 60 seconds TTL
value, err := cache.Get([]byte("key1"))
if err != nil {
    fmt.Println("Error getting value:", err)
} else {
    fmt.Println("Value:", string(value))
}

BigCache:

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(5 * time.Minute))
cache.Set("key1", []byte("value1"))
value, _ := cache.Get("key1")
fmt.Println("Value:", string(value))

go-cache: an in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications

Pros of go-cache

  • go-cache provides a simple and easy-to-use in-memory cache implementation, making it suitable for small to medium-sized applications.
  • The library offers expiration and eviction policies, allowing for more control over cache management.
  • go-cache has a smaller codebase and is generally more lightweight compared to freecache.

Cons of go-cache

  • go-cache is not as performant as freecache, especially for large datasets or high-concurrency scenarios.
  • The library does not provide the same level of memory optimization and low-level control as freecache.

Code Comparison

go-cache:

// import goCache "github.com/patrickmn/go-cache"
cache := goCache.New(5*time.Minute, 10*time.Minute) // default expiration 5m, cleanup interval 10m
cache.Set("key", "value", goCache.DefaultExpiration)
value, found := cache.Get("key")

freecache:

cache := freecache.NewCache(100 * 1024 * 1024) // 100MB
cache.Set([]byte("key"), []byte("value"), 60) // 60 seconds TTL
value, err := cache.Get([]byte("key"))

gcache: an in-memory cache library for Go that supports multiple eviction policies: LRU, LFU, ARC

Pros of bluele/gcache

  • Concurrency Support: bluele/gcache provides built-in support for concurrent access to the cache, allowing multiple goroutines to safely access and modify the cache without race conditions.
  • Expiration Handling: bluele/gcache offers automatic expiration of cache entries, ensuring that stale data is automatically removed from the cache.
  • Eviction Policies: bluele/gcache supports various eviction policies, such as LRU (Least Recently Used) and LFU (Least Frequently Used), allowing for more fine-grained control over cache management.

Cons of bluele/gcache

  • Complexity: bluele/gcache has a more complex API and feature set compared to coocood/freecache, which may make it less suitable for simple use cases.
  • Performance: While bluele/gcache offers more features, it may have slightly lower performance compared to the more lightweight coocood/freecache, especially for simple use cases.
  • Dependency: bluele/gcache has a dependency on the golang.org/x/sync package, which may be a concern for some users who prefer to minimize external dependencies.

Code Comparison

coocood/freecache:

cache := freecache.NewCache(100 * 1024 * 1024) // 100MB
cache.Set([]byte("key"), []byte("value"), 60) // 60 seconds TTL
value, err := cache.Get([]byte("key"))

bluele/gcache:

cache := gcache.New(100).LRU().Build()
cache.SetWithExpire("key", "value", 60*time.Second) // Set has no TTL argument; SetWithExpire takes one
value, err := cache.Get("key")
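
The same builder selects the other eviction policies named above; a brief sketch, assuming the same gcache import (cache sizes are illustrative):

lfuCache := gcache.New(100).LFU().Build() // Least Frequently Used eviction
arcCache := gcache.New(100).ARC().Build() // Adaptive Replacement Cache eviction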

Ristretto: a high performance memory-bound Go cache

Pros of Ristretto

  • Ristretto provides a more comprehensive set of features, including TTL (Time-to-Live) expiration, cost-based eviction with a TinyLFU admission policy, and concurrent access.
  • Ristretto has a more flexible API, allowing for custom eviction policies and cache management.
  • Ristretto is designed to be more scalable and performant, with a focus on reducing memory usage and improving cache hit rates.

Cons of Ristretto

  • Ristretto has a larger codebase and may have a steeper learning curve compared to FreeCache.
  • Ristretto may have a higher overhead due to its more advanced features, which could impact performance in certain use cases.
  • Ristretto is primarily developed and maintained by the Dgraph team, while FreeCache has a larger and more active community.

Code Comparison

FreeCache:

cache := freecache.NewCache(100 * 1024 * 1024) // 100MB
cache.Set([]byte("key1"), []byte("value1"), 60) // 60 seconds TTL
value, err := cache.Get([]byte("key1"))

Ristretto:

cache, _ := ristretto.NewCache(&ristretto.Config{
    NumCounters: 1e7,     // number of keys to track frequency of (10M).
    MaxCost:     1 << 30, // maximum cost of cache (1GB).
    BufferItems: 64,      // number of keys per Get buffer.
})
cache.Set("key1", "value1", 1)
value, found := cache.Get("key1")


README

FreeCache - A cache library for Go with zero GC overhead and high concurrent performance.

Long-lived objects in memory introduce expensive GC overhead. With FreeCache, you can cache an unlimited number of objects in memory without increased latency or degraded throughput.


Features

  • Store hundreds of millions of entries
  • Zero GC overhead
  • High concurrent thread-safe access
  • Pure Go implementation
  • Expiration support
  • Nearly LRU algorithm
  • Strictly limited memory usage
  • Comes with a toy server that supports a few basic Redis commands, with pipelining
  • Iterator support

Performance

Here are benchmark results compared to the built-in map: Set is about 2x faster than the built-in map, while Get is about half as fast. Since this is a single-threaded benchmark, in a multi-threaded environment FreeCache should be many times faster than a built-in map protected by a single lock.

BenchmarkCacheSet        3000000               446 ns/op
BenchmarkMapSet          2000000               861 ns/op
BenchmarkCacheGet        3000000               517 ns/op
BenchmarkMapGet         10000000               212 ns/op
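
For reference, a comparable single-threaded Set benchmark can be written with Go's standard testing package. This is only a sketch under assumed sizes and key encoding, not the project's actual benchmark source:

package freecache_test

import (
    "encoding/binary"
    "testing"

    "github.com/coocood/freecache"
)

func BenchmarkCacheSet(b *testing.B) {
    cache := freecache.NewCache(256 * 1024 * 1024) // 256MB, preallocated up front
    var key [8]byte
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        binary.LittleEndian.PutUint64(key[:], uint64(i)) // derive a unique key per iteration
        cache.Set(key[:], key[:], 0)                     // 0 means no expiration
    }
}

Run it with go test -bench BenchmarkCacheSet.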

Example Usage

package main

import (
    "fmt"
    "runtime/debug"

    "github.com/coocood/freecache"
)

func main() {
    // Cache size is in bytes: 1024 * 1024 is one megabyte, so 100 * 1024 * 1024 is 100MB.
    cacheSize := 100 * 1024 * 1024
    cache := freecache.NewCache(cacheSize)
    debug.SetGCPercent(20)
    key := []byte("abc")
    val := []byte("def")
    expire := 60 // expire in 60 seconds
    cache.Set(key, val, expire)
    got, err := cache.Get(key)
    if err != nil {
        fmt.Println(err)
    } else {
        fmt.Printf("%s\n", got)
    }
    affected := cache.Del(key)
    fmt.Println("deleted key ", affected)
    fmt.Println("entry count ", cache.EntryCount())
}

Notice

  • Memory is preallocated.
  • If you allocate a large amount of memory, you may need to set debug.SetGCPercent() to a much lower percentage to get a normal GC frequency.
  • If you set a key to expire in X seconds, e.g. using cache.Set(key, val, X), the effective cache duration will be within the range (X-1, X] seconds. This is because the sub-second part of the current time is ignored when calculating the expiration: for example, if the current time is 8:15:01.800 (800 milliseconds past 8:15:01), the actual duration will be X - 800ms.

How it is done

FreeCache avoids GC overhead by reducing the number of pointers. No matter how many entries are stored in it, there are only 512 pointers. The data set is sharded into 256 segments by the hash value of the key. Each segment has only two pointers: one is the ring buffer that stores keys and values, the other is the index slice used to look up an entry. Each segment has its own lock, so it supports highly concurrent access.
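
A simplified sketch of that layout (illustrative only, not FreeCache's actual source; names and types are assumptions):

package main

import (
    "fmt"
    "hash/fnv"
    "sync"
)

const segmentCount = 256 // power of two, so a bit mask can pick a segment

// Each segment contributes exactly two pointers (the two slice headers),
// so the whole cache exposes only 2 * 256 = 512 pointers to the GC.
type segment struct {
    mu    sync.Mutex
    ring  []byte   // ring buffer storing keys and values back to back
    index []uint32 // offsets into the ring buffer, used to look up entries
}

type cache struct {
    segments [segmentCount]segment
}

func main() {
    var c cache
    key := []byte("abc")
    h := fnv.New64a()
    h.Write(key)
    idx := h.Sum64() & (segmentCount - 1) // hash picks one of 256 segments
    seg := &c.segments[idx]
    seg.mu.Lock() // segments lock independently, enabling concurrent access
    defer seg.mu.Unlock()
    fmt.Printf("key %q maps to segment %d\n", key, idx)
}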

TODO

  • Support dump to file and load from file.
  • Support resizing the cache at runtime.

License

The MIT License