
bluele/gcache

An in-memory cache library for golang. It supports multiple eviction policies: LRU, LFU, ARC


Top Related Projects

An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

Efficient cache for gigabytes of data written in Go.

A cache library for Go with zero GC overhead.

Go Memcached client library #golang

Quick Overview

The gcache library is a general-purpose cache library for Go, providing a simple and efficient way to cache data in memory. It supports several eviction policies, including LRU (Least Recently Used), LFU (Least Frequently Used), and ARC (Adaptive Replacement Cache), as well as time-to-live (TTL) expiration, allowing developers to choose the most appropriate behavior for their use case.

Pros

  • Flexible Eviction Policies: The library supports multiple cache eviction policies, including LRU, LFU, and ARC, plus optional TTL expiration, allowing developers to choose the most suitable policy for their application.
  • Concurrent Safe: The cache implementation is thread-safe, making it suitable for use in concurrent environments.
  • Customizable Expiration: Developers can set a custom expiration time for each cached item, ensuring that stale data is automatically removed from the cache.
  • Simple API: The library provides a straightforward and easy-to-use API, making it simple to integrate into existing projects.
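
The thread-safety noted above comes down to guarding the item map with a mutex. Below is a minimal stdlib-only sketch of that pattern; the `SafeCache` type and its methods are illustrative, not part of gcache's API:

```go
package main

import (
	"strconv"
	"sync"
)

// SafeCache is a minimal mutex-guarded map, illustrating the locking
// pattern a goroutine-safe cache like gcache relies on internally.
type SafeCache struct {
	mu    sync.RWMutex
	items map[string]interface{}
}

func NewSafeCache() *SafeCache {
	return &SafeCache{items: make(map[string]interface{})}
}

// Set stores a value under the write lock.
func (c *SafeCache) Set(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = value
}

// Get reads under the read lock, so many readers can proceed in parallel.
func (c *SafeCache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[key]
	return v, ok
}

// Len reports the number of cached entries.
func (c *SafeCache) Len() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return len(c.items)
}

// HammerConcurrently writes n distinct keys from n goroutines at once;
// without the mutex this would crash with a concurrent map write.
func HammerConcurrently(c *SafeCache, n int) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.Set("k"+strconv.Itoa(i), i)
		}(i)
	}
	wg.Wait()
}
```

An RWMutex is a common choice here because cache reads usually far outnumber writes.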

Cons

  • Limited Persistence: The gcache library is an in-memory cache, and it does not provide any built-in persistence mechanism. If data persistence is required, developers will need to implement their own solution or use a separate storage system.
  • Limited Clustering/Distributed Support: The library is designed for single-node use and does not have built-in support for clustering or distributed caching scenarios.
  • Lack of Advanced Features: Compared to some other cache libraries, gcache may lack more advanced features, such as batch operations, cache invalidation, or integration with external monitoring/management tools.
  • Potential Performance Overhead: Depending on the cache size and usage patterns, the overhead of the cache management and eviction policies may impact the overall performance of the application.

Code Examples

Here are a few examples of how to use the gcache library:

  1. Basic cache usage:
cache := gcache.New(10).LRU().Build()
cache.Set("key", "value")
value, _ := cache.Get("key")
fmt.Println(value) // Output: "value"
  2. Cache with a time-to-live (TTL):
cache := gcache.New(10).LRU().Expiration(time.Minute).Build()
cache.Set("key", "value")
value, _ := cache.Get("key")
fmt.Println(value) // Output: "value"
time.Sleep(61 * time.Second)
_, err := cache.Get("key")
fmt.Println(err != nil) // Output: true (the entry has expired)
  3. Cache with least frequently used (LFU) eviction:
cache := gcache.New(2).LFU().Build()
cache.Set("key1", "value1")
cache.Set("key2", "value2")
cache.Get("key1")           // access "key1" to raise its frequency
cache.Set("key3", "value3") // evicts "key2", the least frequently used entry

Getting Started

To use the gcache library in your Go project, follow these steps:

  1. Install the library using go get:
go get github.com/bluele/gcache
  2. Import the library in your Go file:
import "github.com/bluele/gcache"
  3. Create a new cache instance with the desired configuration:
cache := gcache.New(100).LRU().Expiration(time.Minute).Build()
  4. Use the cache to store and retrieve data:
cache.Set("key", "value")
value, _ := cache.Get("key")

Competitor Comparisons

An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

Pros of go-cache

  • Simplicity: go-cache is a lightweight and straightforward cache implementation, making it easy to integrate into projects.
  • Expiration: go-cache supports expiration of cache items, allowing for efficient management of cached data.
  • Concurrent Access: go-cache is thread-safe, enabling concurrent access to the cache without race conditions.

Cons of go-cache

  • Limited Features: go-cache offers only TTL-based expiration; it lacks the pluggable eviction policies (LRU, LFU, ARC) that gcache provides.
  • No Distributed Cache: go-cache is a local, in-process cache; gcache is likewise single-node, so neither provides distributed caching.

Code Comparison

go-cache:

import goCache "github.com/patrickmn/go-cache"

cache := goCache.New(5*time.Minute, 10*time.Minute)
cache.Set("key", "value", goCache.DefaultExpiration)
value, found := cache.Get("key")

gcache:

cache := gcache.New(100).LRU().Expiration(5 * time.Minute).Build()
cache.Set("key", "value")
value, err := cache.Get("key")

Efficient cache for gigabytes of data written in Go.

Pros of BigCache

  • BigCache is designed to be a high-performance, in-memory cache that can handle large amounts of data.
  • BigCache provides a simple and easy-to-use API for managing cache entries.
  • BigCache evicts entries after a configurable life window, dropping the oldest entries when space runs out; it avoids per-entry metadata such as LRU/LFU bookkeeping to keep GC overhead low.

Cons of BigCache

  • BigCache may have a higher memory footprint compared to GCache, as it uses a more complex data structure to manage cache entries.
  • BigCache may have a slightly higher overhead for small cache operations, as it needs to maintain the cache metadata.
  • BigCache does not provide the same level of customization and flexibility as GCache, which allows for more fine-grained control over cache behavior.

Code Comparison

GCache:

cache := gcache.New(10).LRU().Build()
cache.Set("key", "value")
value, _ := cache.Get("key")

BigCache:

cache, _ := bigcache.NewBigCache(bigcache.DefaultConfig(10 * time.Minute))
cache.Set("key", []byte("value"))
value, _ := cache.Get("key")

The main difference in the code is the way the cache is configured and the API for setting and retrieving values. GCache provides a more fluent API for configuring the cache, while BigCache uses a more traditional configuration object.

A cache library for Go with zero GC overhead.

Pros of freecache

  • Faster Performance: freecache stores entries in preallocated byte slices to sidestep GC overhead, and benchmarks often show it outperforming gcache on raw read and write throughput.
  • Simpler API: freecache has a more straightforward and easier-to-use API compared to gcache, which may be more suitable for some use cases.
  • Smaller Footprint: freecache has a smaller memory footprint than gcache, making it a better choice for applications with limited resources.

Cons of freecache

  • Fewer Features: freecache has a more limited set of features compared to gcache, which may lack some functionality required by more complex applications.
  • Less Mature: freecache is a relatively newer project compared to gcache, and may not have the same level of community support and documentation.
  • Limited Concurrency: freecache's concurrency model may not be as robust as gcache's, which could be a concern for highly concurrent applications.

Code Comparison

gcache:

cache := gcache.New(10).LRU().Build()
cache.SetWithExpire("key", "value", 10*time.Second)
value, _ := cache.Get("key")

freecache:

cache := freecache.NewCache(100 * 1024 * 1024) // 100MB
cache.Set([]byte("key"), []byte("value"), 10) // TTL is given in seconds
value, _ := cache.Get([]byte("key"))

The main differences in the code are the initialization and the use of byte slices instead of strings for the keys and values in freecache.

Go Memcached client library #golang

Pros of gomemcache

  • Provides a simple and lightweight API for interacting with Memcached, a popular distributed memory caching system.
  • Supports common Memcached operations such as Get, Set, Add, Replace, and Delete.
  • Includes support for Memcached's expiration and flags features.

Cons of gomemcache

  • Lacks advanced client features such as automatic failover and built-in retry logic.
  • Does not provide any built-in support for serialization or deserialization of data.
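
Since gomemcache values are raw bytes, callers typically layer their own serialization on top. A stdlib sketch using `encoding/gob`; the `encode`/`decode` helpers and the `User` type are illustrative, not part of gomemcache:

```go
package main

import (
	"bytes"
	"encoding/gob"
)

// User is an example payload to cache.
type User struct {
	ID   int
	Name string
}

// encode serializes a value into the []byte a memcached client expects.
func encode(v interface{}) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// decode restores a value from cached bytes into v, which must be a pointer.
func decode(data []byte, v interface{}) error {
	return gob.NewDecoder(bytes.NewReader(data)).Decode(v)
}
```

The encoded bytes would go into `memcache.Item.Value`; JSON or protobuf work just as well if cross-language access matters.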

Code Comparison

gcache:

cache := gcache.New(10).LRU().Build()
cache.Set("key", "value")
value, _ := cache.Get("key")

gomemcache:

client := memcache.New("localhost:11211")
client.Set(&memcache.Item{
    Key:   "key",
    Value: []byte("value"),
})
item, _ := client.Get("key")


README

GCache


Cache library for golang. It supports expirable Cache, LFU, LRU and ARC.

Features

  • Supports expirable Cache, LFU, LRU and ARC.

  • Goroutine safe.

  • Supports event handlers for evicted, purged, and added entries. (Optional)

  • Automatically loads a value if it doesn't exist. (Optional)

Install

$ go get github.com/bluele/gcache

Example

Manually set a key-value pair.

package main

import (
  "github.com/bluele/gcache"
  "fmt"
)

func main() {
  gc := gcache.New(20).
    LRU().
    Build()
  gc.Set("key", "ok")
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
}
Get: ok

Manually set a key-value pair, with an expiration time.

package main

import (
  "github.com/bluele/gcache"
  "fmt"
  "time"
)

func main() {
  gc := gcache.New(20).
    LRU().
    Build()
  gc.SetWithExpire("key", "ok", time.Second*10)
  value, _ := gc.Get("key")
  fmt.Println("Get:", value)

  // Wait for value to expire
  time.Sleep(time.Second*10)

  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
}
Get: ok
// 10 seconds later, new attempt:
panic: ErrKeyNotFound

Automatically load value

package main

import (
  "github.com/bluele/gcache"
  "fmt"
)

func main() {
  gc := gcache.New(20).
    LRU().
    LoaderFunc(func(key interface{}) (interface{}, error) {
      return "ok", nil
    }).
    Build()
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
}
Get: ok

Automatically load value with expiration

package main

import (
  "fmt"
  "time"

  "github.com/bluele/gcache"
)

func main() {
  var evictCounter, loaderCounter, purgeCounter int
  gc := gcache.New(20).
    LRU().
    LoaderExpireFunc(func(key interface{}) (interface{}, *time.Duration, error) {
      loaderCounter++
      expire := 1 * time.Second
      return "ok", &expire, nil
    }).
    EvictedFunc(func(key, value interface{}) {
      evictCounter++
      fmt.Println("evicted key:", key)
    }).
    PurgeVisitorFunc(func(key, value interface{}) {
      purgeCounter++
      fmt.Println("purged key:", key)
    }).
    Build()
  value, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
  time.Sleep(1 * time.Second)
  value, err = gc.Get("key")
  if err != nil {
    panic(err)
  }
  fmt.Println("Get:", value)
  gc.Purge()
  if loaderCounter != evictCounter+purgeCounter {
    panic("bad")
  }
}
Get: ok
evicted key: key
Get: ok
purged key: key

Cache Algorithm

  • Least-Frequently Used (LFU)

Discards the least frequently used items first.

func main() {
  // size: 10
  gc := gcache.New(10).
    LFU().
    Build()
  gc.Set("key", "value")
}
  • Least Recently Used (LRU)

Discards the least recently used items first.

func main() {
  // size: 10
  gc := gcache.New(10).
    LRU().
    Build()
  gc.Set("key", "value")
}
  • Adaptive Replacement Cache (ARC)

Continuously balances between recency (LRU) and frequency (LFU) to improve the combined hit rate.

detail: http://en.wikipedia.org/wiki/Adaptive_replacement_cache

func main() {
  // size: 10
  gc := gcache.New(10).
    ARC().
    Build()
  gc.Set("key", "value")
}
  • SimpleCache (Default)

SimpleCache has no explicit eviction priority; which entry is evicted depends on the key-value map's iteration order.

func main() {
  // size: 10
  gc := gcache.New(10).Build()
  gc.Set("key", "value")
  v, err := gc.Get("key")
  if err != nil {
    panic(err)
  }
}
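
Go randomizes map iteration order, so "depends on key-value map order" means the evicted entry is effectively arbitrary. A stdlib sketch of evicting whichever key the map yields first; this is an illustration of the idea, not gcache's actual code:

```go
package main

// evictOne removes one entry from the map and returns its key.
// Go's map iteration order is randomized, so which entry is removed
// is unspecified -- the behavior the SimpleCache description implies.
func evictOne(items map[string]int) string {
	for k := range items {
		delete(items, k)
		return k
	}
	return "" // map was empty
}
```

If eviction order matters to your workload, prefer one of the explicit policies (LRU, LFU, ARC) instead.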

Loading Cache

If a LoaderFunc is specified, values are automatically loaded by the cache and stored until either evicted or manually invalidated.

func main() {
  gc := gcache.New(10).
    LRU().
    LoaderFunc(func(key interface{}) (interface{}, error) {
      return "value", nil
    }).
    Build()
  v, _ := gc.Get("key")
  // output: "value"
  fmt.Println(v)
}

GCache coordinates cache fills so that, within a single process, only one load per key runs at a time; the loaded value is then handed to all concurrent callers.
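
This kind of cache-fill coordination can be sketched with the stdlib: concurrent requests for the same missing key share one load rather than each invoking the loader. The `Group` type below is an illustrative single-flight sketch, not gcache's internals:

```go
package main

import (
	"sync"
	"sync/atomic"
	"time"
)

// call tracks one in-flight load whose result is shared by all waiters.
type call struct {
	wg  sync.WaitGroup
	val interface{}
}

// Group deduplicates concurrent loads per key, singleflight-style.
type Group struct {
	mu    sync.Mutex
	calls map[string]*call
}

func NewGroup() *Group {
	return &Group{calls: make(map[string]*call)}
}

// Do runs fn at most once among concurrent callers for the same key;
// every caller receives the single loaded value.
func (g *Group) Do(key string, fn func() interface{}) interface{} {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // someone else is loading; wait and share the result
		return c.val
	}
	c := &call{}
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val = fn() // only this caller pays the cost of the load
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	return c.val
}

// LoadConcurrently fires n goroutines asking for the same key and
// returns how many times the loader actually ran.
func LoadConcurrently(g *Group, n int) int32 {
	var ran int32
	start := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-start // release all goroutines together
			g.Do("key", func() interface{} {
				atomic.AddInt32(&ran, 1)
				time.Sleep(100 * time.Millisecond) // simulate a slow backend
				return "ok"
			})
		}()
	}
	close(start)
	wg.Wait()
	return ran
}
```

Go's x/sync module ships a production-grade version of this pattern as `singleflight.Group`.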

Expirable cache

func main() {
  // LRU cache, size: 10, expiration: after an hour
  gc := gcache.New(10).
    LRU().
    Expiration(time.Hour).
    Build()
}

Event handlers

Evicted handler

Event handler fired when an entry is evicted.

func main() {
  gc := gcache.New(2).
    EvictedFunc(func(key, value interface{}) {
      fmt.Println("evicted key:", key)
    }).
    Build()
  for i := 0; i < 3; i++ {
    gc.Set(i, i*i)
  }
}
evicted key: 0

Added handler

Event handler fired when an entry is added.

func main() {
  gc := gcache.New(2).
    AddedFunc(func(key, value interface{}) {
      fmt.Println("added key:", key)
    }).
    Build()
  for i := 0; i < 3; i++ {
    gc.Set(i, i*i)
  }
}
added key: 0
added key: 1
added key: 2

Author

Jun Kimura