fastcache
Fast, thread-safe in-memory cache for a big number of entries in Go. Minimizes GC overhead.
Top Related Projects
- InfluxDB: Scalable datastore for metrics, events, and real-time analytics.
- Prometheus: The Prometheus monitoring system and time series database.
- Grafana: The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
- TimescaleDB: An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
- Dgraph: The high-performance database for modern applications.
- Couchbase Lite Android: Lightweight, embedded, syncable NoSQL database engine for Android.
Quick Overview
VictoriaMetrics/fastcache is a high-performance in-memory cache library written in Go. It provides a simple and efficient way to cache data in memory, with a focus on low latency and high throughput.
Pros
- High Performance: The library is designed to be extremely fast, with low latency and high throughput.
- Simple API: The API is straightforward and easy to use, making it simple to integrate into existing projects.
- Thread-Safe: The cache is thread-safe, allowing for concurrent access without the need for additional synchronization.
- Automatic Eviction: Old entries are automatically evicted when the cache reaches the maximum size set at creation, so no manual cleanup is required.
Cons
- Limited Persistence: The cache lives in memory; data is lost on restart unless it is explicitly saved to a file with SaveToFile and restored with LoadFromFile.
- Limited Scalability: The cache is bounded by a single node's RAM; datasets larger than available memory cannot be cached, and there is no built-in sharding across machines.
- Lack of Distributed Support: The library is designed for single-node use, and does not provide any built-in support for distributed caching or clustering.
- Limited Monitoring and Observability: The library does not have extensive monitoring or observability features, which may make it more difficult to integrate into larger, more complex systems.
Code Examples
Here are a few examples of how to use the VictoriaMetrics/fastcache library:
// Simple cache usage
cache := fastcache.New(1024 * 1024) // 1MB requested; fastcache's minimum capacity is 32MB
cache.Set([]byte("key"), []byte("value"))
value, found := cache.HasGet(nil, []byte("key"))
if found {
	fmt.Println("Value:", string(value))
}
// Deleting entries and resetting the cache
cache := fastcache.New(1024 * 1024)
cache.Set([]byte("key1"), []byte("value1"))
cache.Set([]byte("key2"), []byte("value2"))
cache.Del([]byte("key1")) // remove a single entry
cache.Reset()             // remove all entries
// Reading multiple keys with a reusable buffer
keys := [][]byte{
	[]byte("key1"), []byte("key2"), []byte("key3"),
}
var buf []byte
for _, key := range keys {
	buf = cache.Get(buf[:0], key) // Get appends the value to buf
	fmt.Printf("Key: %s, Value: %s\n", key, buf)
}
// Concurrent access
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
	wg.Add(1)
	go func(i int) { // pass i explicitly to avoid the loop-variable capture bug
		defer wg.Done()
		key := []byte(fmt.Sprintf("key%d", i))
		cache.Set(key, []byte(fmt.Sprintf("value%d", i)))
		if _, found := cache.HasGet(nil, key); found {
			fmt.Printf("Found key: %s\n", key)
		}
	}(i)
}
wg.Wait()
Getting Started
To get started with VictoriaMetrics/fastcache, you can install the library using Go's package manager:
go get github.com/VictoriaMetrics/fastcache
Then, you can import the library and start using it in your Go code:
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/fastcache"
)

func main() {
	cache := fastcache.New(1024 * 1024) // Create a 1MB cache
	cache.Set([]byte("key"), []byte("value"))
	value, found := cache.HasGet(nil, []byte("key"))
	if found {
		fmt.Println("Value:", string(value))
	}
}
Competitor Comparisons
Scalable datastore for metrics, events, and real-time analytics
Pros of InfluxDB
- Time-Series Database: InfluxDB is a purpose-built time-series database, optimized for handling large amounts of time-series data, making it well-suited for use cases like monitoring, IoT, and analytics.
- Scalability: InfluxDB is designed to be highly scalable, allowing it to handle large volumes of data and high write/read throughput.
- Query Language: InfluxDB has its own query language, InfluxQL, which is similar to SQL and provides a powerful way to interact with and analyze time-series data.
Cons of InfluxDB
- Complexity: InfluxDB has a steeper learning curve compared to VictoriaMetrics/fastcache, as it is a more feature-rich and complex system.
- Resource Requirements: InfluxDB generally requires more system resources (CPU, memory, storage) compared to VictoriaMetrics/fastcache, especially for large-scale deployments.
Code Comparison
Here's a brief code comparison between InfluxDB and VictoriaMetrics/fastcache:
InfluxDB (Go, illustrative sketch; the exact influxdb client API differs):
// Create a new database
_, err := client.CreateDatabase("mydb")
if err != nil {
log.Fatal(err)
}
// Write data to the database
point, err := client.NewPoint(
"cpu_usage",
map[string]string{"host": "server01"},
map[string]interface{}{"value": 63.2},
time.Now(),
)
if err != nil {
log.Fatal(err)
}
client.Write("mydb", point)
VictoriaMetrics/fastcache (Go):
// Create a new cache
cache := fastcache.New(100 * 1024 * 1024) // 100MB cache size
// Set a value in the cache
cache.Set([]byte("key"), []byte("value"))
// Get a value from the cache
value, found := cache.HasGet(nil, []byte("key"))
if found {
fmt.Printf("Value: %s\n", value)
} else {
fmt.Println("Value not found")
}
The Prometheus monitoring system and time series database.
Pros of Prometheus
- Prometheus is a widely-adopted, open-source monitoring and alerting system that has become a de facto standard in the industry.
- Prometheus provides a rich set of features, including a powerful query language, flexible data storage, and robust alerting capabilities.
- The Prometheus ecosystem includes a large and active community, with a wide range of exporters, integrations, and tooling available.
Cons of Prometheus
- Prometheus can be more complex to set up and configure compared to simpler monitoring solutions.
- The resource requirements of Prometheus, particularly in terms of storage and CPU, can be higher than some alternatives.
- Prometheus may not be the best fit for certain use cases, such as high-cardinality time series data or real-time streaming data.
Code Comparison
Prometheus:
func (s *TargetManager) Sync(ctx context.Context) error {
s.mtx.Lock()
defer s.mtx.Unlock()
tps, err := s.discoverer.Discover(ctx)
if err != nil {
return err
}
s.targets = tps
return nil
}
VictoriaMetrics/fastcache (simplified sketch; the actual Get signature is Get(dst, k []byte) []byte):
func (c *Cache) Get(key []byte) (value []byte, ok bool) {
c.mu.RLock()
defer c.mu.RUnlock()
value, ok = c.m[string(key)]
return value, ok
}
The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
Pros of Grafana
- Grafana is a comprehensive and feature-rich data visualization and monitoring platform, offering a wide range of capabilities beyond just caching.
- It has a large and active community, with a vast ecosystem of plugins and integrations, making it highly extensible.
- Grafana provides a user-friendly and intuitive web-based interface, making it accessible to a wide range of users.
Cons of Grafana
- Grafana is a more complex and resource-intensive solution compared to VictoriaMetrics/fastcache, which is a focused caching library.
- The learning curve for Grafana can be steeper, especially for users who are primarily interested in caching functionality.
Code Comparison
VictoriaMetrics/fastcache (simplified sketch of a mutex-protected cache; not the actual implementation):
func New(size int) *Cache {
return &Cache{
items: make(map[string][]byte, size),
size: size,
}
}
func (c *Cache) Get(key string) ([]byte, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
v, ok := c.items[key]
return v, ok
}
Grafana (simplified sketch of an internal cache):
func (c *Cache) Get(key string) (interface{}, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
v, ok := c.items[key]
return v, ok
}
func (c *Cache) Set(key string, value interface{}, duration time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
c.items[key] = value
c.expiration[key] = time.Now().Add(duration)
}
An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
Pros of TimescaleDB
- TimescaleDB is a time-series database built on top of PostgreSQL, providing a powerful and scalable solution for time-series data management.
- It offers advanced features like automatic data partitioning, hypertables, and SQL-based querying, making it well-suited for time-series applications.
- TimescaleDB integrates seamlessly with the PostgreSQL ecosystem, allowing users to leverage the rich set of tools and extensions available for PostgreSQL.
Cons of TimescaleDB
- TimescaleDB has a higher overhead compared to FastCache, as it is a full-fledged database management system.
- The setup and configuration of TimescaleDB may be more complex than that of FastCache, which is a simpler in-memory cache.
- TimescaleDB may not be as performant as FastCache for certain use cases, especially when dealing with high-throughput, low-latency requirements.
Code Comparison
FastCache (VictoriaMetrics/fastcache, simplified sketch; not the actual implementation):
func New(size int) *Cache {
return &Cache{
items: make(map[string][]byte, size),
size: size,
}
}
func (c *Cache) Get(key string) ([]byte, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
v, ok := c.items[key]
return v, ok
}
TimescaleDB (timescale/timescaledb):
CREATE TABLE IF NOT EXISTS metrics (
time TIMESTAMPTZ NOT NULL,
device_id TEXT NOT NULL,
metric_name TEXT NOT NULL,
value DOUBLE PRECISION NOT NULL
);
SELECT create_hypertable('metrics', 'time');
The high-performance database for modern applications
Pros of Dgraph
- Scalability: Dgraph is a distributed, horizontally scalable, and highly available graph database, making it suitable for large-scale applications.
- Query Language: Dgraph provides a powerful query language, called GraphQL+-, which allows for complex and efficient data retrieval.
- Ecosystem: Dgraph has a growing ecosystem with various tools and integrations, such as Ratel (web-based UI) and Dgraph Bulk Loader.
Cons of Dgraph
- Learning Curve: Dgraph's query language and overall architecture may have a steeper learning curve compared to simpler key-value stores like VictoriaMetrics/fastcache.
- Resource Consumption: Dgraph may require more system resources (CPU, memory, storage) compared to lightweight caching solutions like VictoriaMetrics/fastcache, especially for smaller datasets.
Code Comparison
VictoriaMetrics/fastcache:
cache := fastcache.New(100 * 1024 * 1024) // 100MB cache
cache.Set([]byte("key"), []byte("value"))
value, found := cache.HasGet(nil, []byte("key"))
if !found {
// key not found
}
Dgraph (illustrative sketch; see the dgo docs for the exact client setup):
conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
if err != nil {
	// handle error
}
client := dgo.NewDgraphClient(api.NewDgraphClient(conn))
txn := client.NewTxn()
_, err = txn.Mutate(ctx, &api.Mutation{
	SetJson: []byte(`{"name": "Alice", "age": 30}`),
})
if err != nil {
	// handle error
}
Lightweight, embedded, syncable NoSQL database engine for Android.
Pros of Couchbase Lite Android
- Couchbase Lite Android provides a full-featured, embedded NoSQL database solution for Android applications, allowing for offline data storage and synchronization.
- The library offers a rich set of features, including support for CRUD operations, queries, and replication, making it a powerful choice for building offline-first mobile apps.
- Couchbase Lite Android is part of the Couchbase ecosystem, which provides a comprehensive suite of tools and services for data management, making it a well-integrated solution.
Cons of Couchbase Lite Android
- The Couchbase Lite Android library may have a larger footprint and higher resource requirements compared to a lightweight in-memory cache like VictoriaMetrics/fastcache, which could be a concern for resource-constrained mobile devices.
- The learning curve for Couchbase Lite Android may be steeper than a simpler in-memory cache, as it involves understanding the Couchbase ecosystem and its specific APIs and data models.
- The Couchbase Lite Android library may have a higher maintenance overhead, as it is a more feature-rich and complex solution compared to a focused in-memory cache.
Code Comparison
VictoriaMetrics/fastcache (simplified sketch; not the actual implementation):
func New(size int) *Cache {
return &Cache{
data: make(map[string][]byte, size),
maxItems: size,
}
}
func (c *Cache) Set(key string, value []byte) {
c.mu.Lock()
defer c.mu.Unlock()
c.data[key] = value
if len(c.data) > c.maxItems {
c.evict()
}
}
Couchbase Lite Android:
Database database = manager.getDatabase("mydb");
Document document = database.getDocument("mydoc");
Map<String, Object> properties = new HashMap<>();
properties.put("name", "John Doe");
properties.put("age", 30);
document.putProperties(properties);
README
fastcache - fast thread-safe inmemory cache for big number of entries in Go
Features
- Fast. Performance scales on multi-core CPUs. See benchmark results below.
- Thread-safe. Concurrent goroutines may read and write into a single cache instance.
- fastcache is designed for storing a big number of entries without GC overhead.
- Fastcache automatically evicts old entries when reaching the maximum cache size set on its creation.
- Simple API.
- Simple source code.
- Cache may be saved to file and loaded from file.
- Works on Google AppEngine.
Benchmarks
fastcache performance is compared with BigCache, the standard Go map, and sync.Map.
GOMAXPROCS=4 go test github.com/VictoriaMetrics/fastcache -bench='Set|Get' -benchtime=10s
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/fastcache
BenchmarkBigCacheSet-4 2000 10566656 ns/op 6.20 MB/s 4660369 B/op 6 allocs/op
BenchmarkBigCacheGet-4 2000 6902694 ns/op 9.49 MB/s 684169 B/op 131076 allocs/op
BenchmarkBigCacheSetGet-4 1000 17579118 ns/op 7.46 MB/s 5046744 B/op 131083 allocs/op
BenchmarkCacheSet-4 5000 3808874 ns/op 17.21 MB/s 1142 B/op 2 allocs/op
BenchmarkCacheGet-4 5000 3293849 ns/op 19.90 MB/s 1140 B/op 2 allocs/op
BenchmarkCacheSetGet-4 2000 8456061 ns/op 15.50 MB/s 2857 B/op 5 allocs/op
BenchmarkStdMapSet-4 2000 10559382 ns/op 6.21 MB/s 268413 B/op 65537 allocs/op
BenchmarkStdMapGet-4 5000 2687404 ns/op 24.39 MB/s 2558 B/op 13 allocs/op
BenchmarkStdMapSetGet-4 100 154641257 ns/op 0.85 MB/s 387405 B/op 65558 allocs/op
BenchmarkSyncMapSet-4 500 24703219 ns/op 2.65 MB/s 3426543 B/op 262411 allocs/op
BenchmarkSyncMapGet-4 5000 2265892 ns/op 28.92 MB/s 2545 B/op 79 allocs/op
BenchmarkSyncMapSetGet-4 1000 14595535 ns/op 8.98 MB/s 3417190 B/op 262277 allocs/op
The MB/s column here actually means millions of operations per second.
As you can see, fastcache is faster than BigCache in all the cases. fastcache is faster than the standard Go map and sync.Map on workloads with inserts.
Limitations
- Keys and values must be byte slices. Other types must be marshaled before storing them in the cache.
- Big entries with sizes exceeding 64KB must be stored via a distinct API (SetBig and GetBig).
- There is no cache expiration. Entries are evicted from the cache only on cache size overflow. Entry deadline may be stored inside the value in order to implement cache expiration.
Architecture details
The cache uses ideas from BigCache:

- The cache consists of many buckets, each with its own lock. This helps scale performance on multi-core CPUs, since multiple CPUs may concurrently access distinct buckets.
- Each bucket consists of a `hash(key) -> (key, value) position` map and 64KB-sized byte slices (chunks) holding encoded `(key, value)` entries. Each bucket contains only `O(chunksCount)` pointers. For instance, a 64GB cache would contain ~1M pointers, while a similarly-sized `map[string][]byte` would contain ~1B pointers for short keys and values. This would lead to huge GC overhead.

64KB-sized chunks reduce memory fragmentation and total memory usage compared to a single big chunk per bucket.

Chunks are allocated off-heap if possible. This reduces total memory usage, because the GC collects unused memory more frequently without the need for GOGC tweaking.
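The per-bucket locking idea above can be illustrated with a minimal sharded map (this is only a sketch of the locking scheme; the real fastcache stores entries in 64KB chunks rather than a Go map, precisely to avoid per-entry GC pointers):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const bucketCount = 64 // fastcache uses many more buckets; 64 keeps the sketch small

// bucket is a lock-protected shard of the cache.
type bucket struct {
	mu sync.RWMutex
	m  map[string][]byte
}

// ShardedCache spreads keys across buckets so goroutines hitting
// different buckets never contend on the same lock.
type ShardedCache struct {
	buckets [bucketCount]bucket
}

func NewShardedCache() *ShardedCache {
	c := &ShardedCache{}
	for i := range c.buckets {
		c.buckets[i].m = make(map[string][]byte)
	}
	return c
}

// bucketFor hashes the key to pick its bucket.
func (c *ShardedCache) bucketFor(key []byte) *bucket {
	h := fnv.New32a()
	h.Write(key)
	return &c.buckets[h.Sum32()%bucketCount]
}

func (c *ShardedCache) Set(key, value []byte) {
	b := c.bucketFor(key)
	b.mu.Lock()
	b.m[string(key)] = value
	b.mu.Unlock()
}

func (c *ShardedCache) Get(key []byte) ([]byte, bool) {
	b := c.bucketFor(key)
	b.mu.RLock()
	v, ok := b.m[string(key)]
	b.mu.RUnlock()
	return v, ok
}

func main() {
	c := NewShardedCache()
	c.Set([]byte("k"), []byte("v"))
	if v, ok := c.Get([]byte("k")); ok {
		fmt.Printf("got %s\n", v)
	}
}
```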
Users
fastcache has been extracted from VictoriaMetrics sources. See this article for more info about VictoriaMetrics.
FAQ
What is the difference between fastcache and other similar caches like BigCache or FreeCache?

- fastcache is faster. See benchmark results above.
- fastcache uses less memory due to lower heap fragmentation. This allows saving many GBs of memory on multi-GB caches.
- fastcache API is simpler. The API is designed to be used in zero-allocation mode.
Why doesn't fastcache support cache expiration?

Because we don't need cache expiration in VictoriaMetrics. Cached entries inside VictoriaMetrics never expire. They are automatically evicted on cache size overflow.

It is easy to implement cache expiration on top of fastcache by caching values with marshaled deadlines and verifying deadlines after reading these values from the cache.
Why doesn't fastcache support advanced features such as thundering herd protection or callbacks on entries' eviction?

Because these features would complicate the code and make it slower. The fastcache source code is simple - just copy-paste it and implement the feature you want on top of it.