
spotify/sparkey

Simple constant key/value storage library, for read-heavy systems with infrequent large bulk inserts.


Top Related Projects

  • facebook/rocksdb – A library that provides an embeddable, persistent key-value store for fast storage.
  • google/leveldb – LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.
  • basho/bitcask – because you need another key/value storage engine
  • apache/cassandra – Apache Cassandra®
  • memcached/memcached – memcached development tree
  • redis/redis – Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.

Quick Overview

Sparkey is a simple constant key/value storage library developed by Spotify. It's designed for read-heavy systems with infrequent large bulk inserts, offering high read performance and efficient storage for large datasets.

Pros

  • High read performance, optimized for read-heavy workloads
  • Efficient storage for large datasets
  • Support for both in-memory and on-disk operations
  • Simple API and easy integration

Cons

  • Limited write performance, not suitable for frequent small updates
  • Lack of advanced features like transactions or complex querying
  • Primarily maintained by Spotify, potentially slower community support
  • Not as widely adopted as some other key/value stores

Code Examples

  1. Creating a new Sparkey writer:
SparkeyWriter writer = Sparkey.createNew(new File("mydata.spl"), CompressionType.SNAPPY, 1024);
writer.put("key1", "value1");
writer.put("key2", "value2");
writer.writeHash(); // build the hash index so the entries can be looked up
writer.close();
  2. Reading from a Sparkey file:
SparkeyReader reader = Sparkey.open(new File("mydata.spl"));
String value = reader.getAsString("key1");
System.out.println(value); // Outputs: value1
reader.close();
  3. Iterating over all entries:
SparkeyReader reader = Sparkey.open(new File("mydata.spl"));
for (SparkeyReader.Entry entry : reader) {
    System.out.println(entry.getKeyAsString() + ": " + entry.getValueAsString());
}
reader.close();

Getting Started

To use Sparkey in your Java project, add the following dependency to your Maven pom.xml:

<dependency>
    <groupId>com.spotify.sparkey</groupId>
    <artifactId>sparkey</artifactId>
    <version>3.2.4</version>
</dependency>

For Gradle, add this to your build.gradle:

implementation 'com.spotify.sparkey:sparkey:3.2.4'

Then, you can start using Sparkey in your Java code as shown in the code examples above.

Competitor Comparisons

facebook/rocksdb

A library that provides an embeddable, persistent key-value store for fast storage.

Pros of RocksDB

  • More feature-rich and versatile, supporting a wider range of use cases
  • Better performance for write-heavy workloads and larger datasets
  • Active development with frequent updates and improvements

Cons of RocksDB

  • Higher complexity and steeper learning curve
  • Larger memory footprint and potentially slower for read-only workloads
  • More dependencies and harder to integrate into some projects

Code Comparison

Sparkey (C):

sparkey_logwriter_put(writer, 3, (const uint8_t *)"key", 5, (const uint8_t *)"value");

RocksDB (C++):

db->Put(WriteOptions(), "key", "value");

Key Differences

  • Sparkey is designed for simplicity and read-heavy workloads, while RocksDB is more versatile
  • RocksDB offers advanced features like transactions and column families
  • Sparkey has a smaller codebase and is easier to integrate into projects
  • RocksDB provides better support for concurrent access and multi-threaded environments

Use Cases

  • Sparkey: Ideal for read-heavy workloads with infrequent updates, such as caching or static data storage
  • RocksDB: Suitable for a wide range of applications, including write-heavy workloads, real-time analytics, and as a storage engine for databases

google/leveldb

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

Pros of LevelDB

  • Higher performance for random reads and writes
  • Supports atomic batch operations for multiple updates
  • More mature and widely adopted in production environments

Cons of LevelDB

  • Larger memory footprint, especially for large datasets
  • Compression (via Snappy) is optional and requires linking an external library
  • More complex API and setup compared to Sparkey

Code Comparison

LevelDB:

leveldb::DB* db;
leveldb::Options options;
options.create_if_missing = true;
leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
db->Put(leveldb::WriteOptions(), "key", "value");

Sparkey:

sparkey_logwriter *writer;
sparkey_logwriter_create(&writer, "test.spl", SPARKEY_COMPRESSION_NONE, 0);
sparkey_logwriter_put(writer, 3, (const uint8_t *)"key", 5, (const uint8_t *)"value");
sparkey_logwriter_close(&writer);
sparkey_hash_write("test.spi", "test.spl", 0);

Key Differences

  • LevelDB uses a log-structured merge-tree (LSM) for storage, while Sparkey uses a hash table with separate index and log files
  • Sparkey offers better write performance and lower memory usage for large, append-heavy workloads
  • LevelDB provides more advanced features like snapshots and iterators, making it suitable for complex database operations

Both libraries have their strengths, with LevelDB excelling in general-purpose key-value storage and Sparkey optimized for specific use cases like append-heavy workloads and large datasets with infrequent updates.

basho/bitcask

because you need another key/value storage engine

Pros of Bitcask

  • Designed for high write throughput and low latency reads
  • Supports automatic data compaction and garbage collection
  • Implements a simple and efficient key/value storage model

Cons of Bitcask

  • Limited support for range queries or complex data structures
  • Requires more disk space due to its append-only log structure
  • May have higher memory usage for maintaining in-memory key directories

Code Comparison

Bitcask (Erlang):

{ok, Ref} = bitcask:open("path/to/database", [read_write]),
ok = bitcask:put(Ref, <<"key">>, <<"value">>),
{ok, Value} = bitcask:get(Ref, <<"key">>),
ok = bitcask:close(Ref).

Sparkey (C):

sparkey_logwriter_put(writer, 3, (const uint8_t *)"key", 5, (const uint8_t *)"value");
sparkey_hash_get(reader, (const uint8_t *)"key", 3, iter);
/* the value is then read from the log through the iterator, e.g. sparkey_logiter_fill_value() */

Both Bitcask and Sparkey are key-value storage systems, but they have different design goals and implementation languages. Bitcask focuses on high write throughput and low latency reads, while Sparkey emphasizes read performance and memory efficiency. Bitcask is implemented in Erlang, making it well-suited for Erlang/OTP applications, whereas Sparkey is written in C, offering potential performance benefits and broader language interoperability through bindings.

Apache Cassandra®

Pros of Cassandra

  • Highly scalable and distributed database system
  • Supports multi-datacenter replication
  • Offers tunable consistency levels

Cons of Cassandra

  • Higher complexity and resource requirements
  • Steeper learning curve for setup and maintenance
  • Less suitable for simple key-value storage needs

Code Comparison

Sparkey (writing):

writer = sparkey.HashWriter("example.spl", "example.spi")
writer.put(b"key", b"value")
writer.close()

Cassandra (writing):

session.execute(
    "INSERT INTO keyspace.table (key, value) VALUES (%s, %s)",
    ("key", "value")
)

Key Differences

Sparkey is a simple key-value store library, while Cassandra is a full-featured distributed database system. Sparkey is designed for efficient read-heavy workloads with infrequent batch updates, whereas Cassandra excels in handling large-scale distributed data with high availability and fault tolerance.

Sparkey is more suitable for applications requiring a lightweight, embedded key-value store with fast read performance. Cassandra is better suited for complex, distributed systems that need to handle large amounts of structured data across multiple nodes and data centers.

memcached/memcached

memcached development tree

Pros of Memcached

  • Widely adopted and battle-tested in production environments
  • Distributed caching system with support for multiple servers
  • Simple key-value store with fast read and write operations

Cons of Memcached

  • Limited data persistence options
  • Lacks advanced features like range queries or iterators
  • Memory-only storage can lead to data loss on server restarts

Code Comparison

Memcached (C):

memcached_return_t memcached_set(memcached_st *ptr,
                                 const char *key,
                                 size_t key_length,
                                 const char *value,
                                 size_t value_length,
                                 time_t expiration,
                                 uint32_t flags);

Sparkey (C):

sparkey_returncode sparkey_logwriter_put(sparkey_logwriter *log,
                                         uint64_t keylen,
                                         const uint8_t *key,
                                         uint64_t valuelen,
                                         const uint8_t *value);

Both libraries provide simple key-value storage APIs, but Sparkey focuses on local, persistent storage, while Memcached is designed for distributed, in-memory caching. Memcached offers additional features like expiration times and flags, whereas Sparkey provides a more straightforward put operation for local use cases.

redis/redis

Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.

Pros of Redis

  • More feature-rich, offering a wide range of data structures and operations
  • Highly scalable with built-in clustering and replication capabilities
  • Extensive community support and ecosystem of tools/libraries

Cons of Redis

  • Higher memory usage compared to Sparkey's efficient storage
  • More complex setup and configuration for advanced features
  • Potentially slower for simple key-value operations due to additional overhead

Code Comparison

Redis:

SET key value
GET key
INCR counter
LPUSH mylist item
HSET myhash field value

Sparkey:

sparkey_logwriter_create(&writer, "mydb.spl", SPARKEY_COMPRESSION_NONE, 0);
sparkey_logwriter_put(writer, keylen, key, valuelen, value);
sparkey_logwriter_close(&writer);
sparkey_hash_write("mydb.spi", "mydb.spl", 0);
sparkey_hash_open(&reader, "mydb.spi", "mydb.spl");
sparkey_hash_get(reader, key, keylen, iter);

Summary

Redis is a more comprehensive in-memory data store with a wide range of features, while Sparkey is a simple and efficient key-value storage library. Redis excels in complex scenarios requiring various data structures and scalability, whereas Sparkey is optimized for high-performance, low-memory usage in simpler key-value storage applications. The choice between them depends on the specific requirements of your project, such as feature needs, performance constraints, and scalability demands.

README

Sparkey is a simple constant key/value storage library. It is mostly suited for read heavy systems with infrequent large bulk inserts. It includes both a C library for working with sparkey index and log files (libsparkey), and a command line utility for getting info about and reading values from a sparkey index/log (sparkey).

Travis

Continuous integration with Travis.


Dependencies

  • GNU build system (autoconf, automake, libtool)
  • Snappy

Optional

  • Doxygen (for generating the API documentation)

Building

autoreconf --install
./configure
make
make check

API documentation can be generated with doxygen.

Installing

sudo make install && sudo ldconfig

Related projects

  • sparkey-java – a Java implementation of Sparkey (see the Limitations section below)

Description

Sparkey is an extremely simple persistent key-value store. You could think of it as a read-only hashtable on disk and you wouldn't be far off. It is designed and optimized for some server-side use cases at Spotify, but it is written to be completely generic and makes no assumptions about what kind of data is stored.

Some key characteristics:

  • Supports data sizes up to 2^63 - 1 bytes.
  • Supports iteration, get, put and delete.
  • Optimized for bulk writes.
  • Immutable hash table.
  • Any number of concurrent independent readers.
  • Only allows one writer at a time per storage unit.
  • Cross-platform storage file.
  • Low overhead per entry.
  • Constant read startup cost.
  • Low number of disk seeks per read.
  • Support for block-level compression.
  • Data agnostic: it just maps byte arrays to byte arrays.

What it's not:

  • It's not a distributed key value store - it's just a hash table on disk.
  • It's not a compacted data store, but that can be implemented on top of it, if needed.
  • It's not robust against data corruption.

The use case we have for it at Spotify is serving data that rarely gets updated to users or other services. The fast and efficient bulk writes make it feasible to periodically rebuild the data, and the fast random-access reads make it suitable for high-throughput, low-latency services. For some services we have been able to saturate network interfaces while keeping CPU usage really low.

Limitations

The hash writing process requires a memory allocation of num_entries * 16 * 1.3 bytes. This means that you may run out of memory if you try to write a hash index for too many entries. For instance, with 16 GB of available RAM you can write at most about 825 million entries.
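
As a quick check of that figure: 16 GiB is about 17.2 × 10^9 bytes, each entry needs 16 × 1.3 = 20.8 bytes during hash writing, and 17.2 × 10^9 / 20.8 ≈ 825 million entries.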

This limitation has been removed in sparkey-java, but the fix has not yet been ported to this version.

Usage

Sparkey is meant to be used as a library embedded in other software. Take a look at the API documentation, which gives examples of how to use it.

License

Apache License, Version 2.0

Design

Sparkey uses two files on disk to store its data. The first is the sparkey log file (.spl), which is simply a sequence of key value pairs. This is an append-only file. You're not allowed to modify it in the middle, and you can't use more than one writer to append to it.

The other file is the sparkey index file (.spi), which is just a hashtable pointing at entries in the log. This is an immutable file, so you would typically only update it once you're done with your bulk appends.

Doing a random lookup involves first finding the proper entry in the hashtable, and then doing a seek to the right offset in the log file. On average, this means two disk seeks per access for a cold disk cache. If you mlock the index file, it goes down to one seek. For some of our use cases, the total data set is smaller than the available RAM, so it makes sense to mlock everything.

The advantage of having two files instead of just one (another solution would be to append the hash table at the end of the log) is that it's trivial to mlock one of the files and not the other. It also enables us to append more data to existing log files, even after they're already in use.
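
To make the flow concrete, here is a minimal sketch of the write-then-read cycle using the C API. Error handling is omitted, the file names are made up, and the header path and the sparkey_hash_getreader/iterator helpers are taken from sparkey.h as I understand them, so treat this as a sketch rather than canonical usage:

#include <stdint.h>
#include <sparkey/sparkey.h>

int main(void) {
  /* 1. Append key/value pairs to the log file (.spl). */
  sparkey_logwriter *writer;
  sparkey_logwriter_create(&writer, "example.spl", SPARKEY_COMPRESSION_NONE, 0);
  sparkey_logwriter_put(writer, 4, (const uint8_t *)"key1", 6, (const uint8_t *)"value1");
  sparkey_logwriter_close(&writer);

  /* 2. Build the immutable hash file (.spi) from the log.
     The last argument picks 4- or 8-byte hashes; 0 lets Sparkey choose. */
  sparkey_hash_write("example.spi", "example.spl", 0);

  /* 3. Random lookup: find the slot in the hash file, then seek into the log. */
  sparkey_hashreader *reader;
  sparkey_logiter *iter;
  sparkey_hash_open(&reader, "example.spi", "example.spl");
  sparkey_logiter_create(&iter, sparkey_hash_getreader(reader));
  sparkey_hash_get(reader, (const uint8_t *)"key1", 4, iter);
  if (sparkey_logiter_state(iter) == SPARKEY_ITER_ACTIVE) {
    /* The iterator is now positioned at the entry; the value bytes can be
       read through it (e.g. sparkey_logiter_fill_value). */
  }
  sparkey_logiter_close(&iter);
  sparkey_hash_close(&reader);
  return 0;
}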

History

Sparkey is the product of hackdays at Spotify, where our developers get to spend some of their time on anything they think is interesting.

We have several use cases where we need to serve large amounts of static data with high throughput and low latency. To do this, we've built our own services, backed by various storage systems. Our flow consists of first generating large static storage files in our offline systems, which then get pushed out to the user-facing services to serve the data.

The storage solutions we used for that have all served us well for a time, but they had limitations that became problematic.

  • We used to rely a lot on CDB (which is a really great piece of software). It performed blazingly fast and produced compact files. We only stopped using it when our data started growing close to its 4 GB limit.
  • We also used (and still use) Tokyo Cabinet for a bunch of use cases. It performs really well for reading, but write throughput really suffers when you can no longer keep the entire dataset in memory, and there were issues with opening the same file multiple times from the same process.

We needed a key-value store with the following characteristics:

  • random read throughput comparable to Tokyo Cabinet and CDB.
  • high throughput bulk writes.
  • low overhead.
  • high limit on data size.

So, on one of those hackdays, we started hacking on a new key-value store. The result was this project.

Performance

A very simple benchmark program is included - see src/bench.c. The program is designed to be easily extended to measure other key value stores if anyone wants to. Running it on a production-like server (Intel(R) Xeon(R) CPU L5630 @ 2.13GHz) we get the following:

Testing bulk insert of 1000 elements and 1000.000 random lookups
  Candidate: Sparkey
    creation time (wall):     0.00
    creation time (cpu):      0.00
    throughput (puts/cpusec): 1098272.88
    file size:                28384
    lookup time (wall):          0.50
    lookup time (cpu):           0.58
    throughput (lookups/cpusec): 1724692.62

Testing bulk insert of 1000.000 elements and 1000.000 random lookups
  Candidate: Sparkey
    creation time (wall):     0.50
    creation time (cpu):      0.69
    throughput (puts/cpusec): 1448618.25
    file size:                34177984
    lookup time (wall):          1.00
    lookup time (cpu):           0.78
    throughput (lookups/cpusec): 1284477.75

Testing bulk insert of 10.000.000 elements and 1000.000 random lookups
  Candidate: Sparkey
    creation time (wall):     7.50
    creation time (cpu):      7.73
    throughput (puts/cpusec): 1294209.62
    file size:                413777988
    lookup time (wall):          1.00
    lookup time (cpu):           0.99
    throughput (lookups/cpusec): 1014608.94

Testing bulk insert of 100.000.000 elements and 1000.000 random lookups
  Candidate: Sparkey
    creation time (wall):     82.00
    creation time (cpu):      81.58
    throughput (puts/cpusec): 1225726.75
    file size:                4337777988
    lookup time (wall):          2.00
    lookup time (cpu):           1.98
    throughput (lookups/cpusec): 503818.84

Testing bulk insert of 1000 elements and 1000.000 random lookups
  Candidate: Sparkey compressed(1024)
    creation time (wall):     0.00
    creation time (cpu):      0.00
    throughput (puts/cpusec): 1101445.38
    file size:                19085
    lookup time (wall):          3.50
    lookup time (cpu):           3.30
    throughput (lookups/cpusec): 303335.78

Testing bulk insert of 1000.000 elements and 1000.000 random lookups
  Candidate: Sparkey compressed(1024)
    creation time (wall):     0.50
    creation time (cpu):      0.75
    throughput (puts/cpusec): 1333903.25
    file size:                19168683
    lookup time (wall):          3.00
    lookup time (cpu):           2.91
    throughput (lookups/cpusec): 343833.28

Testing bulk insert of 10.000.000 elements and 1000.000 random lookups
  Candidate: Sparkey compressed(1024)
    creation time (wall):     8.50
    creation time (cpu):      8.50
    throughput (puts/cpusec): 1176634.25
    file size:                311872187
    lookup time (wall):          3.00
    lookup time (cpu):           2.99
    throughput (lookups/cpusec): 334490.22

Testing bulk insert of 100.000.000 elements and 1000.000 random lookups
  Candidate: Sparkey compressed(1024)
    creation time (wall):     90.50
    creation time (cpu):      90.46
    throughput (puts/cpusec): 1105412.00
    file size:                3162865465
    lookup time (wall):          3.50
    lookup time (cpu):           3.60
    throughput (lookups/cpusec): 277477.41

File format details

Log file format

The log file starts with a constant-size header describing some metadata about the file. After that comes a sequence of entries, where each entry consists of a type, a key and a value.

Each entry begins with two Variable Length Quantity (VLQ) non-negative integers, A and B. The type is determined by A: if A = 0, it's a DELETE, and B is the length of the key to delete. If A > 0, it's a PUT, the key length is A - 1, and the value length is B.

(It gets slightly more complex if block level compression is used, but we'll ignore that for now.)
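
To make the framing concrete, here is a hypothetical decoder for one uncompressed entry header. The read_vlq helper is illustrative only (it assumes a base-128 encoding with a continuation bit, which may differ in detail from Sparkey's actual on-disk VLQ), but the mapping from A and B to entry type and lengths follows the rules above:

#include <stdint.h>
#include <stdio.h>

/* Illustrative VLQ reader: 7 data bits per byte, high bit = "more bytes follow". */
uint64_t read_vlq(FILE *fp) {
  uint64_t value = 0;
  int shift = 0;
  int c;
  while ((c = fgetc(fp)) != EOF) {
    value |= (uint64_t)(c & 0x7f) << shift;
    if ((c & 0x80) == 0) break;  /* last byte of this quantity */
    shift += 7;
  }
  return value;
}

/* Decode one entry header: A selects the type, A and B give the lengths. */
void decode_entry_header(FILE *fp) {
  uint64_t a = read_vlq(fp);
  uint64_t b = read_vlq(fp);
  if (a == 0) {
    printf("DELETE, key length = %llu\n", (unsigned long long)b);
  } else {
    printf("PUT, key length = %llu, value length = %llu\n",
           (unsigned long long)(a - 1), (unsigned long long)b);
  }
}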

Hash file format

The hash file starts with a constant-size header, just like the log file. The rest of the file is a hash table, represented as capacity * slotsize bytes. The capacity is simply an upper bound on the number of live entries multiplied by a hash density factor > 1.0; the default implementation uses a density factor of 1.3.

Each slot consists of two parts: the hash value and the address. The size of the hash value is either 4 or 8 bytes, depending on the hash algorithm: murmurhash32 is used if the number of entries is small, and a 64-bit truncation of murmurhash128 if it is large. The address is simply a reference into the log file, stored as either 4 or 8 bytes depending on the size of the log file. That means the slot size is usually 16 bytes for any reasonably large set of entries. By storing the hash value itself in each slot we waste some space, but in return we can expect to avoid visiting the log file in most cases.
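
As a back-of-the-envelope illustration of the sizing described above (the entry count is borrowed from the largest benchmark above, and the common 8-byte hash plus 8-byte address case is assumed):

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint64_t num_entries = 100000000ULL;  /* live entries */
  double   density     = 1.3;           /* default hash density factor */
  int      hash_bytes  = 8;             /* truncated murmurhash128 for large entry counts */
  int      addr_bytes  = 8;             /* 8-byte log addresses once the log exceeds 4 GB */

  uint64_t capacity    = (uint64_t)(num_entries * density);
  uint64_t table_bytes = capacity * (uint64_t)(hash_bytes + addr_bytes);

  /* 130 million slots of 16 bytes each: roughly 2.08 GB of hash table,
     plus the constant-size header. This is the same num_entries * 16 * 1.3
     figure as in the Limitations section above. */
  printf("capacity = %llu slots, hash table = %llu bytes\n",
         (unsigned long long)capacity, (unsigned long long)table_bytes);
  return 0;
}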

Hash lookup algorithm

One of the few non-trivial parts of Sparkey is the way it does hash lookups. With hash tables there is always a risk of collisions: even if the hash values themselves don't collide, the assigned slots may.

(It recently came to my attention that the method described below is basically the same thing as Robin Hood hashing with backward shift deletion.)

Let's define displacement as the distance from the calculated optimal slot for a given hash to the slot it's actually placed in. Distance in this case is defined as the number of steps you need to move forward from your optimal slot to reach the actual slot.

The trivial and naive solution for this is to simply start with an empty hash table, move through the entries and put them in the first available slot, starting from the optimal slot, and this is almost what we do.

If we consider the average displacement, we can't really do better than that. We can however minimize the maximum displacement, which gives us some nice properties:

  • We can store the maximum displacement in the header, so we have an upper bound on traversals. We could possibly even use this information to binary search for the entry.
  • As soon as we reach an entry with higher displacement than the thing we're looking for, we can abort the lookup.

Setting up the hash table like this is easy: we just need to do insertions into slots instead of appends. As soon as we reach a slot with a smaller displacement than our own, we shift all following slots (up to the first empty slot) one step forward and insert our own element into the freed slot.

Let's illustrate it with an example - let's start off with an empty hash table with a capacity of 7:

         hash value   log offset    optimal slot    displacement
       +------------+------------+
slot 0 |            |            |
       +------------+------------+
slot 1 |            |            |
       +------------+------------+
slot 2 |            |            |
       +------------+------------+
slot 3 |            |            |
       +------------+------------+
slot 4 |            |            |
       +------------+------------+
slot 5 |            |            |
       +------------+------------+
slot 6 |            |            |
       +------------+------------+

We add the key "key0" which happens to end up in slot 3, h("key0") % 7 == 3. The slot is empty, so this is trivial:

         hash value   log offset    optimal slot    displacement
       +------------+------------+
slot 0 |            |            |
       +------------+------------+
slot 1 |            |            |
       +------------+------------+
slot 2 |            |            |
       +------------+------------+
slot 3 | h(key0)    | 1          |  3               0
       +------------+------------+
slot 4 |            |            |
       +------------+------------+
slot 5 |            |            |
       +------------+------------+
slot 6 |            |            |
       +------------+------------+

Now we add "key1" which happens to end up in slot 4:

         hash value   log offset    optimal slot    displacement
       +------------+------------+
slot 0 |            |            |
       +------------+------------+
slot 1 |            |            |
       +------------+------------+
slot 2 |            |            |
       +------------+------------+
slot 3 | h(key0)    | 1          |  3               0
       +------------+------------+
slot 4 | h(key1)    | 11         |  4               0
       +------------+------------+
slot 5 |            |            |
       +------------+------------+
slot 6 |            |            |
       +------------+------------+

Now we add "key2" which also wants to be in slot 3. This is a conflict, so we skip forward until we found a slot which has a lower displacement than our current displacement. When we find that slot, all following entries until the next empty slot move down one step:

         hash value   log offset    optimal slot    displacement
       +------------+------------+
slot 0 |            |            |
       +------------+------------+
slot 1 |            |            |
       +------------+------------+
slot 2 |            |            |
       +------------+------------+
slot 3 | h(key0)    | 1          |  3               0
       +------------+------------+
slot 4 | h(key2)    | 21         |  3               1
       +------------+------------+
slot 5 | h(key1)    | 11         |  4               1
       +------------+------------+
slot 6 |            |            |
       +------------+------------+

Let's add "key3" which maps to slot 5. We can't push down key1, because it already has displacement 1 and our current displacement for key3 is 0, so we have to move forward:

         hash value   log offset    optimal slot    displacement
       +------------+------------+
slot 0 |            |            |
       +------------+------------+
slot 1 |            |            |
       +------------+------------+
slot 2 |            |            |
       +------------+------------+
slot 3 | h(key0)    | 1          |  3               0
       +------------+------------+
slot 4 | h(key2)    | 21         |  3               1
       +------------+------------+
slot 5 | h(key1)    | 11         |  4               1
       +------------+------------+
slot 6 | h(key3)    | 31         |  5               1
       +------------+------------+

Adding "key4" for slot 3. It ends up in slot 5 with displacement 2 and key3 loops around to slot 0:

         hash value   log offset    optimal slot    displacement
       +------------+------------+
slot 0 | h(key3)    | 31         |  5               2
       +------------+------------+
slot 1 |            |            |
       +------------+------------+
slot 2 |            |            |
       +------------+------------+
slot 3 | h(key0)    | 1          |  3               0
       +------------+------------+
slot 4 | h(key2)    | 21         |  3               1
       +------------+------------+
slot 5 | h(key4)    | 41         |  3               2
       +------------+------------+
slot 6 | h(key1)    | 11         |  4               2
       +------------+------------+

Now, if we search for key123 which maps to slot 3 (but doesn't exist!), we can stop scanning as soon as we reach slot 6, because then the current displacement (3) is higher than the displacement of the entry at the current slot (2).
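
In code, the lookup rule described above looks roughly like the sketch below. This is illustrative rather than Sparkey's actual implementation: the slot struct, using hash == 0 to mark an empty slot, and returning on the first hash match are all simplifications (a real lookup still verifies the key against the log, since distinct keys can share a hash value).

#include <stdbool.h>
#include <stdint.h>

/* Illustrative slot layout; hash == 0 marks an empty slot here. */
typedef struct {
  uint64_t hash;
  uint64_t log_offset;
} slot_t;

/* Scan from the optimal slot, stopping at an empty slot or as soon as the
   resident entry is better placed (lower displacement) than we would be. */
bool lookup(const slot_t *table, uint64_t capacity, uint64_t wanted_hash,
            uint64_t *log_offset) {
  uint64_t optimal = wanted_hash % capacity;
  for (uint64_t displacement = 0; displacement < capacity; displacement++) {
    uint64_t slot = (optimal + displacement) % capacity;
    if (table[slot].hash == 0) {
      return false;  /* empty slot: the key is not present */
    }
    uint64_t resident_displacement =
        (slot + capacity - table[slot].hash % capacity) % capacity;
    if (resident_displacement < displacement) {
      return false;  /* resident is closer to home than we are: abort */
    }
    if (table[slot].hash == wanted_hash) {
      *log_offset = table[slot].log_offset;  /* candidate hit: verify key in the log */
      return true;
    }
  }
  return false;
}

Run against the example table above, a search for key123 aborts at slot 6, exactly as described.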

Compression

Sparkey also supports block level compression using google snappy. You select a block size which is then used to split the contents of the log into blocks. Each block is compressed independently with snappy. This can be useful if your bottleneck is file size and there is a lot of redundant data across adjacent entries. The downside of using this is that during lookups, at least one block needs to be decompressed. The larger blocks you choose, the better compression you may get, but you will also have higher lookup cost. This is a tradeoff that needs to be empirically evaluated for each use case.