hazelcast
Hazelcast is a unified real-time data platform combining stream processing with a fast data store, allowing customers to act instantly on data-in-motion for real-time insights.
Top Related Projects
Apache Ignite
Apache Cassandra®
Redis
RethinkDB
Memcached
Quick Overview
Hazelcast is an open-source, in-memory data grid platform written in Java. It provides a distributed computing solution for high-performance applications, offering features like distributed caching, distributed computing, and cluster-wide in-memory data processing.
Pros
- High performance and low latency due to in-memory data storage
- Scalable and distributed architecture, allowing easy horizontal scaling
- Rich set of features including caching, distributed computing, and data structures
- Active community and regular updates
Cons
- Steep learning curve for complex distributed systems concepts
- Can be memory-intensive, especially for large datasets
- Potential network overhead in highly distributed environments
- Limited support for non-JVM languages compared to some alternatives
Code Examples
- Creating a Hazelcast instance and a distributed map:
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("my-distributed-map");
map.put("key", "value");
System.out.println("Value: " + map.get("key"));
- Using a distributed lock (in Hazelcast 4.x and later, locks are provided by the CP Subsystem):
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
FencedLock lock = hz.getCPSubsystem().getLock("my-distributed-lock");
lock.lock();
try {
// Critical section
} finally {
lock.unlock();
}
- Implementing a distributed executor service:
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IExecutorService executor = hz.getExecutorService("my-distributed-executor");
Future<String> future = executor.submit(new MyCallable());
String result = future.get();
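MyCallable above is a placeholder. Here is a minimal sketch of what such a task might look like, assuming it returns a String (any task submitted to a Hazelcast executor must be serializable so it can be sent to other cluster members):
import java.io.Serializable;
import java.util.concurrent.Callable;

// Hypothetical task referenced in the executor example above; name and return value are illustrative.
public class MyCallable implements Callable<String>, Serializable {
    @Override
    public String call() {
        return "Hello from a Hazelcast member";
    }
}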
Getting Started
To get started with Hazelcast, follow these steps:
- Add Hazelcast dependency to your project (Maven example):
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
<version>5.2.1</version>
</dependency>
- Create a simple Java application:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
public class HazelcastExample {
public static void main(String[] args) {
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("my-distributed-map");
map.put("key", "value");
System.out.println("Value: " + map.get("key"));
hz.shutdown();
}
}
- Run the application, and you'll have a single-node Hazelcast cluster with a distributed map.
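To connect to the running cluster from a separate process, here is a minimal client sketch (assuming default network settings so the client finds the member on localhost; since Hazelcast 4.x the client API ships in the same hazelcast artifact):
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class HazelcastClientExample {
    public static void main(String[] args) {
        // Connects to a local cluster using the default client configuration.
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, String> map = client.getMap("my-distributed-map");
        System.out.println("Value: " + map.get("key"));
        client.shutdown();
    }
}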
Competitor Comparisons
Apache Ignite
Pros of Ignite
- More comprehensive feature set, including SQL support and machine learning capabilities
- Better scalability for large-scale distributed systems
- More flexible deployment options, including support for various cloud platforms
Cons of Ignite
- Steeper learning curve due to its extensive feature set
- Higher resource consumption, especially for smaller deployments
- More complex configuration and setup process
Code Comparison
Hazelcast:
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("my-map");
map.put("key", "value");
Ignite:
Ignite ignite = Ignition.start();
IgniteCache<String, String> cache = ignite.getOrCreateCache("my-cache");
cache.put("key", "value");
Both Hazelcast and Ignite are powerful distributed computing platforms, but they have different strengths. Hazelcast is known for its simplicity and ease of use, making it a good choice for smaller projects or teams new to distributed systems. Ignite, on the other hand, offers a more comprehensive set of features and better scalability, making it suitable for larger, more complex distributed systems. The choice between the two depends on the specific requirements of your project and your team's expertise.
Apache Cassandra®
Pros of Cassandra
- Highly scalable and designed for massive distributed deployments
- Strong support for multi-datacenter replication
- Tunable consistency levels for read and write operations
Cons of Cassandra
- Steeper learning curve and more complex setup compared to Hazelcast
- Less flexible query capabilities, primarily optimized for known access patterns
- Higher memory consumption, especially for large datasets
Code Comparison
Hazelcast (Java):
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("my-distributed-map");
map.put("key", "value");
String value = map.get("key");
Cassandra (Java):
Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session session = cluster.connect("mykeyspace");
ResultSet rs = session.execute("INSERT INTO users (id, name) VALUES (1, 'John')");
Row row = session.execute("SELECT * FROM users WHERE id = 1").one();
Both Hazelcast and Cassandra are distributed storage systems, but they serve different purposes. Hazelcast is an in-memory data grid focused on caching and distributed computing, while Cassandra is a NoSQL database designed for high availability and scalability. Hazelcast offers simpler setup and usage, making it suitable for smaller-scale applications, while Cassandra excels in large-scale, multi-datacenter deployments with massive amounts of data.
Redis
Pros of Redis
- Simpler architecture and easier to set up for small to medium-scale applications
- Faster performance for single-node deployments and simple data structures
- More extensive ecosystem with a wider range of client libraries and tools
Cons of Redis
- Limited built-in support for distributed computing and complex data partitioning
- Less robust out-of-the-box support for Java environments compared to Hazelcast
- Requires additional tools and configurations for advanced clustering features
Code Comparison
Redis (simple key-value storage, Python client):
import redis
r = redis.Redis(host='localhost', port=6379, db=0)
r.set('key', 'value')
value = r.get('key')
Hazelcast (distributed map, Java):
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("myMap");
map.put("key", "value");
String value = map.get("key");
Both Redis and Hazelcast offer in-memory data storage solutions, but they cater to different use cases. Redis excels in simplicity and raw performance for single-node setups, while Hazelcast provides more robust distributed computing capabilities out of the box, particularly for Java-based applications.
RethinkDB
Pros of RethinkDB
- Real-time push architecture for live updates
- Flexible and expressive query language (ReQL)
- Built-in support for geospatial queries and indexing
Cons of RethinkDB
- Less mature and smaller community compared to Hazelcast
- Limited support for distributed transactions
- Steeper learning curve for developers new to NoSQL databases
Code Comparison
RethinkDB query example:
r.table('users')
.filter(r.row['age'].gt(30))
.update({'status': 'adult'})
Hazelcast query example:
IMap<String, User> users = hazelcastInstance.getMap("users");
users.executeOnEntries(new EntryProcessor<String, User, Object>() {
public Object process(Map.Entry<String, User> entry) {
User user = entry.getValue();
if (user.getAge() > 30) {
user.setStatus("adult");
entry.setValue(user);
}
return null;
}
});
Both RethinkDB and Hazelcast offer powerful querying capabilities, but RethinkDB's ReQL provides a more intuitive and expressive syntax for complex queries. Hazelcast's approach is more Java-centric and may require more verbose code for similar operations.
Memcached
Pros of Memcached
- Simpler and more lightweight, focusing solely on caching
- Generally faster for basic key-value operations
- Wider adoption and extensive community support
Cons of Memcached
- Limited data model (opaque values only, with no rich data structures)
- Lack of built-in persistence and data replication
- No built-in clustering or data partitioning
Code Comparison
Memcached (C, libmemcached client API):
memcached_return_t memcached_set(memcached_st *ptr,
const char *key,
size_t key_length,
const char *value,
size_t value_length,
time_t expiration,
uint32_t flags);
Hazelcast (Java):
IMap<String, String> map = hazelcastInstance.getMap("myMap");
map.put("key", "value");
map.put("key", "value", 1, TimeUnit.HOURS);
Memcached focuses on simple key-value operations with a C API, while Hazelcast offers a more feature-rich Java API with support for various data structures and distributed computing capabilities. Memcached is ideal for straightforward caching needs, whereas Hazelcast provides a comprehensive solution for distributed data management and processing.
README
Hazelcast
What is Hazelcast
The world's leading companies trust Hazelcast to modernize applications and take instant action on data in motion to create new revenue streams, mitigate risk, and operate more efficiently. Businesses use Hazelcast's unified real-time data platform to process streaming data, enrich it with historical context and take instant action with standard or ML/AI-driven automation - before it is stored in a database or data lake.
Hazelcast is named in the Gartner Market Guide to Event Stream Processing and a leader in the GigaOm Radar Report for Streaming Data Platforms. To join our community of CXOs, architects and developers at brands such as Lowe's, HSBC, JPMorgan Chase, Volvo, New York Life, and others, visit hazelcast.com.
When to use Hazelcast
Hazelcast provides a platform that can handle multiple types of workloads for building real-time applications.
- Stateful data processing over streaming data or data at rest
- Querying streaming and batch data sources directly using SQL (see the SQL sketch after this list)
- Ingesting data through a library of connectors and serving it using low-latency SQL queries
- Pushing updates to applications on events
- Low-latency queue-based or pub-sub messaging
- Fast access to contextual and transactional data via caching patterns such as read/write-through and write-behind
- Distributed coordination for microservices
- Replicating data from one region to another or between data centers in the same region
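As referenced in the SQL item above, here is a minimal sketch of querying a distributed map with SQL. It assumes Hazelcast 5.x, where a mapping must be declared before an IMap can be queried; it also assumes the SQL module is on the classpath (bundled in the full distribution; with the plain hazelcast Maven artifact the hazelcast-sql dependency may also be needed) and that the Jet engine, which executes SQL statements, is enabled. The map name is illustrative:
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

public class SqlQueryExample {
    public static void main(String[] args) {
        // Enable the Jet engine, which SQL statements run on (disabled by default in programmatic config).
        Config config = new Config();
        config.getJetConfig().setEnabled(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        IMap<String, String> map = hz.getMap("my_map");
        map.put("key", "value");

        // Declare a SQL mapping over the IMap (required once before querying it with SQL).
        hz.getSql().execute("CREATE MAPPING my_map TYPE IMap "
                + "OPTIONS ('keyFormat'='varchar', 'valueFormat'='varchar')");

        // Query the map; SqlResult is AutoCloseable, so close it when done.
        try (SqlResult result = hz.getSql().execute("SELECT __key, this FROM my_map")) {
            for (SqlRow row : result) {
                System.out.println(row.getObject("__key") + " = " + row.getObject("this"));
            }
        }
        hz.shutdown();
    }
}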
Key Features
- Stateful and fault-tolerant data processing and querying over data streams and data at rest using SQL or dataflow API
- A comprehensive library of connectors such as Kafka, Hadoop, S3, RDBMS, JMS and many more
- Distributed messaging using pub-sub and queues
- Distributed, partitioned, queryable key-value store with event listeners, which can also be used to store contextual data for enriching event streams with low latency (see the listener sketch after this list)
- Tight integration for deploying machine learning models with Python to a data processing pipeline
- Cloud-native, run everywhere architecture
- Zero-downtime operations with rolling upgrades
- At-least-once and exactly-once processing guarantees for stream processing pipelines
- Data replication between data centers and geographic regions using WAN
- Microsecond performance for key-value point lookups and pub-sub
- Unique data processing architecture results in a 99.99th percentile latency of under 10ms for streaming queries with millions of events per second.
- Client libraries in Java, Python, Node.js, .NET, C++ and Go
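As referenced in the key-value store item above, here is a minimal sketch of registering an event listener on a distributed map (the map name and listener logic are illustrative):
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class MapListenerExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("my-distributed-map");

        // Register a listener that fires whenever an entry is added;
        // 'true' asks Hazelcast to include the value in the event.
        map.addEntryListener((EntryAddedListener<String, String>) event ->
                System.out.println("Added: " + event.getKey() + " = " + event.getValue()), true);

        map.put("key", "value"); // triggers the listener
    }
}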
Stateful Data Processing
Hazelcast has a built-in data processing engine called Jet, which can be used to build elastic streaming/real-time and batch/static data pipelines. A single Hazelcast node has been proven to aggregate 10 million events per second with latency under 10 milliseconds, and a cluster of Hazelcast nodes can process billions of events per second.
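A minimal sketch of a batch pipeline using the Jet Pipeline API (assuming Hazelcast 5.x; the map name is illustrative, and the Jet engine must be enabled explicitly when configuring the instance programmatically):
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class JetBatchExample {
    public static void main(String[] args) {
        // The Jet engine is disabled by default in programmatic configuration.
        Config config = new Config();
        config.getJetConfig().setEnabled(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        hz.getMap("my-distributed-map").put("key", "value");

        // A simple batch pipeline: read all entries from an IMap, format them, and log them.
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(Sources.<String, String>map("my-distributed-map"))
                .map(entry -> entry.getKey() + "=" + entry.getValue())
                .writeTo(Sinks.logger());

        // Submit the job and wait for it to finish.
        hz.getJet().newJob(pipeline).join();
        hz.shutdown();
    }
}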
Get Started
Follow the Getting Started Guide to install and start using Hazelcast.
Documentation
Read the documentation for in-depth details about how to install Hazelcast and an overview of the features.
Get Help
You can use Slack for getting help with Hazelcast.
How to Contribute
Thanks for your interest in contributing! The easiest way is to just send a pull request.
Building From Source
Building Hazelcast requires at minimum JDK 17. Pull the latest source from the repository and use Maven install (or package) to build:
$ git pull origin master
$ ./mvnw clean package -DskipTests
It is recommended to use the included Maven wrapper script. It is also possible to use a local Maven distribution of the same version as the one referenced by the Maven wrapper script.
Additionally, there is a quick build, activated by setting the -Dquick system property, that skips validation tasks (tests, checkstyle validation, javadoc, source plugins, etc.) for faster local builds and does not build the extensions and distribution modules.
Testing
Take into account that the default build executes thousands of tests which may take a considerable amount of time. Hazelcast has 3 testing profiles:
- Default: ./mvnw test runs quick/integration tests (these can be run in parallel, without using the network, via the -P parallelTest profile).
- Slow Tests: ./mvnw test -P nightly-build runs tests that are either slow or cannot be run in parallel.
- All Tests: ./mvnw test -P all-tests runs all tests serially, using the network.
Some tests require Docker to run. Set the -Dhazelcast.disable.docker.tests system property to ignore them.
When developing a PR it is sufficient to run your new tests and some related subset of tests locally. Our PR builder will take care of running the full test suite.
License
Source code in this repository is covered by one of two licenses: the default license throughout the repository is the Apache License 2.0, unless a file's header specifies another license.
Acknowledgments
We owe (the good parts of) our CLI tool's user experience to picocli.
Copyright
Copyright (c) 2008-2025, Hazelcast, Inc. All Rights Reserved.
Visit www.hazelcast.com for more info.