
replikativ/datahike

A fast, immutable, distributed & compositional Datalog engine for everyone.


Top Related Projects

Immutable database and Datalog query engine for Clojure, ClojureScript and JS

An immutable SQL database for application development, time-travel reporting and data compliance. Developed by @juxt

TerminusDB is a distributed database with a collaboration model

Apache Jena, A free and open source Java framework for building Semantic Web and Linked Data applications.

Quick Overview

Datahike is a durable Datalog database written in Clojure. It is designed to be a flexible, high-performance database that supports ACID transactions and is built on top of persistent data structures. Datahike can be used both as an in-memory database and with persistent storage backends.

Pros

  • Supports ACID transactions, ensuring data integrity
  • Flexible schema allows for easy adaptation to changing data models
  • Can be used as both in-memory and persistent storage
  • Implements Datalog query language, providing powerful querying capabilities

Cons

  • Learning curve for those unfamiliar with Datalog or Clojure
  • Limited ecosystem compared to more mainstream databases
  • Documentation could be more comprehensive for beginners
  • Performance may not match specialized databases for certain use cases

Code Examples

  1. Creating a new database:
(require '[datahike.api :as d])

(def config {:store {:backend :file :path "/tmp/example"}})
(d/create-database config)
(def conn (d/connect config))
  2. Adding data to the database:
(d/transact conn [{:db/id -1
                   :name "Alice"
                   :age 30}
                  {:db/id -2
                   :name "Bob"
                   :age 35}])
  3. Querying the database:
(d/q '[:find ?name ?age
       :where
       [?e :name ?name]
       [?e :age ?age]]
     @conn)
  4. Updating data (note: a lookup ref like [:name "Alice"] only resolves if :name is declared :db/unique :db.unique/identity in the schema):
(d/transact conn [[:db/add [:name "Alice"] :age 31]])

Getting Started

To get started with Datahike, add the following dependency to your project.clj or deps.edn:

[io.replikativ/datahike "0.6.1534"]

Then, in your Clojure code:

(require '[datahike.api :as d])

;; Configure and create a database
(def config {:store {:backend :mem :id "example"}})
(d/create-database config)
(def conn (d/connect config))

;; Add some data
(d/transact conn [{:db/id -1 :name "Alice" :age 30}])

;; Query the database
(d/q '[:find ?name :where [?e :name ?name]] @conn)

This basic setup creates an in-memory database, adds some data, and performs a simple query. Adjust the configuration for persistent storage or more complex use cases.
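To adjust the setup above for persistent storage, only the :store configuration needs to change. A minimal sketch using the file backend (the path below is illustrative):

```clojure
(require '[datahike.api :as d])

;; file backend: data survives process restarts; the path is an example
(def config {:store {:backend :file :path "/tmp/datahike-getting-started"}})
(d/create-database config)
(def conn (d/connect config))
```

Other storage backends follow the same pattern of swapping out the :store map.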

Competitor Comparisons

Immutable database and Datalog query engine for Clojure, ClojureScript and JS

Pros of Datascript

  • In-memory database, offering faster read operations
  • Simpler setup and usage, ideal for small to medium-sized applications
  • Lightweight and easily embeddable in ClojureScript projects

Cons of Datascript

  • Limited persistence options compared to Datahike
  • Less scalable for large datasets due to in-memory nature
  • Fewer advanced features like time travel and ACID transactions

Code Comparison

Datascript query example:

(d/q '[:find ?e ?name
       :where [?e :name ?name]]
     @conn)

Datahike query example:

(d/q '[:find ?e ?name
       :where [?e :name ?name]]
     (d/db conn))

Key Differences

  • Datahike offers persistent storage options, while Datascript is primarily in-memory
  • Datahike provides ACID transactions and time travel capabilities
  • Datascript is more lightweight and easier to set up for smaller projects
  • Datahike is better suited for larger, more complex data scenarios
  • Both use Datalog for querying, but Datahike's API is closer to Datomic
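The time-travel capability mentioned above can be sketched with Datahike's history view (assuming a connection created with the default, history-keeping configuration):

```clojure
;; find every value :age has ever had for the entity named "Alice",
;; including values that were later overwritten
(d/q '[:find ?age
       :where
       [?e :name "Alice"]
       [?e :age ?age]]
     (d/history @conn))
```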

Use Cases

  • Choose Datascript for: client-side data management, prototyping, small to medium-sized applications
  • Choose Datahike for: server-side applications, larger datasets, scenarios requiring persistence and advanced features

An immutable SQL database for application development, time-travel reporting and data compliance. Developed by @juxt

Pros of XTDB

  • Supports bitemporal queries, allowing for point-in-time and historical data analysis
  • Offers a more comprehensive set of features, including ACID transactions and SQL support
  • Provides better documentation and more extensive examples

Cons of XTDB

  • Higher complexity and steeper learning curve compared to Datahike
  • Requires more system resources and may have slower performance for simpler use cases

Code Comparison

XTDB query example:

(xt/q
  (xt/db node)
  '{:find [e]
    :where [[e :name "Alice"]]})

Datahike query example:

(d/q
  '[:find ?e
    :where [?e :name "Alice"]]
  @conn)

Both XTDB and Datahike are Clojure-based databases with Datalog query support. XTDB offers more advanced features and flexibility, while Datahike provides a simpler, lightweight solution. XTDB is better suited for complex, enterprise-level applications, whereas Datahike excels in scenarios requiring a straightforward, embedded database with good performance.
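The bitemporal querying mentioned above can be sketched as follows; a hedged example that assumes an XTDB 1.x node bound to node:

```clojure
(require '[xtdb.api :as xt])

;; query the database as of a past valid time (assumption: XTDB 1.x API,
;; where xt/db accepts an optional valid-time instant)
(xt/q (xt/db node #inst "2020-01-01")
      '{:find [e]
        :where [[e :name "Alice"]]})
```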


TerminusDB is a distributed database with a collaboration model

Pros of TerminusDB

  • More comprehensive database solution with built-in versioning and collaboration features
  • Supports multiple query languages (WOQL, GraphQL, and SPARQL)
  • Offers a web-based interface for data visualization and management

Cons of TerminusDB

  • Steeper learning curve due to its more complex architecture
  • Requires more system resources for deployment and operation
  • Less flexible for integration into existing Clojure/JVM projects

Code Comparison

TerminusDB (JavaScript):

const client = new TerminusClient.WOQLClient("https://127.0.0.1:6363/");
client.connect({ user: "admin", key: "root" })
  .then(() => client.createDatabase("mydb", { label: "My Database" }))
  .then(() => console.log("Database created successfully"));

Datahike (Clojure):

(require '[datahike.api :as d])
(def cfg {:store {:backend :file :path "/tmp/example"}})
(d/create-database cfg)
(def conn (d/connect cfg))
(d/transact conn [{:db/id 1 :name "Alice" :age 30}])

Both repositories offer database solutions, but they cater to different use cases. Datahike is a lightweight, Datomic-inspired database for Clojure, while TerminusDB is a more feature-rich graph database with versioning capabilities. Datahike integrates seamlessly with Clojure projects, whereas TerminusDB provides a broader set of tools for data management and collaboration across various platforms.


Apache Jena, A free and open source Java framework for building Semantic Web and Linked Data applications.

Pros of Jena

  • Mature and widely-used framework with extensive documentation
  • Supports multiple storage backends and query languages
  • Offers a comprehensive suite of tools for RDF and linked data

Cons of Jena

  • Steeper learning curve due to its extensive feature set
  • Can be resource-intensive for large datasets
  • Primarily Java-based, which may not suit all development environments

Code Comparison

Jena (Java):

Model model = ModelFactory.createDefaultModel();
Resource resource = model.createResource("http://example.org/resource");
Property property = model.createProperty("http://example.org/property");
resource.addProperty(property, "value");

Datahike (Clojure):

(require '[datahike.api :as d])
(def cfg {:store {:backend :mem :id "example"}})
(d/create-database cfg)
(def conn (d/connect cfg))
(d/transact conn [{:db/id "resource" :property "value"}])

Key Differences

  • Jena is primarily for RDF and linked data, while Datahike focuses on immutable databases
  • Jena offers more extensive querying capabilities, including SPARQL support
  • Datahike provides a simpler API and is designed for Clojure/ClojureScript environments
  • Jena has broader industry adoption, while Datahike is more niche and community-driven

Both projects have their strengths, with Jena being more suitable for complex RDF-based applications and Datahike offering a lightweight solution for Clojure developers seeking an immutable database.


README

Datahike

Datahike is a durable Datalog database powered by an efficient Datalog query engine. This project started as a port of DataScript to the hitchhiker-tree. All DataScript tests are passing, but we are still working on the internals. That said, we consider Datahike usable for medium-sized projects, since DataScript is very mature and deployed in many applications, and the hitchhiker-tree implementation is heavily tested through generative testing. We build on these two projects and on the storage backends provided to the hitchhiker-tree through konserve. We would like to hear experience reports and would be happy if you joined us.

You can find API documentation on cljdoc and articles on Datahike on our company's blog page.

We have also presented Datahike at meetups.

Usage

Add to your dependencies:

[io.replikativ/datahike "0.6.1534"]

We provide a small stable API for the JVM at the moment, but the on-disk schema is not fixed yet. We will provide a migration guide until we have reached a stable on-disk schema. Take a look at the ChangeLog before upgrading.

(require '[datahike.api :as d])


;; use the filesystem as storage medium
(def cfg {:store {:backend :file :path "/tmp/example"}})

;; create a database at this place; the default configuration enforces a strict
;; schema and keeps all historical data
(d/create-database cfg)

(def conn (d/connect cfg))

;; the first transaction will be the schema we are using
;; you may also add this within database creation by adding :initial-tx
;; to the configuration
(d/transact conn [{:db/ident :name
                   :db/valueType :db.type/string
                   :db/cardinality :db.cardinality/one }
                  {:db/ident :age
                   :db/valueType :db.type/long
                   :db/cardinality :db.cardinality/one }])

;; let's add some data and wait for the transaction
(d/transact conn [{:name "Alice" :age 20}
                  {:name "Bob" :age 30}
                  {:name "Charlie" :age 40}
                  {:age 15}])

;; search the data
(d/q '[:find ?e ?n ?a
       :where
       [?e :name ?n]
       [?e :age ?a]]
  @conn)
;; => #{[3 "Alice" 20] [4 "Bob" 30] [5 "Charlie" 40]}

;; add new entity data using a hash map
(d/transact conn {:tx-data [{:db/id 3 :age 25}]})

;; if you want to work with queries like in
;; https://grishaev.me/en/datomic-query/,
;; you may use a hashmap
(d/q {:query '{:find [?e ?n ?a ]
               :where [[?e :name ?n]
                       [?e :age ?a]]}
      :args [@conn]})
;; => #{[5 "Charlie" 40] [4 "Bob" 30] [3 "Alice" 25]}

;; query the history of the data
(d/q '[:find ?a
       :where
       [?e :name "Alice"]
       [?e :age ?a]]
  (d/history @conn))
;; => #{[20] [25]}

;; you might need to release the connection for specific stores
(d/release conn)

;; clean up the database if it is not needed any more
(d/delete-database cfg)

The API namespace provides compatibility to a subset of Datomic functionality and should work as a drop-in replacement on the JVM. The rest of Datahike will be ported to core.async to coordinate IO in a platform-neutral manner.

Refer to the docs for more information.

For simple examples have a look at the projects in the examples folder.

Example Projects

Relationship to Datomic and DataScript

Datahike provides similar functionality to Datomic and can be used as a drop-in replacement for a subset of it. The goal of Datahike is not to provide an open-source reimplementation of Datomic; rather, it is part of the replikativ toolbox aimed at building distributed data management solutions. We have spoken to many backend engineers and Clojure developers who stayed away from Datomic just because of its proprietary nature. Datahike should make approaching Datomic easier and, vice versa, people who only want the goodness of Datalog in small-scale applications should not have to worry about setting up and depending on Datomic.

Some differences are:

  • Datahike runs locally on one peer. A transactor might be provided in the future and can also be realized through any linearizing write mechanism, e.g. Apache Kafka. If you are interested, please contact us.
  • Datahike provides the database as a transparent value, i.e. you can directly access the index datastructures (hitchhiker-tree) and leverage their persistent nature for replication. These internals are not guaranteed to stay stable, but provide useful insight into what is going on and can be optimized.
  • Datahike supports GDPR compliance by allowing database entries to be completely removed.
  • Datomic has a REST interface and a Java API
  • Datomic provides timeouts
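A hedged sketch of the complete-removal capability mentioned above; this assumes Datahike's purge transaction functions, which remove datoms from the history as well (check the current API docs before relying on the exact names):

```clojure
;; retract an entity from the current database value; with history enabled
;; the old datoms remain queryable via d/history
(d/transact conn [[:db/retractEntity 3]])

;; assumption: purge the entity from the history too, for GDPR-style erasure
(d/transact conn [[:db.purge/entity 3]])
```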

Datomic is a full-fledged scalable database (as a service) built from the authors of Clojure and people with a lot of experience. If you need this kind of professional support, you should definitely stick to Datomic.

Datahike's query engine and most of its codebase come from DataScript. Without the work on DataScript, Datahike would not have been possible. Differences to Datomic with respect to the query engine are documented there.

When to Choose Datahike vs. Datomic vs. DataScript

Datahike

Pick Datahike if your app has modest requirements for a typical durable database, e.g. a single machine and a few million entities at most. Similarly, if you want an open-source solution and want to be able to study and tinker with the codebase of your database, Datahike provides a comparatively small and well-composed codebase that you can tweak to your needs. You should also always be able to migrate to Datomic easily later.

Datomic

Pick Datomic if you already know that you will need scalability later or if you need a network API for your database. There is also plenty of material about Datomic online already. Most of it applies in some form or another to Datahike, but it might be easier to use Datomic directly when you first learn Datalog.

DataScript

Pick DataScript if you want the fastest possible query performance and do not have a huge amount of data. You can persist the write operations separately and then use DataScript's fast in-memory index data structure. Note that Datahike does not currently support ClojureScript, although we plan to restore this functionality.
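Persisting DataScript's writes separately, as suggested above, can be sketched with its transaction listener; save-tx! is a hypothetical serialization helper you would supply:

```clojure
(require '[datascript.core :as ds])

(def conn (ds/create-conn {}))

;; hypothetical helper: append the transacted datoms to a durable log
(defn save-tx! [tx-data]
  (spit "/tmp/tx-log.edn" (str (pr-str tx-data) "\n") :append true))

;; record every transaction so the in-memory db can be replayed on startup
(ds/listen! conn :persist
            (fn [tx-report]
              (save-tx! (:tx-data tx-report))))
```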

ClojureScript Support

ClojureScript support is planned and work in progress. Please see Discussions.

Migration & Backup

The database can be exported to a flat file with:

(require '[datahike.migrate :refer [export-db import-db]])
(export-db conn "/tmp/eavt-dump")

You must do so before upgrading to a Datahike version that has changed the on-disk format. This can happen at any point until we reach version 1.0.0 and will always be communicated through the Changelog. After you have bumped the Datahike version, you can use

;; ... setup new-conn (recreate with correct schema)

(import-db new-conn "/tmp/eavt-dump")

to reimport your data into the new format.

The datoms are stored in CBOR format, which enables migration of binary data such as the byte array data type now supported by Datahike. You can also use the export as a backup.

If you are upgrading from a version prior to 0.1.2, which did not yet include the migration code, evaluate the datahike.migrate namespace manually in your project before exporting.

Have a look at the change log for recent updates.

Roadmap and Participation

Instead of providing a static roadmap, we have moved to working closely with the community to decide what will be worked on next in a dynamic and interactive way.

How does it work?

Go to Discussions and upvote the features you would like to see added to Datahike. As soon as someone is free to work on a new feature, we will address the one with the most upvotes.

Of course, you can also propose ideas yourself, either by adding them to the Discussions or by creating a pull request. Please note, though, that due to concerns about incompatibilities with earlier Datahike versions it might sometimes take a bit longer until your PR is integrated.

Commercial Support

We are happy to provide commercial support with lambdaforge. If you are interested in a particular feature, please let us know.

License

Copyright © 2014–2023 Konrad Kühne, Christian Weilbach, Chrislain Razafimahefa, Timo Kramer, Judith Massa, Nikita Prokopov, Ryan Sundberg

Licensed under Eclipse Public License (see LICENSE).