
prisma-engines

🚂 Engine components of Prisma ORM


Top Related Projects

  • Hasura GraphQL Engine: Blazing fast, instant realtime GraphQL APIs on your DB with fine-grained access control; also trigger webhooks on database events.
  • Supabase: The open source Firebase alternative. Supabase gives you a dedicated Postgres database to build your web, mobile, and AI applications.
  • Directus: The flexible backend for all your projects 🐰 Turn your DB into a headless CMS, admin panels, or apps with a custom UI, instant APIs, auth & more.
  • Strapi: 🚀 The leading open-source headless CMS. It's 100% JavaScript/TypeScript, fully customizable, and developer-first.
  • Keystone: The superpowered headless CMS for Node.js, built with GraphQL and React.

Quick Overview

Prisma Engines is a core component of the Prisma ORM ecosystem, providing the underlying query execution and database connectivity for Prisma. It's written in Rust for high performance and includes the query engine, the schema engine (which handles migrations and introspection), and the schema formatter, all of which are essential to Prisma's functionality.

Pros

  • High performance due to Rust implementation
  • Cross-platform compatibility
  • Supports multiple database systems
  • Provides type-safe database access

Cons

  • Complex internal architecture
  • Steep learning curve for contributors
  • Limited documentation for engine internals
  • Tightly coupled with Prisma ORM

Code Examples

As Prisma Engines is a core component and not directly used as a standalone library, code examples are not applicable. The engines are typically used through the Prisma ORM interface.

Getting Started

Prisma Engines is not meant to be used directly by developers. Instead, it's utilized through the Prisma ORM. To get started with Prisma, which incorporates these engines, follow these steps:

  1. Install the Prisma CLI:

```shell
npm install prisma --save-dev
```

  2. Initialize Prisma in your project:

```shell
npx prisma init
```

  3. Define your database schema in prisma/schema.prisma

  4. Generate Prisma Client:

```shell
npx prisma generate
```

  5. Use Prisma Client in your application:

```typescript
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Your database operations here
}

main()
  .catch((e) => console.error(e))
  .finally(async () => await prisma.$disconnect())
```

For more detailed instructions and advanced usage, refer to the official Prisma documentation.

Competitor Comparisons

Hasura GraphQL Engine: Blazing fast, instant realtime GraphQL APIs on your DB with fine-grained access control; also trigger webhooks on database events.

Pros of GraphQL Engine

  • More comprehensive out-of-the-box GraphQL API generation
  • Built-in real-time subscriptions and event triggers
  • Extensive authorization and access control features

Cons of GraphQL Engine

  • Less flexible for custom data modeling and migrations
  • Steeper learning curve for complex setups
  • More resource-intensive for smaller projects

Code Comparison

GraphQL Engine (Hasura) generates a GraphQL schema such as:

```graphql
type User {
  id: Int!
  name: String!
  email: String!
}

type Query {
  users: [User!]!
}
```

Prisma Engines:

```prisma
model User {
  id    Int     @id @default(autoincrement())
  name  String
  email String  @unique
}
```

Both projects aim to simplify database access and API creation, but they take different approaches. GraphQL Engine focuses on providing a complete GraphQL API solution with advanced features like real-time subscriptions and fine-grained access control. Prisma Engines, on the other hand, offers a more flexible and lightweight approach to data modeling and database access, making it easier to integrate into existing projects or customize for specific needs.

GraphQL Engine excels in scenarios requiring rapid API development with complex authorization rules, while Prisma Engines shines in projects that need more control over the data layer and prefer a code-first approach to database schema management.


Supabase: The open source Firebase alternative. Supabase gives you a dedicated Postgres database to build your web, mobile, and AI applications.

Pros of Supabase

  • Offers a full-stack development platform with built-in authentication, real-time subscriptions, and storage
  • Provides a user-friendly interface for database management and API generation
  • Supports multiple programming languages and frameworks out of the box

Cons of Supabase

  • Less flexible for complex database schemas and relationships compared to Prisma
  • May have a steeper learning curve for developers new to PostgreSQL
  • Limited customization options for query optimization compared to Prisma's fine-grained control

Code Comparison

Supabase (JavaScript):

```javascript
const { data, error } = await supabase
  .from('users')
  .select('name, email')
  .eq('id', 123)
```

Prisma (TypeScript):

```typescript
const user = await prisma.user.findUnique({
  where: { id: 123 },
  select: { name: true, email: true }
})
```

Key Differences

  • Supabase focuses on providing a complete backend-as-a-service solution, while Prisma-engines is primarily an ORM and database toolkit
  • Prisma offers more granular control over database operations and schema management
  • Supabase includes additional features like real-time subscriptions and file storage, which are not part of Prisma's core functionality

Directus: The flexible backend for all your projects 🐰 Turn your DB into a headless CMS, admin panels, or apps with a custom UI, instant APIs, auth & more.

Pros of Directus

  • Provides a complete headless CMS solution with a user-friendly admin interface
  • Offers flexible data modeling and API generation out of the box
  • Supports real-time collaboration and granular permissions management

Cons of Directus

  • May have a steeper learning curve for developers accustomed to traditional ORMs
  • Potentially higher resource consumption due to its full-featured nature
  • Less focused on database-specific optimizations compared to Prisma Engines

Code Comparison

Directus schema definition:

```json
{
  "name": "products",
  "fields": [
    {
      "field": "id",
      "type": "integer",
      "primary_key": true
    },
    {
      "field": "name",
      "type": "string"
    }
  ]
}
```

Prisma schema definition:

```prisma
model Product {
  id   Int    @id @default(autoincrement())
  name String
}
```

While both repositories focus on database interactions, Directus provides a more comprehensive solution for content management, whereas Prisma Engines emphasizes efficient database operations and type-safe queries. Directus offers greater flexibility in terms of content modeling and administration, while Prisma Engines excels in performance and integration with TypeScript-based projects.


Strapi: 🚀 The leading open-source headless CMS. It's 100% JavaScript/TypeScript, fully customizable, and developer-first.

Pros of Strapi

  • More comprehensive CMS solution with built-in admin panel and content management features
  • Highly customizable and extensible through plugins and custom components
  • Supports multiple databases out of the box (SQLite, PostgreSQL, MySQL, MongoDB)

Cons of Strapi

  • Heavier and more resource-intensive due to its full-featured nature
  • Steeper learning curve for developers new to the ecosystem
  • Less focused on database operations and query optimization compared to Prisma Engines

Code Comparison

Strapi (Content-Type definition):

```javascript
module.exports = {
  attributes: {
    title: {
      type: 'string',
      required: true,
    },
    content: {
      type: 'richtext',
    },
  },
};
```

Prisma Engines (Schema definition):

```prisma
model Post {
  id      Int     @id @default(autoincrement())
  title   String
  content String?
}
```

While Strapi focuses on defining content types with various field types and validations, Prisma Engines emphasizes database schema definition with a more concise syntax. Strapi's approach is geared towards content management, while Prisma Engines is optimized for database operations and type-safe queries.

Keystone: The superpowered headless CMS for Node.js, built with GraphQL and React.

Pros of Keystone

  • More flexible and customizable, allowing for complex data modeling and relationships
  • Provides a complete CMS solution with built-in admin UI
  • Supports GraphQL API out of the box, enabling easier frontend integration

Cons of Keystone

  • Steeper learning curve due to its extensive feature set
  • Potentially slower performance for large-scale applications
  • Less focus on type safety compared to Prisma

Code Comparison

Keystone schema definition:

```javascript
const { Text, Relationship } = require('@keystonejs/fields');

const User = {
  fields: {
    name: { type: Text },
    posts: { type: Relationship, ref: 'Post', many: true },
  },
};
```

Prisma schema definition:

```prisma
model User {
  id    Int     @id @default(autoincrement())
  name  String
  posts Post[]
}
```

Both Prisma and Keystone offer powerful ORM capabilities, but they cater to different use cases. Prisma focuses on type-safe database access and migrations, while Keystone provides a full-featured CMS framework with additional functionality like authentication and admin interfaces. The choice between them depends on project requirements and developer preferences.


README

Prisma Engines


This repository contains a collection of engines that power the core stack for Prisma, most prominently Prisma Client and Prisma Migrate.

If you're looking for how to install Prisma or any of the engines, the Getting Started guide might be useful.

This document describes some of the internals of the engines, and how to build and test them.

What's in this repository

This repository contains three engines:

  • Query engine, used by the client to run database queries from Prisma Client
  • Schema engine, used to create and run migrations and introspection
  • Prisma Format, used to format prisma files

Additionally, psl (Prisma Schema Language) is the library that defines the language's syntax, how it's parsed, and so on.

You'll also find:

  • libs, for various (small) libraries such as macros, user facing errors, various connector/database-specific libraries, etc.
  • a docker-compose.yml file that's helpful for running tests and bringing up containers for various databases
  • a flake.nix file for bringing up all dependencies and making it easy to build the code in this repository (the use of this file and nix is entirely optional, but can be a good and easy way to get started)
  • an .envrc file to make it easier to set everything up, including the nix shell

Documentation

The API docs (cargo doc) are published on our fabulous repo page.

Building Prisma Engines

Prerequisites:

  • Installed the latest stable version of the Rust toolchain. You can get the toolchain at rustup or the package manager of your choice.
  • Linux only: OpenSSL is required to be installed.
  • Installed direnv, then direnv allow on the repository root.
    • Make sure direnv is hooked into your shell
    • Alternatively: Load the defined environment in ./.envrc manually in your shell.
  • For Apple Silicon (M1) users: Install Protocol Buffers

Note for nix users: it should be enough to direnv allow.

How to build:

To build all engines, simply execute cargo build on the repository root. This builds non-production debug binaries. If you want to build the optimized binaries in release mode, the command is cargo build --release.

Depending on how you invoked cargo in the previous step, you can find the compiled binaries inside the repository root in the target/debug (without --release) or target/release directories (with --release):

| Prisma Component | Path to Binary                            |
| ---------------- | ----------------------------------------- |
| Query Engine     | `./target/[debug\|release]/query-engine`  |
| Schema Engine    | `./target/[debug\|release]/schema-engine` |
| Prisma Format    | `./target/[debug\|release]/prisma-fmt`    |

Prisma Schema Language

The Prisma Schema Language is a library which defines the data structures and parsing rules for prisma files, including the available database connectors. For more technical details, please check the library README.

The PSL is used throughout the schema engine, as well as in prisma format. The DataModeL (DML), which is an annotated version of the PSL, is also used as input for the query engine.
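For reference, here is a minimal prisma file of the kind the PSL library parses; the datasource, generator, and model shown are illustrative placeholders, not anything mandated by this repository:

```prisma
// Minimal illustrative schema: one datasource, one generator, one model.
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id    Int    @id @default(autoincrement())
  email String @unique
}
```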

Query Engine

The Query Engine is how Prisma Client queries are executed. Here's a brief description of what it does:

  • takes as input an annotated version of the Prisma Schema file called the DataModeL (DML),
  • using the DML (specifically, the datasources and providers), it builds up a GraphQL model for queries and responses,
  • runs as a server listening for GraphQL queries,
  • it translates the queries to the respective native datasource(s) and returns GraphQL responses, and
  • handles all connections and communication with the native databases.

When used through Prisma Client, there are two ways for the Query Engine to be executed:

  • as a binary, downloaded during installation, launched at runtime; communication happens via HTTP (./query-engine/query-engine)
  • as a native, platform-specific Node.js addon; also downloaded during installation (./query-engine/query-engine-node-api)
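As a sketch of the HTTP mode above, the snippet below builds a GraphQL-style request body for the stand-alone engine. The payload shape and port are assumptions for illustration only; the engine's wire protocol is internal and carries no stability guarantees.

```typescript
// Sketch: a GraphQL-style request a client might send to the stand-alone
// query engine over HTTP. The payload shape and URL below are assumptions
// for illustration; the engine's wire protocol is internal and unstable.

interface EngineRequest {
  query: string;
  variables: Record<string, unknown>;
}

// Build the JSON body for a query against the engine's GraphQL model.
function buildEngineRequest(
  query: string,
  variables: Record<string, unknown> = {},
): EngineRequest {
  return { query, variables };
}

const body = buildEngineRequest(
  'query { findManyUser(where: { email: { contains: "@prisma.io" } }) { id email } }',
);

// Against a running engine you would POST it, e.g. (hypothetical port):
// await fetch("http://localhost:4466/", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```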

Usage

You can also run the Query Engine as a stand-alone GraphQL server.

Warning: There is no guaranteed API stability. If you use it in production, please be aware that the API and the query language can change at any time.

Notable environment flags:

  • RUST_LOG_FORMAT=(devel|json) sets the log format. By default outputs json.
  • QE_LOG_LEVEL=(info|debug|trace) sets the log level for the Query Engine. If you need Query Graph debugging logs, set it to "trace"
  • FMT_SQL=1 enables logging formatted SQL queries
  • PRISMA_DML_PATH=[path_to_datamodel_file] should point to the datamodel file location. This or PRISMA_DML is required for the Query Engine to run.
  • PRISMA_DML=[base64_encoded_datamodel] an alternative way to provide a datamodel for the server.
  • RUST_BACKTRACE=(0|1) if set to 1, error backtraces are printed to STDERR.
  • LOG_QUERIES=[anything] if set, SQL queries are written to the INFO log. The right log level needs to be enabled for them to be visible in the terminal.
  • RUST_LOG=[filter] sets the filter for the logger. Can be trace, debug, info, warning or error; this outputs all logs from every crate at that level. The .envrc in this repo shows how to log different parts of the system in a more granular way.
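To make the interaction between these flags concrete, here is a TypeScript sketch that resolves them with the defaults documented above. It mirrors the described behaviour only; the engine's actual configuration code is Rust.

```typescript
// Sketch: resolving the query-engine environment flags documented above.
// Defaults follow the docs; this is illustrative TypeScript, not the
// engine's actual (Rust) configuration code.

interface EngineConfig {
  logFormat: "devel" | "json";
  logLevel: "info" | "debug" | "trace";
  formatSql: boolean;
  datamodelPath?: string;
  datamodelBase64?: string;
}

function resolveConfig(env: Record<string, string | undefined>): EngineConfig {
  // One of PRISMA_DML_PATH or PRISMA_DML is required for the engine to run.
  if (!env.PRISMA_DML_PATH && !env.PRISMA_DML) {
    throw new Error("Either PRISMA_DML_PATH or PRISMA_DML must be set");
  }
  return {
    // RUST_LOG_FORMAT defaults to json.
    logFormat: env.RUST_LOG_FORMAT === "devel" ? "devel" : "json",
    // QE_LOG_LEVEL defaults to info; "trace" enables query-graph logs.
    logLevel: (env.QE_LOG_LEVEL ?? "info") as EngineConfig["logLevel"],
    // FMT_SQL=1 enables logging formatted SQL queries.
    formatSql: env.FMT_SQL === "1",
    datamodelPath: env.PRISMA_DML_PATH,
    datamodelBase64: env.PRISMA_DML,
  };
}
```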

Starting the Query Engine:

The engine can be started either by using the cargo build tool, or by pre-building a binary and running it directly. If using cargo, replace any command that starts with ./query-engine with cargo run --bin query-engine --.

You can also pass --help to find out more options to run the engine.

Metrics

Running make show-metrics will start Prometheus and Grafana with a default metrics dashboard. Prometheus will scrape the /metrics endpoint to collect the engine's metrics.

Navigate to http://localhost:3000 to view the Grafana dashboard.

Schema Engine

The Schema Engine does a couple of things:

  • creates new migrations by comparing the prisma file with the current state of the database, in order to bring the database in sync with the prisma file
  • runs these migrations and keeps track of which migrations have been executed
  • (re-)generates a prisma schema file starting from a live database

The engine uses:

  • the prisma files, as the source of truth
  • the database it connects to, for diffing and running migrations, as well as keeping track of migrations in the _prisma_migrations table
  • the prisma/migrations directory which acts as a database of existing migrations
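The bookkeeping described above can be sketched as a simple comparison between the migrations directory and the _prisma_migrations table. This is a simplified illustration only; the real schema engine also tracks checksums, timestamps, and failure states.

```typescript
// Sketch: which migrations still need to run, from the directory-vs-table
// comparison described above. Simplified illustration; the real schema
// engine also tracks checksums, timestamps, and failure states.

type MigrationName = string; // folder name, e.g. "20240101120000_init"

function pendingMigrations(
  directory: MigrationName[],  // migrations on disk, in order
  applied: Set<MigrationName>, // names recorded in _prisma_migrations
): MigrationName[] {
  return directory.filter((name) => !applied.has(name));
}

const onDisk = ["20240101120000_init", "20240215093000_add_email"];
const alreadyApplied = new Set(["20240101120000_init"]);
console.log(pendingMigrations(onDisk, alreadyApplied)); // logs the one unapplied migration
```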

Prisma format

Prisma format can format prisma schema files. It also comes as a WASM module via a node package. You can read more here.

Debugging

When trying to debug code, here are a few things that might be useful:

  • use the language server; being able to go to definition and reason about code can make things a lot easier,
  • add dbg!() statements to validate code paths, inspect variables, etc.,
  • you can control the amount of logs you see, and where they come from using the RUST_LOG environment variable; see the documentation,
  • you can use the test-cli to test migration and introspection without having to go through the prisma npm package.

Testing

There are two test suites for the engines: Unit tests and integration tests.

  • Unit tests: They test internal functionality of individual crates and components.

    You can find them across the whole codebase, usually in ./tests folders at the root of modules. These tests can be executed via cargo test. Note that some of them require the TEST_DATABASE_URL environment variable to be set.

  • Integration tests: They run GraphQL queries against isolated instances of the Query Engine and assert that the responses are correct.

    You can find them at ./query-engine/connector-test-kit-rs.

Set up & run tests:

Prerequisites:

  • Installed Rust toolchain.
  • Installed Docker.
  • Installed direnv, then direnv allow on the repository root.
    • Alternatively: Load the defined environment in ./.envrc manually in your shell.

Setup:

There are helper make commands to set up a test environment for a specific database connector you want to test. The commands set up a container (if needed) and write the .test_config file, which is picked up by the integration tests:

  • make dev-mysql: MySQL 5.7
  • make dev-mysql8: MySQL 8
  • make dev-postgres: PostgreSQL 10
  • make dev-sqlite: SQLite
  • make dev-mongodb_5: MongoDB 5

On Windows: if you are not using WSL, make is not available; check what the corresponding make target does and perform the steps manually. In essence, this means editing the .test_config file and starting the needed Docker containers.

To get the tests working, read the contents of .envrc. Then open Edit environment variables for your account in the Windows settings, and add at least the correct values for the following variables:

  • WORKSPACE_ROOT should point to the root directory of prisma-engines project.
  • PRISMA_BINARY_PATH is usually %WORKSPACE_ROOT%\target\release\query-engine.exe.
  • SCHEMA_ENGINE_BINARY_PATH should be %WORKSPACE_ROOT%\target\release\schema-engine.exe.

Other variables may or may not be useful.

Run:

Run cargo test in the repository root.

Testing driver adapters

Please refer to the Testing driver adapters section in the connector-test-kit-rs README.

҄¹ï¸ Important note on developing features that require changes to the both the query engine, and driver adapters code

As explained in Testing driver adapters, running DRIVER_ADAPTER=$adapter make test-qe will ensure you have prisma checked out in your filesystem in the same directory as prisma-engines. This is needed because the driver adapters code is symlinked in prisma-engines.

When working on a feature or bugfix spanning adapters code and query-engine code, you will need to open sibling PRs in prisma/prisma and prisma/prisma-engines respectively. Locally, each time you run DRIVER_ADAPTER=$adapter make test-qe tests will run using the driver adapters built from the source code in the working copy of prisma/prisma. All good.

In CI, though, we need to denote which branch of prisma/prisma to use for tests, since there is no working copy of prisma/prisma before tests run. The CI job clones the prisma/prisma main branch by default, which doesn't include your local changes. To test the integration, we can tell CI to use the branch of prisma/prisma containing the adapter changes. To do that, you can use a simple convention in commit messages, like this:

git commit -m "DRIVER_ADAPTERS_BRANCH=prisma-branch-with-changes-in-adapters [...]"

GitHub actions will then pick up the branch name and use it to clone that branch's code of prisma/prisma, and build the driver adapters code from there.
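The commit-message convention can be expressed as a small parser. This is a hypothetical helper for illustration, not the actual GitHub Actions code:

```typescript
// Sketch: extract the DRIVER_ADAPTERS_BRANCH override from a commit
// message, as the CI job described above does. Hypothetical helper,
// not the actual GitHub Actions code.

function driverAdaptersBranch(commitMessage: string): string | null {
  const match = commitMessage.match(/DRIVER_ADAPTERS_BRANCH=(\S+)/);
  return match ? match[1] : null;
}

console.log(driverAdaptersBranch(
  "DRIVER_ADAPTERS_BRANCH=prisma-branch-with-changes-in-adapters fix nested tx",
)); // prints prisma-branch-with-changes-in-adapters
console.log(driverAdaptersBranch("fix: unrelated change")); // prints null
```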

When it's time to merge the sibling PRs, you'll need to merge the prisma/prisma PR first, so when merging the engines PR you have the code of the adapters ready in prisma/prisma main branch.

Testing engines in prisma/prisma

You can trigger releases from this repository to npm that can be used for testing the engines in prisma/prisma either automatically or manually:

Automated integration releases from this repository to npm

Any branch name starting with integration/ will, first, run the full test suite in GH Actions and, second, run the release workflow (build and upload engines to S3 & R2). To trigger the release on any other branch, you have two options:

  • Either run the build-engines workflow manually on the desired branch.
  • Or add the [integration] string anywhere in your commit messages.

The journey through the pipeline is the same as a commit on the main branch.

  • It will trigger prisma/engines-wrapper and publish a new @prisma/engines-version npm package but on the integration tag.
  • Which triggers prisma/prisma to create a chore(Automated Integration PR): [...] PR with a branch name also starting with integration/
  • Since in prisma/prisma we also trigger the publish pipeline when a branch name starts with integration/, this will publish all prisma/prisma monorepo packages to npm on the integration tag.
  • Our ecosystem-tests tests will automatically pick up this new version and run tests, results will show in GitHub Actions

End to end, this will take a minimum of ~1h20 to complete, but is completely automated 🤖
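Putting the trigger rules together, here is a sketch of the release condition. It is illustrative only; the real logic lives in the GitHub Actions workflows, and a manual build-engines workflow run is a third trigger not modelled here:

```typescript
// Sketch: when a commit triggers the integration release pipeline, per
// the rules above. Illustrative only; the real logic lives in the GitHub
// Actions workflows (a manual build-engines run is a third trigger).

function triggersIntegrationRelease(
  branch: string,
  commitMessage: string,
): boolean {
  // integration/* branches always release; other branches release when
  // the commit message contains the [integration] marker.
  return (
    branch.startsWith("integration/") ||
    commitMessage.includes("[integration]")
  );
}

console.log(triggersIntegrationRelease("integration/sql-tx", "chore: wip")); // true
console.log(triggersIntegrationRelease("feat/x", "feat: y [integration]")); // true
console.log(triggersIntegrationRelease("feat/x", "feat: y")); // false
```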

Notes:

  • tests and publishing workflows run in parallel in both the prisma/prisma-engines and prisma/prisma repositories, so it is possible that the engines are published and only afterwards the test suite discovers a defect. It is advised to keep an eye on both the test and publishing workflows.

Manual integration releases from this repository to npm

In addition to the automated integration releases for integration/ branches, you can also trigger a publish manually in the Buildkite [Test] Prisma Engines job if it succeeds for any branch name. Click "🚀 Publish binaries" at the bottom of the test list to unlock the publishing step. When all the jobs in [Release] Prisma Engines succeed, you also have to unlock the next step by clicking "🚀 Publish client". This will then trigger the same journey as described above.

Parallel rust-analyzer builds

When rust-analyzer runs cargo check, it locks the build directory and stops any other cargo commands from running until it has completed, which makes the build process feel a lot longer. You can avoid this by setting a different build path for rust-analyzer: open the VSCode settings, search for Check on Save: Extra Args, look for the Rust-analyzer › Check On Save: Extra Args setting, and add a separate target directory for rust-analyzer. Something like:

--target-dir:/tmp/rust-analyzer-check

Community PRs: create a local branch for a branch coming from a fork

To trigger an automated or manual integration release from this repository to npm, branches coming from forks need to be pulled into this repository so that the Buildkite job is triggered. You can use these GitHub and git CLI commands to achieve that easily:

```shell
gh pr checkout 4375
git checkout -b integration/sql-nested-transactions
git push --set-upstream origin integration/sql-nested-transactions
```

If this branch needs to be re-created because the PR has been updated, deleting it and re-creating it will make sure the content is identical and avoid any conflicts.

```shell
git branch --delete integration/sql-nested-transactions
gh pr checkout 4375
git checkout -b integration/sql-nested-transactions
git push --set-upstream origin integration/sql-nested-transactions --force
```

Security

If you have a security issue to report, please contact us at security@prisma.io.