
devflowinc/trieve

All-in-one infrastructure for search, recommendations, RAG, and analytics offered via API


Top Related Projects

  • algoliasearch-client-javascript: ⚡️ A fully-featured and blazing-fast JavaScript API client to interact with Algolia.
  • Meilisearch: A lightning-fast search API that fits effortlessly into your apps, websites, and workflow.
  • Typesense: Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences.
  • elasticsearch-js: Official Elasticsearch client library for Node.js.
  • Sonic: 🦔 Fast, lightweight & schema-less search backend. An alternative to Elasticsearch that runs on a few MBs of RAM.

Quick Overview

Trieve is open-source infrastructure for building AI-powered search applications. It combines semantic vector search, typo-tolerant full-text search, document chunking, recommendations, and retrieval-augmented generation (RAG) with large language models behind a single API.

Pros

  • Combines vector search, full-text search, recommendations, and RAG in a single solution
  • Supports integration with popular large language models for enhanced search functionality
  • Provides flexible document chunking options for optimized search results
  • Open-source and self-hostable, allowing for customization and data privacy

Cons

  • Relatively new project, so stability issues and limited community support are possible
  • Documentation could be more comprehensive, especially for advanced use cases
  • Limited language support compared to some more established vector databases
  • May require significant computational resources for large-scale deployments

Code Examples

# Initialize Trieve client
from trieve import TrieveClient

client = TrieveClient(api_key="your_api_key", server_url="https://your-trieve-instance.com")

# Add a document to the database
document = {
    "content": "This is a sample document for semantic search.",
    "metadata": {"author": "John Doe", "category": "Sample"}
}
client.add_document(document)

# Perform a semantic search
query = "Find documents about semantic search"
results = client.search(query, top_k=5)

for result in results:
    print(f"Score: {result.score}, Content: {result.content[:100]}...")
# Use document chunking
long_document = {
    "content": "This is a very long document that needs to be chunked...",
    "metadata": {"title": "Long Document Example"}
}
client.add_document(long_document, chunk_size=500, chunk_overlap=50)
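
The snippets above use illustrative client and method names that may not exactly match the current SDK, so treat the official documentation as authoritative. Since the overview also highlights LLM integration, here is a hedged sketch of a topic-based RAG call made directly against Trieve's HTTP API with the requests library. The endpoint paths, header names, and body fields are assumptions drawn from the hosted API docs and may differ across versions.

# Hedged sketch of topic-based RAG over the Trieve HTTP API.
# Endpoint paths, headers, and field names are assumptions; verify against the docs.
import requests

BASE_URL = "https://api.trieve.ai"  # or the URL of your self-hosted instance
HEADERS = {
    "Authorization": "your_api_key",
    "TR-Dataset": "your_dataset_id",
    "Content-Type": "application/json",
}

# Create a topic that holds the conversation's memory (assumed route and fields).
topic = requests.post(
    f"{BASE_URL}/api/topic",
    headers=HEADERS,
    json={"name": "Sample RAG session", "owner_id": "user-123"},
    timeout=30,
).json()

# Send a message; the server retrieves relevant chunks and answers with the
# configured LLM (assumed route and field names).
answer = requests.post(
    f"{BASE_URL}/api/message",
    headers=HEADERS,
    json={
        "topic_id": topic["id"],
        "new_message_content": "What do my documents say about semantic search?",
    },
    timeout=60,
)
print(answer.text)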

Getting Started

To get started with Trieve:

  1. Install the Trieve client:

    pip install trieve-client
    
  2. Set up a Trieve instance (self-hosted or cloud-based)

  3. Initialize the client and start using Trieve:

    from trieve import TrieveClient
    
    client = TrieveClient(api_key="your_api_key", server_url="https://your-trieve-instance.com")
    
    # Add documents, perform searches, and utilize other features
    

For more detailed instructions and advanced usage, refer to the official Trieve documentation.

Competitor Comparisons

⚡️ A fully-featured and blazing-fast JavaScript API client to interact with Algolia.

Pros of algoliasearch-client-javascript

  • More mature and widely adopted, with extensive documentation and community support
  • Offers a broader range of features for search and indexing
  • Provides seamless integration with Algolia's hosted search service

Cons of algoliasearch-client-javascript

  • Requires a paid subscription to Algolia's service for production use
  • Less flexibility for customization compared to self-hosted solutions
  • May have higher latency due to reliance on external API calls

Code Comparison

algoliasearch-client-javascript:

const client = algoliasearch('YOUR_APP_ID', 'YOUR_API_KEY');
const index = client.initIndex('your_index_name');
index.search('query').then(({ hits }) => {
  console.log(hits);
});

Trieve (illustrative Rust client):

let client = Client::new("YOUR_API_KEY");
let search_result = client.search("your_index_name", "query").await?;
println!("{:?}", search_result);

The code comparison shows algoliasearch-client-javascript's JavaScript API next to an illustrative Rust client for Trieve. algoliasearch-client-javascript offers a familiar syntax for web developers, while Trieve's server is written in Rust, which brings performance and memory-safety benefits, and it is accessed over an HTTP API (the earlier examples use Python). Both aim to simplify integrating search into applications, but they cater to different deployment models: a hosted SaaS client versus self-hostable search infrastructure.

A lightning-fast search API that fits effortlessly into your apps, websites, and workflow

Pros of Meilisearch

  • More mature and widely adopted search engine with a larger community
  • Offers a wider range of features, including typo tolerance and faceted search
  • Provides official SDKs for multiple programming languages

Cons of Meilisearch

  • Requires more system resources and may be overkill for smaller projects
  • Less focused on AI-powered search and natural language processing
  • Steeper learning curve for advanced configurations

Code Comparison

Meilisearch index creation (Ruby client):

client = MeiliSearch::Client.new('http://127.0.0.1:7700', 'masterKey')
index = client.create_index('movies')

Trieve index creation (Python, illustrative):

from trieve import Trieve

trieve = Trieve(api_key="your_api_key")
index = trieve.create_index("movies")

Both Meilisearch and Trieve offer simple APIs for index creation. In each case you initialize a client and then create an index, so the basic workflow is nearly identical; the main differences lie in hosting model and feature focus rather than in this part of the API.

Meilisearch is a more established search engine with a broader feature set, making it suitable for larger projects with complex search requirements. Trieve, on the other hand, focuses on AI-powered search and may be more appropriate for projects requiring advanced natural language processing capabilities. The choice between the two depends on the specific needs of your project and the desired balance between features and simplicity.


Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences

Pros of Typesense

  • More mature and established project with a larger community and ecosystem
  • Offers a wider range of features and integrations out-of-the-box
  • Provides official client libraries for multiple programming languages

Cons of Typesense

  • Requires more setup and configuration compared to Trieve
  • May have a steeper learning curve for beginners
  • Less focused on AI-specific search capabilities

Code Comparison

Typesense query example (JavaScript client):

client.collections['books'].search({
  'q': 'harry potter',
  'query_by': 'title,author',
  'sort_by': 'ratings_count:desc'
})

Trieve query example (Python, illustrative):

client.search(
    collection_name="books",
    query="harry potter",
    fields=["title", "author"],
    sort=[("ratings_count", "desc")]
)

Both projects aim to provide efficient search capabilities, but Typesense offers a more comprehensive solution for general-purpose search, while Trieve focuses on AI-powered search and retrieval. Typesense may be better suited for larger projects with diverse search requirements, whereas Trieve could be more appropriate for applications specifically targeting AI-enhanced search functionality.

Official Elasticsearch client library for Node.js

Pros of elasticsearch-js

  • Mature and widely adopted library with extensive documentation
  • Supports a wide range of Elasticsearch features and operations
  • Backed by Elastic, ensuring long-term support and updates

Cons of elasticsearch-js

  • Larger codebase and potentially steeper learning curve
  • Focused solely on Elasticsearch, limiting its use for other databases
  • May require more configuration and setup for basic use cases

Code Comparison

elasticsearch-js:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

await client.index({
  index: 'my-index',
  document: { title: 'Test', content: 'Hello world!' }
})

Trieve (illustrative Rust client):

use trieve::Client;
let client = Client::new("http://localhost:8000");

client.index("my-index", json!({
    "title": "Test",
    "content": "Hello world!"
})).await?;

Summary

elasticsearch-js is a robust, feature-rich library for interacting with Elasticsearch, offering comprehensive support for its ecosystem; it is ideal for projects deeply integrated with Elasticsearch. Trieve is a higher-level, self-hostable search service that bundles vector search, full-text search, and RAG behind a single API, which can make it simpler to adopt for AI-centric search features. The choice between them depends on project requirements, existing infrastructure, and desired flexibility.


🦔 Fast, lightweight & schema-less search backend. An alternative to Elasticsearch that runs on a few MBs of RAM.

Pros of Sonic

  • Written in Rust, offering high performance and memory safety
  • Lightweight and designed for speed, with minimal resource usage
  • Supports multiple search methods including word, phrase, and fuzzy search

Cons of Sonic

  • Limited to text search and indexing, lacking advanced features like vector search
  • Requires more manual configuration and integration compared to Trieve
  • Less active development and smaller community support

Code Comparison

Sonic (search query):

let search_results = channel.search("default", "collection", "query", Some(10), None);

Trieve (search query):

results = client.search_chunks(
    query="your search query",
    dataset_id="your_dataset_id",
    filters={"key": "value"},
    page=1,
    limit=10
)

Summary

Sonic is a fast, lightweight search backend written in Rust, focusing on text search and indexing. It offers excellent performance but has a narrower feature set. Trieve, on the other hand, provides a more comprehensive solution with advanced features like vector search and semantic analysis, but may have higher resource requirements. The choice between the two depends on specific project needs, with Sonic being ideal for simple, high-performance text search, and Trieve offering more advanced capabilities for complex search and analysis tasks.


README

Trieve Logo

Sign Up (1k chunks free) | Documentation | Meeting Link | Discord | Matrix


Trieve is all-in-one infrastructure for building hybrid vector search, recommendations, and RAG



Features

  • 🔒 Self-Hosting in your VPC or on-prem: Buy a license to host in your company's VPC or on-prem with our ready-to-go Docker containers and Terraform templates.
  • 🧠 Semantic Dense Vector Search: Integrates with OpenAI or Jina embedding models and Qdrant to provide semantic vector search.
  • 🔍 Typo-Tolerant Full-Text/Neural Search: Every uploaded chunk is vectorized with naver/efficient-splade-VI-BT-large-query for typo-tolerant, high-quality neural sparse-vector search.
  • 🖊️ Sub-Sentence Highlighting: Highlight the matching words or sentences within a chunk and bold them on search to enhance UX for your users. Shout out to the simsearch crate!
  • 🌟 Recommendations: Find similar chunks (or files if using grouping) with the recommendation API. Very helpful if you have a platform where users favorite, bookmark, or upvote content.
  • 🤖 Convenient RAG API Routes: We integrate with OpenRouter to provide you with access to any LLM you would like for RAG. Try our routes for fully-managed RAG with topic-based memory management or select your own context RAG.
  • 💼 Bring Your Own Models: If you'd like, you can bring your own text-embedding, SPLADE, cross-encoder re-ranking, and/or large-language model (LLM) and plug it into our infrastructure.
  • 🔄 Hybrid Search with cross-encoder re-ranking: For the best results, use hybrid search with BAAI/bge-reranker-large re-rank optimization (a request sketch follows this list).
  • 📆 Recency Biasing: Easily bias search results toward the most recent content to prevent staleness.
  • 🛠️ Tunable Popularity-Based Ranking (Merchandising): Weight indexed documents by popularity, total sales, or any other arbitrary metric for tunable relevancy.
  • 🕳️ Filtering: Date-range, substring match, tag, numeric, and other filter types are supported.
  • 🧐 Duplicate Detection: Check out our docs on collision-based dup detection to learn about how we handle duplicates. This is a setting you can turn on or off.
  • 👥 Grouping: Mark multiple chunks as being part of the same file and search at the file level such that the same top-level result never appears twice.
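
To make the hybrid search and filtering bullets above more concrete, here is a minimal Python sketch of a filtered hybrid search request over HTTP. The endpoint path, header names, and request/response fields are assumptions based on the hosted API documentation and may not match your deployment; treat the official docs as authoritative.

# Hedged sketch: hybrid search with a tag filter via the Trieve HTTP API.
# Endpoint, headers, and body/response fields are assumptions; check the docs.
import requests

response = requests.post(
    "https://api.trieve.ai/api/chunk/search",  # assumed hosted endpoint
    headers={
        "Authorization": "your_api_key",
        "TR-Dataset": "your_dataset_id",       # dataset to search against
        "Content-Type": "application/json",
    },
    json={
        "query": "harry potter",
        "search_type": "hybrid",               # dense + sparse with re-ranking
        "filters": {"must": [{"field": "tag_set", "match": ["fantasy"]}]},
        "page_size": 10,
    },
    timeout=30,
)
response.raise_for_status()

# The response field name is also an assumption; print the raw body if unsure.
for chunk in response.json().get("score_chunks", []):
    print(chunk)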

Are we missing a feature that your use case would need? Call us at 628-222-4090, open a GitHub issue, or join the Matrix community and tell us! We are a small company that is still very hands-on and eager to build what you need; professional services are available.

Roadmap

Our current top priorities are as follows. They are subject to change as current or potential customers ask for things.

  1. Observability and metrics (likely something w/ Clickhouse)
  2. Benchmarking (going to aim for a 1M, 10M, and 100M vector benchmark)
  3. SDKs (can generate from OpenAPI spec, but would like to test a bit more)

How to contribute

  1. Find an issue in the issues tab that you would like to work on.
  2. Fork the repository and clone it to your local machine
  3. Create a new branch with a descriptive name: git checkout -b your-branch-name
  4. Solve the issue by adding or removing code on your forked branch.
  5. Test your changes locally to ensure that they do not break anything
  6. Commit your changes with a descriptive commit message: git commit -m "Add descriptive commit message here"
  7. Push your changes to your forked repository: git push origin your-branch-name
  8. Open a pull request to the main repository and describe your changes in the PR description

Self-hosting the API and UIs

We have a full self-hosting guide available on our documentation page here.

Local development with Linux

Debian/Ubuntu packages needed

sudo apt install curl \
gcc \
g++ \
make \
pkg-config \
python3 \
python3-pip \
libpq-dev \
libssl-dev \
openssl

Arch Packages needed

sudo pacman -S base-devel postgresql-libs

Install NodeJS and Yarn

You can install NVM using its install script.

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash

Restart your terminal so that NVM is picked up by your shell profile. Then install the Node.js LTS release and Yarn.

nvm install --lts
npm install -g yarn

Make server tmp dir

mkdir server/tmp

Install Rust

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Install cargo-watch

cargo install cargo-watch

Set up .env files

cp .env.analytics ./frontends/analytics/.env
cp .env.chat ./frontends/chat/.env
cp .env.search ./frontends/search/.env
cp .env.server ./server/.env
cp .env.dashboard ./frontends/dashboard/.env

Add your LLM_API_KEY to ./server/.env

Here is a guide for acquiring that.

Steps once you have the key

  1. Open the ./server/.env file
  2. Replace the value of LLM_API_KEY with your own OpenAI API key.
  3. Replace the value of OPENAI_API_KEY with your own OpenAI API key.

Start docker container services needed for local dev

cat .env.chat .env.search .env.server .env.docker-compose > .env

./convenience.sh -l
# or
COMPOSE_PROFILE=dev docker compose up

Start services for local dev

We know this is bad. Currently, we recommend managing this through tmux or VSCode terminal tabs.

# terminal 1: API server
cd server
cargo watch -x run

# terminal 2: search UI
cd frontends/search
yarn
yarn dev

# terminal 3: chat UI
cd frontends/chat
yarn
yarn dev

We have a tmux config we use internally that you can use.

Local development with Windows

Install NodeJS and Yarn

You can download the latest version of Node.js from here. Open the downloaded file and follow the steps in the installer.

After completing the installation, open a PowerShell window with administrator permissions.

npm install -g yarn

After installation, Yarn might throw an error when used, due to Windows' execution policy. Change the execution policy to allow scripts signed by a trusted publisher to run by entering this command in an admin PowerShell:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Install Rust

You can download the latest version of Rust from here. Follow the installer's directions and install the prerequisites.

After installation, open a new PowerShell window with administrator permissions.

cargo install cargo-watch

Install Docker

Follow the instructions to download Docker Desktop for Windows from here. You may need to follow the instructions to enable WSL 2.

Install Postgres dependencies for building

Download PostgreSQL 13 from here. Do not use any other version of PostgreSQL, because diesel has issues with other versions.

When installing, ensure that the PostgreSQL server is set to a port other than 5432 to prevent it from interfering with the docker container.

Add Postgres to PATH

[Environment]::SetEnvironmentVariable("PATH", $Env:PATH + ";C:\Program Files\PostgreSQL\13\lib;C:\Program Files\PostgreSQL\13\bin", [EnvironmentVariableTarget]::Machine)

Set up .env files

cp .env.analytics ./frontends/analytics/.env
cp .env.chat ./frontends/chat/.env
cp .env.search ./frontends/search/.env
cp .env.server ./server/.env
cp .env.dashboard ./frontends/dashboard/.env

Add your LLM_API_KEY to ./server/.env

Here is a guide for acquiring that.

Steps once you have the key

  1. Open the ./server/.env file
  2. Replace the value of LLM_API_KEY with your own OpenAI API key.
  3. Replace the value of OPENAI_API_KEY with your own OpenAI API key.

Start Docker containers

Start the docker containers using the batch script.

Get-Content .env.chat, .env.search, .env.server, .env.docker-compose | Set-Content .env
./convenience.bat l

Start services for local dev

You need three separate PowerShell windows, or something like VSCode terminal tabs, to manage this.

# terminal 1: API server
cd server
cargo watch -x run

# terminal 2: search UI
cd frontends/search
yarn
yarn dev

# terminal 3: chat UI
cd frontends/chat
yarn
yarn dev

Install ImageMagick (Linux) - only needed if you want to use the pdf_from_range route

apt install libjpeg-dev libpng-dev libtiff-dev

curl https://imagemagick.org/archive/ImageMagick.tar.gz | tar xz
cd ImageMagick
./configure
make uninstall
make install

How to debug diesel by getting the exact generated SQL

// Print the SQL diesel will generate for a query (Postgres backend):
println!("{}", diesel::debug_query::<diesel::pg::Pg, _>(&query));

Local Setup for Testing Stripe Features

Install Stripe CLI.

  1. stripe login
  2. stripe listen --forward-to localhost:8090/api/stripe/webhook
  3. set the STRIPE_WEBHOOK_SECRET in the server/.env to the resulting webhook signing secret
  4. stripe products create --name trieve --default-price-data.unit-amount 1200 --default-price-data.currency usd
  5. stripe plans create --amount=1200 --currency=usd --interval=month --product={id from response of step 4}

Self-hosting / Deploy to AWS

Refer to the self-hosting guide here.