
QuivrHQ/quivr

Opinionated RAG for integrating GenAI in your apps 🧠 Focus on your product rather than the RAG. Easy integration in existing products with customisation! Any LLM: GPT-4, Groq, Llama. Any vector store: PGVector, Faiss. Any files. Any way you want.


Top Related Projects

  • LangChain: 🦜🔗 Build context-aware reasoning applications
  • Chroma: the AI-native open-source embedding database
  • Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps
  • LlamaIndex: a data framework for your LLM applications
  • openai-cookbook: Examples and guides for using the OpenAI API

Quick Overview

Quivr is an open-source, AI-powered personal productivity assistant. It allows users to store and retrieve information from various sources, acting as a "second brain" to enhance memory and productivity. Quivr uses advanced language models to process and understand user inputs, making it a powerful tool for knowledge management and task organization.

Pros

  • Integrates multiple data sources (files, links, notes) into a unified knowledge base
  • Utilizes AI for intelligent information retrieval and task management
  • Open-source, allowing for community contributions and customization
  • Supports natural language interactions for ease of use

Cons

  • May require technical knowledge for setup and customization
  • Potential privacy concerns due to AI processing of personal data
  • Dependency on external AI services may affect reliability and cost
  • Learning curve for optimal usage of all features

Getting Started

To get started with Quivr:

  1. Clone the repository:

    git clone https://github.com/QuivrHQ/quivr.git
    
  2. Install dependencies:

    cd quivr
    pip install -r requirements.txt
    
  3. Set up environment variables (a sanity-check sketch follows these steps):

    cp .env.example .env
    # Edit .env file with your configuration
    
  4. Run the application:

    python main.py
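
Before relying on step 4, you can confirm that the variables from step 3 are actually set. This is a minimal sanity-check sketch; OPENAI_API_KEY is only an assumed example, so adjust the list to whatever your .env defines:

    import os

    # Hypothetical check: list the variables your .env is expected to define.
    required = ["OPENAI_API_KEY"]
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    print("Environment looks complete; start the app with `python main.py`.")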
    

For detailed setup instructions and configuration options, refer to the project's README and documentation on the GitHub repository.

Competitor Comparisons

LangChain: 🦜🔗 Build context-aware reasoning applications

Pros of LangChain

  • More comprehensive and flexible framework for building LLM applications
  • Larger community and ecosystem with extensive documentation
  • Supports a wider range of LLMs and integrations

Cons of LangChain

  • Steeper learning curve due to its extensive features and abstractions
  • Can be overkill for simpler projects or specific use cases
  • Requires more setup and configuration compared to Quivr

Code Comparison

LangChain example:

from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))

Quivr example:

from quivr import Client

client = Client()
response = client.chat(messages=[{"role": "user", "content": "What is a good name for a company that makes colorful socks?"}])
print(response.choices[0].message.content)

Both repositories aim to simplify working with LLMs, but LangChain offers a more comprehensive toolkit at the cost of increased complexity, while Quivr provides a more streamlined approach for specific use cases.

Chroma: the AI-native open-source embedding database

Pros of Chroma

  • More focused on vector database functionality, offering advanced embedding and similarity search capabilities
  • Better suited for large-scale production environments with distributed architecture support
  • More extensive documentation and API references

Cons of Chroma

  • Steeper learning curve for beginners due to its specialized nature
  • Less emphasis on end-user applications compared to Quivr's more user-friendly approach

Code Comparison

Chroma (Python):

import chromadb

client = chromadb.Client()
collection = client.create_collection("my_collection")
collection.add(
    documents=["This is a document", "This is another document"],
    metadatas=[{"source": "my_source"}, {"source": "my_source"}],
    ids=["id1", "id2"]
)

Quivr (Python):

from quivr import Client

client = Client()
brain = client.create_brain("my_brain")
brain.add_knowledge(
    "This is a document",
    metadata={"source": "my_source"}
)

Both repositories offer tools for managing and querying vector data, but Chroma focuses more on the database aspect, while Quivr provides a higher-level abstraction for building AI-powered applications. Chroma's code emphasizes collection management and document addition, whereas Quivr's code showcases a more intuitive "brain" concept for knowledge management.
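
Since the summary mentions querying vector data, here is a minimal sketch of the retrieval side using Chroma's query API (the collection name and texts are illustrative):

import chromadb

client = chromadb.Client()
collection = client.create_collection("my_collection")
collection.add(
    documents=["This is a document", "This is another document"],
    ids=["id1", "id2"],
)

# Chroma embeds the query text and returns the closest stored documents.
results = collection.query(query_texts=["another document"], n_results=1)
print(results["documents"])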

langchain-hub

Pros of langchain-hub

  • Extensive collection of pre-built prompts and chains for various use cases
  • Strong integration with the LangChain ecosystem
  • Active community contributions and regular updates

Cons of langchain-hub

  • More focused on providing components rather than a complete application
  • Steeper learning curve for users new to LangChain concepts
  • Less emphasis on user interface and visual design

Code Comparison

langchain-hub:

from langchain.prompts import load_prompt

prompt = load_prompt("lc://prompts/conversation/prompt.yaml")
result = prompt.format(input="Hello, how are you?")

Quivr:

from quivr import Brain

brain = Brain("my_brain")
brain.add_knowledge("Hello, I'm an AI assistant.")
response = brain.query("How can I help you today?")

Summary

langchain-hub offers a rich repository of LangChain components, ideal for developers familiar with the ecosystem. Quivr provides a more user-friendly, application-focused approach to building AI-powered knowledge bases. While langchain-hub excels in flexibility and integration with LangChain, Quivr offers a more streamlined experience for creating and querying AI brains.

Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps

Pros of Semantic Kernel

  • More extensive documentation and examples
  • Broader language support (C#, Python, Java)
  • Stronger integration with Azure AI services

Cons of Semantic Kernel

  • Steeper learning curve for beginners
  • More complex setup process
  • Primarily focused on enterprise-level applications

Code Comparison

Quivr (Python):

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings()
db = Chroma(embedding_function=embeddings)

# index a text and retrieve the closest match
db.add_texts(["Quivr stores your documents as embeddings."])
print(db.similarity_search("documents", k=1))

Semantic Kernel (C#):

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.SemanticFunctions;

var kernel = Kernel.Builder.Build();
var promptConfig = new PromptTemplateConfig();
var semanticFunction = kernel.CreateSemanticFunction("Your prompt here", config: promptConfig);

Both repositories offer unique approaches to building AI-powered applications. Quivr focuses on creating a second brain using LLMs and vector databases, while Semantic Kernel provides a more comprehensive framework for integrating AI capabilities into various applications. The choice between the two depends on the specific project requirements, development language preferences, and the desired level of integration with existing systems.

LlamaIndex: a data framework for your LLM applications

Pros of LlamaIndex

  • More comprehensive and flexible indexing system for various data sources
  • Extensive documentation and examples for different use cases
  • Active development with frequent updates and community contributions

Cons of LlamaIndex

  • Steeper learning curve due to its broader scope and functionality
  • May be overkill for simpler projects or specific use cases
  • Requires more setup and configuration compared to Quivr

Code Comparison

Quivr (Python):

from quivr import Quivr

brain = Quivr()
brain.add_file("document.pdf")
results = brain.query("What is the main topic?")

LlamaIndex (Python):

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex.from_documents(documents)
response = index.query("What is the main topic?")

Both repositories aim to simplify working with large language models and document processing. Quivr focuses on creating a "second brain" for personal knowledge management, while LlamaIndex provides a more general-purpose indexing and querying system for various data sources. LlamaIndex offers more flexibility and advanced features, but Quivr may be easier to set up for specific use cases.

openai-cookbook: Examples and guides for using the OpenAI API

Pros of openai-cookbook

  • Comprehensive guide with examples for various OpenAI API use cases
  • Regularly updated with new features and best practices
  • Maintained by OpenAI, ensuring accuracy and relevance

Cons of openai-cookbook

  • Focused solely on OpenAI's products, limiting its scope
  • Less emphasis on building complete applications or systems
  • Primarily educational, not a ready-to-use solution

Code Comparison

openai-cookbook:

import openai

response = openai.Completion.create(
  engine="text-davinci-002",
  prompt="Translate the following English text to French: '{}'",
  max_tokens=60
)
print(response.choices[0].text)  # print the generated completion

quivr:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
# format the prompt and run it through the LLM
print(llm(prompt.format(product="colorful socks")))

Summary

openai-cookbook serves as an extensive resource for developers working with OpenAI's APIs, offering a wide range of examples and best practices. It's regularly updated and maintained by OpenAI, ensuring its content remains current and accurate. However, it's limited to OpenAI's products and doesn't focus on building complete applications.

quivr, on the other hand, is a more comprehensive solution for building AI applications, integrating various language models and offering a broader scope beyond just OpenAI's offerings. It provides a framework for creating more complex AI systems but may require more setup and configuration compared to the straightforward examples in openai-cookbook.


README

Quivr - Your Second Brain, Empowered by Generative AI



Quivr helps you build your second brain, using the power of generative AI to be your personal assistant!

Key Features 🎯

  • Opinionated RAG: We created a RAG that is opinionated, fast, and efficient so you can focus on your product
  • LLMs: Quivr works with any LLM; you can use it with OpenAI, Anthropic, Mistral, Gemma, etc.
  • Any File: Quivr works with any file type: PDF, TXT, Markdown, etc. You can even add your own parsers.
  • Customize your RAG: Quivr lets you customize your RAG: add internet search, add tools, etc.
  • Integrations with Megaparse: Quivr works with Megaparse, so you can ingest your files with Megaparse and use the RAG with Quivr.

We take care of the RAG so you can focus on your product. Simply install quivr-core and add it to your project. You can then ingest your files and ask questions.

We will keep improving the RAG and adding more features; stay tuned!

This is the core of Quivr, the brain of Quivr.com.

Getting Started 🚀

You can find everything in the documentation.

Prerequisites 📋

Ensure you have the following installed:

  • Python 3.10 or newer
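
To fail fast on an unsupported interpreter, you can add a one-line check at the top of your script (a sketch mirroring the requirement above):

    import sys

    # Quivr requires Python 3.10+; abort early with a clear message otherwise.
    assert sys.version_info >= (3, 10), "Python 3.10 or newer is required"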

30-second Installation 💽

  • Step 1: Install the package

    pip install quivr-core # Check that the installation worked
    
  • Step 2: Create a RAG with 5 lines of code

    import tempfile
    
    from quivr_core import Brain
    
    if __name__ == "__main__":
        with tempfile.NamedTemporaryFile(mode="w", suffix=".txt") as temp_file:
            temp_file.write("Gold is a liquid of blue-like colour.")
            temp_file.flush()
    
            brain = Brain.from_files(
                name="test_brain",
                file_paths=[temp_file.name],
            )
    
            answer = brain.ask(
                "what is gold? asnwer in french"
            )
            print("answer:", answer)
    

Configuration

Workflows

Basic RAG

Creating a basic RAG workflow like the one above is simple; here are the steps:

  1. Add your API keys to your environment variables
import os
os.environ["OPENAI_API_KEY"] = "myopenai_apikey"

Quivr supports APIs from Anthropic, OpenAI, and Mistral. It also supports local models using Ollama.
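
Provider selection happens through standard API-key environment variables. A hedged sketch follows; the Anthropic and Mistral variable names are those vendors' usual conventions and are assumptions here, not documented Quivr settings:

import os

# Set the key for whichever provider you use (names follow each vendor's convention).
os.environ["OPENAI_API_KEY"] = "my_openai_api_key"        # OpenAI
os.environ["ANTHROPIC_API_KEY"] = "my_anthropic_api_key"  # Anthropic (assumed)
os.environ["MISTRAL_API_KEY"] = "my_mistral_api_key"      # Mistral (assumed)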

  2. Create the YAML file basic_rag_workflow.yaml and copy the following content into it (a short sketch after these steps walks the node graph this file defines)
workflow_config:
  name: "standard RAG"
  nodes:
    - name: "START"
      edges: ["filter_history"]

    - name: "filter_history"
      edges: ["rewrite"]

    - name: "rewrite"
      edges: ["retrieve"]

    - name: "retrieve"
      edges: ["generate_rag"]

    - name: "generate_rag" # the name of the last node, from which we want to stream the answer to the user
      edges: ["END"]

# Maximum number of previous conversation iterations
# to include in the context of the answer
max_history: 10

# Reranker configuration
reranker_config:
  # The reranker supplier to use
  supplier: "cohere"

  # The model to use for the reranker for the given supplier
  model: "rerank-multilingual-v3.0"

  # Number of chunks returned by the reranker
  top_n: 5

# Configuration for the LLM
llm_config:

  # maximum number of tokens passed to the LLM to generate the answer
  max_input_tokens: 4000

  # temperature for the LLM
  temperature: 0.7
  3. Create a Brain with the default configuration
from quivr_core import Brain

brain = Brain.from_files(name = "my smart brain",
                        file_paths = ["./my_first_doc.pdf", "./my_second_doc.txt"],
                        )

  4. Launch a Chat
brain.print_info()

from rich.console import Console
from rich.panel import Panel
from rich.prompt import Prompt
from quivr_core.config import RetrievalConfig

config_file_name = "./basic_rag_workflow.yaml"

retrieval_config = RetrievalConfig.from_yaml(config_file_name)

console = Console()
console.print(Panel.fit("Ask your brain!", style="bold magenta"))

while True:
    # Get user input
    question = Prompt.ask("[bold cyan]Question[/bold cyan]")

    # Check if user wants to exit
    if question.lower() == "exit":
        console.print(Panel("Goodbye!", style="bold yellow"))
        break

    answer = brain.ask(question, retrieval_config=retrieval_config)
    # Print the answer
    console.print(f"[bold green]Quivr Assistant[/bold green]: {answer.answer}")

    console.print("-" * console.width)

brain.print_info()
  5. You are now all set up to talk with your brain and test different retrieval strategies by simply changing the configuration file!
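
To make the workflow_config above concrete, here is a small self-contained sketch (not Quivr internals) that follows the edge list from START to END and prints the execution order the YAML implies:

# Toy traversal of the node graph from basic_rag_workflow.yaml.
# Assumption: each node has exactly one outgoing edge, as in this basic RAG flow.
nodes = {
    "START": ["filter_history"],
    "filter_history": ["rewrite"],
    "rewrite": ["retrieve"],
    "retrieve": ["generate_rag"],
    "generate_rag": ["END"],
}

current, order = "START", []
while current != "END":
    order.append(current)
    current = nodes[current][0]  # follow the single outgoing edge

print(" -> ".join(order + ["END"]))
# START -> filter_history -> rewrite -> retrieve -> generate_rag -> END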

Go further

You can go further with Quivr by adding internet search, adding tools, etc. Check the documentation for more information.

Contributors ✨

Thanks go to these wonderful people!

Contribute 🤝

Have a pull request? Open it, and we'll review it as soon as possible. Check out our project board here to see what we're currently focused on, and feel free to bring your fresh ideas to the table!

Partners ❤️

This project would not be possible without the support of our partners. Thank you for your support!

YCombinator Theodo

License 📄

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.