quivr
Open-source RAG framework for building GenAI second brains 🧠. Build a productivity assistant (RAG) ⚡️🤖 and chat with your docs (PDF, CSV, ...) and apps using Langchain, GPT 3.5 / 4 Turbo, private LLMs, Anthropic, VertexAI, Ollama, and Groq, then share it with your users! An efficient retrieval-augmented generation framework.
Top Related Projects
🦜🔗 Build context-aware reasoning applications
the AI-native open-source embedding database
Integrate cutting-edge LLM technology quickly and easily into your apps
LlamaIndex is a data framework for your LLM applications
Examples and guides for using the OpenAI API
Quick Overview
Quivr is an open-source, AI-powered personal productivity assistant. It allows users to store and retrieve information from various sources, acting as a "second brain" to enhance memory and productivity. Quivr uses advanced language models to process and understand user inputs, making it a powerful tool for knowledge management and task organization.
Pros
- Integrates multiple data sources (files, links, notes) into a unified knowledge base
- Utilizes AI for intelligent information retrieval and task management
- Open-source, allowing for community contributions and customization
- Supports natural language interactions for ease of use
Cons
- May require technical knowledge for setup and customization
- Potential privacy concerns due to AI processing of personal data
- Dependency on external AI services may affect reliability and cost
- Learning curve for optimal usage of all features
Getting Started
To get started with Quivr:

1. Clone the repository:

   ```bash
   git clone https://github.com/QuivrHQ/quivr.git
   ```

2. Install dependencies:

   ```bash
   cd quivr
   pip install -r requirements.txt
   ```

3. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit the .env file with your configuration
   ```

4. Run the application:

   ```bash
   python main.py
   ```
For detailed setup instructions and configuration options, refer to the project's README and documentation on the GitHub repository.
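The workflow Quivr automates — ingest documents, retrieve the most relevant chunks, and hand them to an LLM as context — can be pictured with a small, dependency-free sketch. This is a toy illustration of the RAG idea only, not Quivr's actual code; `retrieve` and `build_prompt` are hypothetical helpers.

```python
# Toy retrieval-augmented generation loop: illustrates the idea behind
# Quivr, not its implementation.
def retrieve(store, question, k=2):
    """Rank stored chunks by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        store,
        key=lambda chunk: len(q_words & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(chunks, question):
    """Assemble the context-plus-question prompt sent to the LLM."""
    context = "\n".join(chunks)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

store = ["Quivr stores notes and files.", "Paris is the capital of France."]
question = "What is the capital of France?"
print(build_prompt(retrieve(store, question, k=1), question))
```

A real system replaces the word-overlap scoring with embedding similarity over a vector store, but the retrieve-then-prompt shape stays the same.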
Competitor Comparisons
🦜🔗 Build context-aware reasoning applications
Pros of LangChain
- More comprehensive and flexible framework for building LLM applications
- Larger community and ecosystem with extensive documentation
- Supports a wider range of LLMs and integrations
Cons of LangChain
- Steeper learning curve due to its extensive features and abstractions
- Can be overkill for simpler projects or specific use cases
- Requires more setup and configuration compared to Quivr
Code Comparison
LangChain example:
```python
from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```
Quivr example:
```python
from quivr import Client

client = Client()
response = client.chat(
    messages=[{"role": "user", "content": "What is a good name for a company that makes colorful socks?"}]
)
print(response.choices[0].message.content)
```
Both repositories aim to simplify working with LLMs, but LangChain offers a more comprehensive toolkit at the cost of increased complexity, while Quivr provides a more streamlined approach for specific use cases.
the AI-native open-source embedding database
Pros of Chroma
- More focused on vector database functionality, offering advanced embedding and similarity search capabilities
- Better suited for large-scale production environments with distributed architecture support
- More extensive documentation and API references
Cons of Chroma
- Steeper learning curve for beginners due to its specialized nature
- Less emphasis on end-user applications compared to Quivr's more user-friendly approach
Code Comparison
Chroma (Python):
```python
import chromadb

client = chromadb.Client()
collection = client.create_collection("my_collection")
collection.add(
    documents=["This is a document", "This is another document"],
    metadatas=[{"source": "my_source"}, {"source": "my_source"}],
    ids=["id1", "id2"]
)
```
Quivr (Python):
```python
from quivr import Client

client = Client()
brain = client.create_brain("my_brain")
brain.add_knowledge(
    "This is a document",
    metadata={"source": "my_source"}
)
```
Both repositories offer tools for managing and querying vector data, but Chroma focuses more on the database aspect, while Quivr provides a higher-level abstraction for building AI-powered applications. Chroma's code emphasizes collection management and document addition, whereas Quivr's code showcases a more intuitive "brain" concept for knowledge management.
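Under the hood, an embedding database like Chroma ranks stored documents by vector similarity to the query. A minimal pure-Python sketch of cosine-similarity ranking shows the core idea (a toy illustration, not Chroma's implementation; the example vectors stand in for real model embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest(query_vec, vectors, ids):
    """Return the id whose vector is most similar to the query vector."""
    best = max(range(len(vectors)), key=lambda i: cosine(query_vec, vectors[i]))
    return ids[best]

# Pretend these 2-d embeddings came from an embedding model
vectors = [[1.0, 0.0], [0.0, 1.0]]
print(nearest([0.9, 0.1], vectors, ["id1", "id2"]))
```

Chroma adds persistence, metadata filtering, and approximate-nearest-neighbor indexing on top of this basic ranking.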
Pros of langchain-hub
- Extensive collection of pre-built prompts and chains for various use cases
- Strong integration with the LangChain ecosystem
- Active community contributions and regular updates
Cons of langchain-hub
- More focused on providing components rather than a complete application
- Steeper learning curve for users new to LangChain concepts
- Less emphasis on user interface and visual design
Code Comparison
langchain-hub:
```python
from langchain.prompts import load_prompt

prompt = load_prompt("lc://prompts/conversation/prompt.yaml")
result = prompt.format(input="Hello, how are you?")
```
Quivr:
```python
from quivr import Brain

brain = Brain("my_brain")
brain.add_knowledge("Hello, I'm an AI assistant.")
response = brain.query("How can I help you today?")
```
Summary
langchain-hub offers a rich repository of LangChain components, ideal for developers familiar with the ecosystem. Quivr provides a more user-friendly, application-focused approach to building AI-powered knowledge bases. While langchain-hub excels in flexibility and integration with LangChain, Quivr offers a more streamlined experience for creating and querying AI brains.
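The prompt-template pattern that langchain-hub distributes — a string with named slots filled in at query time — can be sketched in plain Python. This is a toy stand-in for what `load_prompt` returns, not LangChain's actual class:

```python
# Minimal prompt template: illustrates the pattern, not LangChain's code.
class ToyPromptTemplate:
    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs):
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise KeyError(f"Missing variables: {missing}")
        return self.template.format(**kwargs)

prompt = ToyPromptTemplate(
    "You are a helpful assistant. User says: {input}", ["input"]
)
print(prompt.format(input="Hello, how are you?"))
```

What the hub adds is a shared registry of such templates, versioned and loadable by URI, so teams don't each hand-roll the strings.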
Integrate cutting-edge LLM technology quickly and easily into your apps
Pros of Semantic Kernel
- More extensive documentation and examples
- Broader language support (C#, Python, Java)
- Stronger integration with Azure AI services
Cons of Semantic Kernel
- Steeper learning curve for beginners
- More complex setup process
- Primarily focused on enterprise-level applications
Code Comparison
Quivr (Python):
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings()
db = Chroma(embedding_function=embeddings)
```
Semantic Kernel (C#):
```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.SemanticFunctions;

var kernel = Kernel.Builder.Build();
var promptConfig = new PromptTemplateConfig();
var semanticFunction = kernel.CreateSemanticFunction(
    "Your prompt here", config: promptConfig);
```
Both repositories offer unique approaches to building AI-powered applications. Quivr focuses on creating a second brain using LLMs and vector databases, while Semantic Kernel provides a more comprehensive framework for integrating AI capabilities into various applications. The choice between the two depends on the specific project requirements, development language preferences, and the desired level of integration with existing systems.
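A "semantic function" in Semantic Kernel is, at its core, a prompt template bound to an LLM call. The concept can be sketched in Python with a stubbed completion backend (this illustrates the idea only; it is not the Semantic Kernel API, and `echo_complete` is a placeholder for a real model call):

```python
def make_semantic_function(template, complete):
    """Bind a prompt template to a completion backend."""
    def run(**kwargs):
        return complete(template.format(**kwargs))
    return run

# Stub backend standing in for a real LLM call
def echo_complete(prompt):
    return f"[LLM would answer: {prompt}]"

summarize = make_semantic_function("Summarize: {text}", echo_complete)
print(summarize(text="Quivr is a second brain."))
```

Treating prompts as callable functions is what lets Semantic Kernel compose, chain, and plan over them like ordinary code.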
LlamaIndex is a data framework for your LLM applications
Pros of LlamaIndex
- More comprehensive and flexible indexing system for various data sources
- Extensive documentation and examples for different use cases
- Active development with frequent updates and community contributions
Cons of LlamaIndex
- Steeper learning curve due to its broader scope and functionality
- May be overkill for simpler projects or specific use cases
- Requires more setup and configuration compared to Quivr
Code Comparison
Quivr (Python):
```python
from quivr import Quivr

brain = Quivr()
brain.add_file("document.pdf")
results = brain.query("What is the main topic?")
```
LlamaIndex (Python):
```python
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex.from_documents(documents)
response = index.query("What is the main topic?")
```
Both repositories aim to simplify working with large language models and document processing. Quivr focuses on creating a "second brain" for personal knowledge management, while LlamaIndex provides a more general-purpose indexing and querying system for various data sources. LlamaIndex offers more flexibility and advanced features, but Quivr may be easier to set up for specific use cases.
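The indexing step LlamaIndex performs can be pictured as building a lookup from terms to the documents containing them. A toy inverted index makes the shape concrete (purely illustrative; LlamaIndex actually builds embedding-based indexes, not this word lookup):

```python
from collections import defaultdict

def build_index(documents):
    """Map each lowercase word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(documents):
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def query(index, documents, question):
    """Return documents sharing at least one word with the question."""
    hits = set()
    for word in question.lower().split():
        hits |= index.get(word, set())
    return [documents[i] for i in sorted(hits)]

docs = ["quivr stores your notes", "llamaindex indexes data sources"]
index = build_index(docs)
print(query(index, docs, "indexes data"))
```

Building the index once up front is what makes later queries cheap, which is the same trade-off LlamaIndex makes at larger scale.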
Examples and guides for using the OpenAI API
Pros of openai-cookbook
- Comprehensive guide with examples for various OpenAI API use cases
- Regularly updated with new features and best practices
- Maintained by OpenAI, ensuring accuracy and relevance
Cons of openai-cookbook
- Focused solely on OpenAI's products, limiting its scope
- Less emphasis on building complete applications or systems
- Primarily educational, not a ready-to-use solution
Code Comparison
openai-cookbook:
```python
import openai

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Translate the following English text to French: '{}'",
    max_tokens=60
)
```
quivr:
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(llm(prompt.format(product="colorful socks")))
```
Summary
openai-cookbook serves as an extensive resource for developers working with OpenAI's APIs, offering a wide range of examples and best practices. It's regularly updated and maintained by OpenAI, ensuring its content remains current and accurate. However, it's limited to OpenAI's products and doesn't focus on building complete applications.
quivr, on the other hand, is a more comprehensive solution for building AI applications, integrating various language models and offering a broader scope beyond just OpenAI's offerings. It provides a framework for creating more complex AI systems but may require more setup and configuration compared to the straightforward examples in openai-cookbook.
Quivr - Your Second Brain, Empowered by Generative AI
Quivr, your second brain, utilizes the power of Generative AI to be your personal assistant! Think of it as Obsidian, but turbocharged with AI capabilities.
Key Features 🎯
- Fast and Efficient: Designed with speed and efficiency at its core. Quivr ensures rapid access to your data.
- Secure: Your data, your control. Always.
- OS Compatible: Ubuntu 20 or newer.
- File Compatibility: Text, Markdown, PDF, Powerpoint, Excel, CSV, Word, Audio, Video
- Open Source: Freedom is beautiful, and so is Quivr. Open source and free to use.
- Public/Private: Share your brains with your users via a public link, or keep them private.
- Offline Mode: Quivr works offline, so you can access your data anytime, anywhere.
Demo Highlight 🎥
https://github.com/quivrhq/quivr/assets/19614572/a6463b73-76c7-4bc0-978d-70562dca71f5
Getting Started
You can deploy Quivr to Porter Cloud with one click.
If you would like to deploy locally, follow these instructions to get a copy of the project up and running on your local machine for development and testing purposes.
You can find everything on the documentation.
Prerequisites
Ensure you have the following installed:
- Docker
- Docker Compose
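Before proceeding, you can confirm the prerequisites are on your PATH. A small Python check (a convenience sketch; running `docker --version` in a terminal works just as well, and the helper name is our own):

```python
import shutil

def tool_available(name):
    """True if an executable with this name is on the PATH."""
    return shutil.which(name) is not None

for tool in ["docker", "docker-compose"]:
    status = "found" if tool_available(tool) else "MISSING"
    print(f"{tool}: {status}")
```

Note that on recent Docker installs, Compose ships as the `docker compose` plugin rather than a standalone `docker-compose` binary, so a MISSING report for the binary alone is not conclusive.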
60 seconds Installation
You can find the installation video here.
- Step 0: Install the Supabase CLI

  Follow the instructions here to install the Supabase CLI, which is required.

  ```bash
  supabase -v # Check that the installation worked
  ```

- Step 1: Clone the repository

  ```bash
  git clone https://github.com/quivrhq/quivr.git && cd quivr
  ```

- Step 2: Copy the .env.example files

  ```bash
  cp .env.example .env
  ```

- Step 3: Update the .env files

  ```bash
  vim .env # or emacs or vscode or nano
  ```

  You just need to update the OPENAI_API_KEY variable in the .env file. You can get your API key here. You need to create an account first and enter your credit card information. Don't worry, you won't be charged unless you use the API. You can find more information about the pricing here.

- Step 4: Launch the project

  ```bash
  cd backend && supabase start
  ```

  and then

  ```bash
  cd ../
  docker compose pull
  docker compose up
  ```

  If you have a Mac, go to Docker Desktop > Settings > General and check that the "file sharing implementation" is set to VirtioFS.

  If you are a developer, you can run the project in development mode with the following command:

  ```bash
  docker compose -f docker-compose.dev.yml up --build
  ```

- Step 5: Log in to the app

  You can now sign in to the app with admin@quivr.app & admin. You can access the app at http://localhost:3000/login.

  You can access the Quivr backend API at http://localhost:5050/docs.

  You can access Supabase at http://localhost:54323.
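Once the stack is up, you can programmatically verify that each endpoint responds. A small urllib-based check (a convenience sketch; opening the URLs in a browser works just as well, and `is_up` is our own helper name):

```python
import urllib.request
import urllib.error

def is_up(url, timeout=2.0):
    """True if the URL answers with any HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, DNS failure, ...

for url in ["http://localhost:3000/login",
            "http://localhost:5050/docs",
            "http://localhost:54323"]:
    print(url, "up" if is_up(url) else "down")
```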
Updating Quivr
- Step 1: Pull the latest changes

  ```bash
  git pull
  ```

- Step 2: Update the migrations

  ```bash
  supabase migration up
  ```
Contributors ✨
Thanks go to these wonderful people:
Contribute 🤝
Got a pull request? Open it, and we'll review it as soon as possible. Check out our project board here to see what we're currently focused on, and feel free to bring your fresh ideas to the table!
Partners ❤️
This project would not be possible without the support of our partners. Thank you for your support!
License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Stars History