run-llama / llama_index

LlamaIndex is a data framework for your LLM applications

35,292 stars, 4,958 forks

Top Related Projects

  • LlamaIndex (run-llama/llama_index): LlamaIndex is a data framework for your LLM applications
  • Chroma (14,940 stars): the AI-native open-source embedding database
  • LangChain (93,526 stars): 🦜🔗 Build context-aware reasoning applications
  • Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps
  • Haystack (16,603 stars): :mag: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
  • Qdrant (20,153 stars): High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

Quick Overview

LlamaIndex is an open-source data framework for LLM-based applications. It provides a set of tools to ingest, structure, and access private or domain-specific data for use with large language models (LLMs). LlamaIndex aims to simplify the process of building LLM-powered applications by offering various data connectors, indexing strategies, and query interfaces.

Pros

  • Flexible data ingestion from various sources (PDFs, APIs, databases, etc.)
  • Supports multiple LLM providers (OpenAI, Anthropic, Hugging Face, etc.)
  • Offers advanced indexing and retrieval methods for efficient data access
  • Extensive documentation and active community support

Cons

  • Learning curve for beginners due to the wide range of features
  • Dependency on external LLM providers for core functionality
  • May require fine-tuning for optimal performance in specific use cases
  • Resource-intensive for large datasets or complex queries

Code Examples

  1. Basic usage with OpenAI:
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# OpenAI is used as the default LLM when OPENAI_API_KEY is set
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is the main topic of the documents?")
print(response)
  2. Using a custom LLM:
from llama_index import ServiceContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import HuggingFaceLLM

# Swap in a Hugging Face model instead of the default OpenAI LLM
llm = HuggingFaceLLM(model_name="gpt2")
service_context = ServiceContext.from_defaults(llm=llm)
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
  3. Creating a chat engine:
from llama_index.chat_engine import CondenseQuestionChatEngine

chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=index.as_query_engine(),
    verbose=True
)
response = chat_engine.chat("Tell me about the documents.")
print(response)

Getting Started

  1. Install LlamaIndex:

    pip install llama-index
    
  2. Set up your OpenAI API key:

    import os
    os.environ['OPENAI_API_KEY'] = 'your-api-key-here'
    
  3. Create a simple index and query:

    from llama_index import VectorStoreIndex, SimpleDirectoryReader
    
    documents = SimpleDirectoryReader('data').load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine()
    response = query_engine.query("What is this document about?")
    print(response)
    

Competitor Comparisons

run-llama/llama_index: LlamaIndex is a data framework for your LLM applications

Pros of llama_index

  • More established project with a larger community and contributor base
  • Extensive documentation and examples available
  • Regular updates and active development

Cons of llama_index

  • Potentially more complex for beginners due to its extensive features
  • May have a steeper learning curve compared to simpler alternatives

Code Comparison

Both repositories appear to be the same project, so there isn't a relevant code comparison to make. The repository run-llama/llama_index seems to be the main and only repository for the LlamaIndex project.

Summary

LlamaIndex is a data framework designed to help build LLM applications. It provides tools for ingesting, structuring, and accessing private or domain-specific data in LLM applications. The project is actively maintained and has a growing community of users and contributors.

Since both repositories appear to be the same project, there aren't distinct differences to compare. The LlamaIndex project offers a comprehensive solution for working with LLMs and structured data, but may require some time to fully understand and utilize its capabilities.

Chroma (14,940 stars): the AI-native open-source embedding database

Pros of Chroma

  • Specialized focus on vector databases and embeddings
  • Simpler API for vector search and similarity operations
  • Better performance for large-scale vector operations

Cons of Chroma

  • More limited in scope compared to LlamaIndex's broader data structuring capabilities
  • Less flexibility for complex query operations and data transformations
  • Fewer integrations with external data sources and AI models

Code Comparison

LlamaIndex example:

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader('data').load_data()
index = GPTVectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is the capital of France?")

Chroma example:

import chromadb
client = chromadb.Client()
collection = client.create_collection("my_collection")
collection.add(documents=["Paris is the capital of France"], ids=["1"])
results = collection.query(query_texts=["What is the capital of France?"], n_results=1)

Both LlamaIndex and Chroma offer powerful tools for working with vector data and embeddings. LlamaIndex provides a more comprehensive framework for structuring and querying various data types, while Chroma excels in specialized vector database operations with a simpler API. The choice between them depends on the specific requirements of your project and the complexity of your data processing needs.
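
The two are complementary as well as competing: LlamaIndex ships a Chroma vector store integration, so Chroma can serve as the storage backend behind a LlamaIndex index. A minimal sketch, assuming the llama-index-vector-stores-chroma integration package and the llama_index.core namespace used by recent releases:

import chromadb
from llama_index.core import StorageContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.chroma import ChromaVectorStore  # integration package

# Chroma stores and searches the embeddings; LlamaIndex handles ingestion and querying
chroma_client = chromadb.Client()
chroma_collection = chroma_client.create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
response = index.as_query_engine().query("What is the capital of France?")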

LangChain (93,526 stars): 🦜🔗 Build context-aware reasoning applications

Pros of LangChain

  • More comprehensive framework with a wider range of tools and integrations
  • Stronger community support and more extensive documentation
  • Flexible architecture allowing for easy customization and extension

Cons of LangChain

  • Steeper learning curve due to its broader scope and complexity
  • Can be overkill for simpler projects that don't require its full feature set

Code Comparison

LangChain:

from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)

LlamaIndex:

from llama_index import GPTSimpleVectorIndex
from llama_index.readers import SimpleWebPageReader

documents = SimpleWebPageReader().load_data(["https://example.com"])
index = GPTSimpleVectorIndex.from_documents(documents)

Both LangChain and LlamaIndex are powerful tools for working with language models and building AI applications. LangChain offers a more comprehensive framework with broader capabilities, while LlamaIndex focuses more specifically on efficient indexing and retrieval of information. The choice between them depends on the specific requirements of your project and the level of complexity you're comfortable working with.
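
The two can also be used together: a LangChain chat model can be plugged into LlamaIndex as its LLM. A rough sketch, assuming the llama-index-llms-langchain and langchain-openai packages are installed (the model name is only an example):

from langchain_openai import ChatOpenAI
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.langchain import LangChainLLM  # integration package

# Wrap a LangChain chat model so LlamaIndex uses it for response synthesis
Settings.llm = LangChainLLM(llm=ChatOpenAI(model="gpt-3.5-turbo"))  # requires OPENAI_API_KEY

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("Summarize the documents.")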

Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps

Pros of Semantic Kernel

  • More comprehensive framework for building AI applications
  • Better integration with Azure services and other Microsoft tools
  • Stronger focus on enterprise-level development and scalability

Cons of Semantic Kernel

  • Steeper learning curve due to more complex architecture
  • Less flexibility for custom indexing and retrieval methods
  • Primarily designed for C# developers, with limited support for other languages

Code Comparison

Semantic Kernel (C#):

var kernel = Kernel.Builder.Build();
var function = kernel.CreateSemanticFunction("Generate a story about {{$input}}");
var result = await kernel.RunAsync("a brave knight", function);

LlamaIndex (Python):

from llama_index import GPTSimpleVectorIndex, Document
documents = [Document("content")]
index = GPTSimpleVectorIndex.from_documents(documents)
response = index.query("Generate a story about a brave knight")

Both repositories aim to simplify the integration of large language models into applications, but they take different approaches. Semantic Kernel offers a more comprehensive framework with stronger enterprise focus, while LlamaIndex provides a more flexible and lightweight solution for indexing and querying data using LLMs.

Haystack (16,603 stars): :mag: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.

Pros of Haystack

  • More comprehensive and modular framework for building end-to-end NLP pipelines
  • Supports a wider range of NLP tasks beyond just question answering
  • Better suited for production-ready applications with features like caching and scalability

Cons of Haystack

  • Steeper learning curve due to its more complex architecture
  • Less focused on RAG-specific optimizations compared to LlamaIndex
  • May be overkill for simpler projects that only require basic RAG functionality

Code Comparison

Haystack:

from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import EmbeddingRetriever, FARMReader

# Haystack 1.x style pipeline: a retriever feeds a reader over a document store
document_store = InMemoryDocumentStore(embedding_dim=384)
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",  # example embedding model
)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
pipe = Pipeline()
pipe.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipe.add_node(component=reader, name="Reader", inputs=["Retriever"])

LlamaIndex:

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex.from_documents(documents)
response = index.query("What is the capital of France?")

Qdrant (20,153 stars): High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

Pros of Qdrant

  • Specialized vector database with advanced similarity search capabilities
  • Supports filtering and faceted search alongside vector queries
  • Written in Rust, offering high performance and memory efficiency

Cons of Qdrant

  • More focused on vector search, less versatile for general data indexing
  • Steeper learning curve for users not familiar with vector databases
  • May require additional setup and infrastructure compared to LlamaIndex

Code Comparison

Qdrant (creating and querying a collection):

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient("localhost", port=6333)
client.create_collection(
    collection_name="my_collection",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)
client.search(collection_name="my_collection", query_vector=[0.2, 0.1, ...], limit=5)

LlamaIndex (creating and querying an index):

from llama_index import GPTSimpleVectorIndex, Document

documents = [Document("text1"), Document("text2")]
index = GPTSimpleVectorIndex.from_documents(documents)
response = index.query("What is the meaning of life?")

Both repositories offer powerful tools for working with vector data and search, but they serve different primary purposes. Qdrant is a specialized vector database focusing on similarity search, while LlamaIndex is a more general-purpose framework for building AI-powered applications with various index types and query capabilities.

README

🗂️ LlamaIndex 🦙

LlamaIndex (GPT Index) is a data framework for your LLM application. Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins). There are two ways to start building with LlamaIndex in Python:

  1. Starter: llama-index (https://pypi.org/project/llama-index/). A starter Python package that includes core LlamaIndex as well as a selection of integrations.

  2. Customized: llama-index-core (https://pypi.org/project/llama-index-core/). Install core LlamaIndex and add your chosen LlamaIndex integration packages on LlamaHub that are required for your application. There are over 300 LlamaIndex integration packages that work seamlessly with core, allowing you to build with your preferred LLM, embedding, and vector store providers.

The LlamaIndex Python library is namespaced such that import statements which include core imply that the core package is being used. In contrast, those statements without core imply that an integration package is being used.

# typical pattern
from llama_index.core.xxx import ClassABC  # core submodule xxx
from llama_index.xxx.yyy import (
    SubclassABC,
)  # integration yyy for submodule xxx

# concrete example
from llama_index.core.llms import LLM
from llama_index.llms.openai import OpenAI

Important Links

LlamaIndex.TS (Typescript/Javascript): https://github.com/run-llama/LlamaIndexTS.

Documentation: https://docs.llamaindex.ai/en/stable/.

Twitter: https://twitter.com/llama_index.

Discord: https://discord.gg/dGcwcsnxhU.

Ecosystem

🚀 Overview

NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!

Context

  • LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
  • How do we best augment LLMs with our own private data?

We need a comprehensive toolkit to help perform this data augmentation for LLMs.

Proposed Solution

That's where LlamaIndex comes in. LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:

  • Offers data connectors to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.).
  • Provides ways to structure your data (indices, graphs) so that this data can be easily used with LLMs.
  • Provides an advanced retrieval/query interface over your data: Feed in any LLM input prompt, get back retrieved context and knowledge-augmented output.
  • Allows easy integrations with your outer application framework (e.g. with LangChain, Flask, Docker, ChatGPT, anything else).

LlamaIndex provides tools for both beginner users and advanced users. Our high-level API allows beginner users to use LlamaIndex to ingest and query their data in 5 lines of code. Our lower-level APIs allow advanced users to customize and extend any module (data connectors, indices, retrievers, query engines, reranking modules), to fit their needs.
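
As an illustration of the lower-level path, the retriever and query engine can be assembled by hand instead of relying on index.as_query_engine(). A minimal sketch, assuming the llama-index-core package shown in the example usage below; the similarity_top_k value is only an example:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.retrievers import VectorIndexRetriever

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Build the retriever explicitly to control how many chunks are fetched per query
retriever = VectorIndexRetriever(index=index, similarity_top_k=5)

# Wrap the retriever in a query engine; response synthesis uses the configured LLM
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("What do these documents cover?")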

💡 Contributing

Interested in contributing? Contributions to LlamaIndex core as well as contributing integrations that build on the core are both accepted and highly encouraged! See our Contribution Guide for more details.

📄 Documentation

Full documentation can be found here: https://docs.llamaindex.ai/en/latest/.

Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!

💻 Example Usage

# custom selection of integrations to work with core
pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-llms-replicate
pip install llama-index-embeddings-huggingface

Examples are in the docs/examples folder. Indices are in the indices folder.

To build a simple vector store index using OpenAI:

import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)

To build a simple vector store index using non-OpenAI LLMs, e.g. Llama 2 hosted on Replicate, where you can easily create a free trial API token:

import os

os.environ["REPLICATE_API_TOKEN"] = "YOUR_REPLICATE_API_TOKEN"

from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.replicate import Replicate
from transformers import AutoTokenizer

# set the LLM
llama2_7b_chat = "meta/llama-2-7b-chat:8e6975e5ed6174911a6ff3d60540dfd4844201974602551e10e9e87ab143d81e"
Settings.llm = Replicate(
    model=llama2_7b_chat,
    temperature=0.01,
    additional_kwargs={"top_p": 1, "max_new_tokens": 300},
)

# set tokenizer to match LLM
Settings.tokenizer = AutoTokenizer.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf"
)

# set the embed model
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(
    documents,
)

To query:

query_engine = index.as_query_engine()
query_engine.query("YOUR_QUESTION")

By default, data is stored in-memory. To persist to disk (under ./storage):

index.storage_context.persist()

To reload from disk:

from llama_index.core import StorageContext, load_index_from_storage

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="./storage")
# load index
index = load_index_from_storage(storage_context)

🔧 Dependencies

We use poetry as the package manager for all Python packages. As a result, the dependencies of each Python package can be found by referencing the pyproject.toml file in each of the package's folders.

cd <desired-package-folder>
pip install poetry
poetry install --with dev

📖 Citation

Reference to cite if you use LlamaIndex in a paper:

@software{Liu_LlamaIndex_2022,
    author = {Liu, Jerry},
    doi = {10.5281/zenodo.1234},
    month = {11},
    title = {{LlamaIndex}},
    url = {https://github.com/jerryjliu/llama_index},
    year = {2022}
}