milvus
A cloud-native vector database: storage for next-generation AI applications
Top Related Projects
Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
Chroma - the AI-native open-source embedding database
Elasticsearch - Free and Open Source, Distributed, RESTful Search Engine
Vespa - AI + Data, online. https://vespa.ai
Quick Overview
Milvus is an open-source vector database designed for managing and searching large-scale vector data. It is built to power AI applications and support similarity search at scale. Milvus offers high performance, scalability, and ease of use for handling embedding vectors generated by machine learning models.
Pros
- High performance and scalability for similarity search on large datasets
- Supports multiple index types and distance metrics for diverse use cases
- Integrates well with popular AI frameworks and tools
- Offers both standalone and distributed deployment options
Cons
- Steep learning curve for beginners unfamiliar with vector databases
- Limited support for complex queries compared to traditional relational databases
- Requires careful tuning and configuration for optimal performance
- Documentation can be inconsistent or outdated in some areas
Code Examples
- Creating a collection and inserting vectors:
import random

from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections

# Connect to a running Milvus instance
connections.connect("default", host="localhost", port="19530")

# Define the collection schema
fields = [
    FieldSchema("id", DataType.INT64, is_primary=True),
    FieldSchema("embedding", DataType.FLOAT_VECTOR, dim=128)
]
schema = CollectionSchema(fields)

# Create the collection
collection = Collection("my_collection", schema)

# Insert entities (column-oriented: one list per field)
entities = [
    [1, 2, 3],                                                  # ids
    [[random.random() for _ in range(128)] for _ in range(3)]   # three 128-dim embeddings
]
collection.insert(entities)
- Performing a vector similarity search:
# Load the collection into memory before searching
collection.load()

# Search for similar vectors
search_params = {"metric_type": "L2", "params": {"nprobe": 10}}
results = collection.search(
    data=[[0.1] * 128],  # one 128-dimensional query vector
    anns_field="embedding",
    param=search_params,
    limit=5,
    expr=None
)

# Process results
for hits in results:
    for hit in hits:
        print(f"ID: {hit.id}, Distance: {hit.distance}")
- Creating an index for faster searches:
# Create an IVF_FLAT index
index_params = {
"metric_type": "L2",
"index_type": "IVF_FLAT",
"params": {"nlist": 1024}
}
collection.create_index("embedding", index_params)
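To build intuition for what the `nlist` and `nprobe` parameters control, here is a minimal pure-Python sketch of the IVF idea: vectors are grouped into `nlist` buckets around centroids, and a search scans only the `nprobe` buckets whose centroids are closest to the query. The function names and toy data below are illustrative, not part of the Milvus API.

```python
import math

def l2(a, b):
    """Euclidean (L2) distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy "index": vectors pre-assigned to two buckets around fixed centroids
centroids = [[0.0, 0.0], [10.0, 10.0]]
buckets = [
    [[0.5, 0.5], [1.0, 0.0]],      # bucket 0
    [[9.0, 10.0], [10.5, 9.5]],    # bucket 1
]

def ivf_search(query, nprobe=1):
    # Rank buckets by centroid distance, then scan only the nprobe closest
    order = sorted(range(len(centroids)), key=lambda i: l2(query, centroids[i]))
    candidates = [v for i in order[:nprobe] for v in buckets[i]]
    return min(candidates, key=lambda v: l2(query, v))

print(ivf_search([9.5, 9.5]))  # → [9.0, 10.0]
```

Larger `nprobe` values scan more buckets, trading speed for recall; with `nprobe` equal to `nlist` the search degenerates to an exhaustive scan.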
Getting Started
To get started with Milvus:
- Install Milvus using Docker (run in a directory containing the Milvus docker-compose file):

docker compose up -d

- Install the Python client:

pip install pymilvus

- Connect to Milvus:

from pymilvus import connections
connections.connect("default", host="localhost", port="19530")

- Create a collection, insert data, and perform searches as shown in the code examples above.
For more detailed instructions, refer to the official Milvus documentation.
Competitor Comparisons
Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
Pros of Qdrant
- Written in Rust, offering high performance and memory safety
- Supports filtering during search, allowing for more precise queries
- Provides a simple and intuitive API for vector search operations
Cons of Qdrant
- Smaller community and ecosystem compared to Milvus
- Limited support for distributed deployments and scalability
- Fewer indexing algorithms and data types supported
Code Comparison
Qdrant (Python client):
from qdrant_client import QdrantClient
client = QdrantClient("localhost", port=6333)
client.search(
collection_name="my_collection",
query_vector=[0.2, 0.1, 0.9, 0.7],
limit=5
)
Milvus (Python client):
from pymilvus import Collection, connections

connections.connect("default", host="localhost", port="19530")
collection = Collection("my_collection")
results = collection.search(
    data=[[0.2, 0.1, 0.9, 0.7]],
    anns_field="embedding",
    param={"metric_type": "L2", "params": {"nprobe": 16}},
    limit=5
)
Both libraries offer straightforward APIs for vector search operations, but Milvus provides more advanced configuration options and supports a wider range of index types and metrics.
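To make the "wider range of index types" point concrete, here is an illustrative (not exhaustive) sketch of two index configurations Milvus accepts, expressed as the plain parameter dicts passed to create_index; the specific tuning values are arbitrary examples, not recommendations.

```python
# Two of the index types Milvus supports, as create_index parameter dicts.
# Tuning values below are arbitrary illustrations.
ivf_flat = {
    "index_type": "IVF_FLAT",
    "metric_type": "L2",
    "params": {"nlist": 1024},
}
hnsw = {
    "index_type": "HNSW",
    "metric_type": "IP",  # inner-product similarity
    "params": {"M": 16, "efConstruction": 200},
}

for cfg in (ivf_flat, hnsw):
    print(cfg["index_type"], cfg["metric_type"])
```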
Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
Pros of Weaviate
- Built-in GraphQL API for easy querying and data manipulation
- Supports multiple vector index types (HNSW, flat)
- Modular architecture with pluggable modules for different functionalities
Cons of Weaviate
- Smaller community and ecosystem compared to Milvus
- Limited support for distributed deployments and scaling
Code Comparison
Milvus (Python client):
from pymilvus import Collection, connections
connections.connect()
collection = Collection("example")
results = collection.search(
data=[[1.0, 2.0, 3.0]],
anns_field="vector_field",
param={"metric_type": "L2", "params": {"nprobe": 10}},
limit=5
)
Weaviate (Python client):
import weaviate
client = weaviate.Client("http://localhost:8080")
results = client.query.get("Example", ["field1", "field2"]).with_near_vector({
"vector": [1.0, 2.0, 3.0]
}).with_limit(5).do()
Both repositories offer vector similarity search capabilities, but they differ in their approach and features. Milvus focuses on high-performance vector indexing and searching, while Weaviate provides a more comprehensive data object storage solution with built-in GraphQL support. The code examples demonstrate the different query styles and client interactions for each system.
Chroma - the AI-native open-source embedding database
Pros of Chroma
- Simpler setup and usage, ideal for smaller-scale projects
- Built-in support for various embedding models
- More lightweight and easier to integrate into existing Python projects
Cons of Chroma
- Limited scalability compared to Milvus for large-scale vector search
- Fewer advanced features and customization options
- Less mature ecosystem and community support
Code Comparison
Chroma:
import chromadb
client = chromadb.Client()
collection = client.create_collection("my_collection")
collection.add(documents=["doc1", "doc2"], metadatas=[{"source": "a"}, {"source": "b"}], ids=["id1", "id2"])
results = collection.query(query_texts=["query"], n_results=2)
Milvus:
from pymilvus import Collection, connections

connections.connect()
collection = Collection("my_collection")
# Milvus stores vectors, not raw text: documents must be embedded first.
# embed() is a placeholder for your embedding model.
doc_vectors = embed(["doc1", "doc2"])
collection.insert([["id1", "id2"], doc_vectors])
results = collection.search(
    data=embed(["query"]),
    anns_field="vector_field",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=2
)
Both Chroma and Milvus offer vector database solutions, but they cater to different use cases. Chroma is more suitable for smaller projects and quick integrations, while Milvus excels in large-scale, high-performance vector search applications. The code examples demonstrate the simplicity of Chroma's API compared to Milvus' more detailed configuration options.
Elasticsearch - Free and Open Source, Distributed, RESTful Search Engine
Pros of Elasticsearch
- Mature and widely adopted full-text search engine with extensive documentation
- Powerful query language (DSL) for complex searches and aggregations
- Supports a wide range of data types and use cases beyond vector search
Cons of Elasticsearch
- Less optimized for high-dimensional vector search compared to Milvus
- Can be resource-intensive and complex to set up and maintain at scale
- Vector search support (kNN over dense vectors) is newer and less specialized than Milvus's purpose-built indexing
Code Comparison
Elasticsearch (indexing a vector into an index whose mapping defines my_vector as a dense_vector field):
PUT /my-index/_doc/1
{
  "my_vector": [1.5, 2.5, 3.5, 4.5, 5.5]
}
Milvus (inserting vectors):
# pymilvus insert is column-oriented: one list per field
collection.insert([[
    [1.5, 2.5, 3.5, 4.5, 5.5],
    [2.5, 3.5, 4.5, 5.5, 6.5]
]])
Both Elasticsearch and Milvus are powerful tools for search and data storage, but they have different strengths. Elasticsearch excels in full-text search and general-purpose document storage, while Milvus is specifically designed for efficient vector similarity search. The choice between them depends on the specific requirements of your project, such as the type of data you're working with and the scale of vector operations needed.
Vespa - AI + Data, online. https://vespa.ai
Pros of Vespa
- More comprehensive search and recommendation capabilities, including real-time big data serving and content-based ranking
- Built-in support for machine learning model serving and online feature computation
- Flexible schema-less data model allowing for easy integration of structured and unstructured data
Cons of Vespa
- Steeper learning curve due to its more complex architecture and broader feature set
- Higher resource requirements for deployment and operation, especially for smaller-scale applications
- Less focused on vector similarity search compared to Milvus' specialized approach
Code Comparison
Milvus (Python client):
from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections

connections.connect("default", host="localhost", port="19530")
fields = [
    FieldSchema("id", DataType.INT64, is_primary=True),
    FieldSchema("embedding", DataType.FLOAT_VECTOR, dim=128)
]
collection = Collection("example", CollectionSchema(fields))
Vespa (schema definition):
schema example {
    document example {
        field id type string {
            indexing: summary | attribute
        }
        field embedding type tensor<float>(x[128]) {
            indexing: attribute | index
        }
    }
}
README
What is Milvus?
Milvus is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.
Milvus 2.0 is a cloud-native vector database with storage and computation separated by design. All components in this refactored version of Milvus are stateless to enhance elasticity and flexibility. For more architecture details, see Milvus Architecture Overview.
Milvus was released under the open-source Apache License 2.0 in October 2019. It is currently a graduate project under LF AI & Data Foundation.
Key features
Millisecond search on trillion vector datasets
Average latency measured in milliseconds on trillion vector datasets.
Simplified unstructured data management
Reliable, always-on vector database
Milvus's built-in replication and failover/failback features ensure data and applications can maintain business continuity in the event of a disruption.
Highly scalable and elastic
Component-level scalability makes it possible to scale up and down on demand. Milvus can autoscale at a component level according to the load type, making resource scheduling much more efficient.
Hybrid search
Milvus 2.4 introduced multi-vector support and a hybrid search framework: users can bring several vector fields (up to 10) into a single collection. These vectors in different columns represent diverse facets of data, originating from different embedding models or undergoing distinct processing methods. The results of hybrid searches are integrated using reranking strategies, such as Reciprocal Rank Fusion (RRF) and Weighted Scoring. This feature is particularly useful in comprehensive search scenarios, such as identifying the most similar person in a vector library based on various attributes like pictures, voice, fingerprints, etc. For details, refer to Hybrid Search.
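As a standalone sketch of how Reciprocal Rank Fusion combines per-field result lists (an illustration of the reranking strategy itself, not Milvus's internal implementation; the document IDs are toy values):

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked ID lists with Reciprocal Rank Fusion.

    Each ID scores sum(1 / (k + rank)) over the lists it appears in,
    where rank is its 1-based position in that list.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Higher fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Ranked hits from two vector fields (e.g. image and voice embeddings)
image_hits = ["a", "b", "c"]
voice_hits = ["b", "c", "d"]
print(rrf_fuse([image_hits, voice_hits]))  # → ['b', 'c', 'a', 'd']
```

IDs appearing near the top of multiple lists ("b", "c") outrank IDs that score highly in only one list, which is why RRF is a common default for fusing heterogeneous rankings.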
Unified Lambda structure
Milvus combines stream and batch processing for data storage to balance timeliness and efficiency. Its unified interface makes vector similarity search a breeze.
Community supported, industry recognized
With over 1,000 enterprise users, 27,000+ stars on GitHub, and an active open-source community, you're not alone when you use Milvus. As a graduate project under the LF AI & Data Foundation, Milvus has institutional support.
Quick start
Start with Zilliz Cloud
Zilliz Cloud is a fully managed service on cloud and the simplest way to deploy LF AI Milvus®. See Zilliz Cloud and start your free trial.
Install Milvus
Build Milvus from source code
Check the requirements first.
Linux systems (Ubuntu 20.04 or later recommended):
go: >= 1.21
cmake: >= 3.26.4
gcc: 9.5
python: > 3.8 and <= 3.11
MacOS systems with x86_64 (Big Sur 11.5 or later recommended):
go: >= 1.21
cmake: >= 3.26.4
llvm: >= 15
python: > 3.8 and <= 3.11
MacOS systems with Apple Silicon (Monterey 12.0.1 or later recommended):
go: >= 1.21 (Arch=ARM64)
cmake: >= 3.26.4
llvm: >= 15
python: > 3.8 and <= 3.11
Clone Milvus repo and build.
# Clone github repository.
$ git clone https://github.com/milvus-io/milvus.git
# Install third-party dependencies.
$ cd milvus/
$ ./scripts/install_deps.sh
# Compile Milvus.
$ make
For the full story, see developer's documentation.
IMPORTANT The master branch is for the development of Milvus v2.0. On March 9th, 2021, we released Milvus v1.0, the first stable version of Milvus with long-term support. To use Milvus v1.0, switch to branch 1.0.
Milvus 2.0 vs. 1.x: Cloud-native, distributed architecture, highly scalable, and more
See Milvus 2.0 vs. 1.x for more information.
Real world demos
Image Search
Images made searchable. Instantaneously return the most similar images from a massive database.
Chatbots
Interactive digital customer service that saves users time and businesses money.
Chemical Structure Search
Blazing fast similarity search, substructure search, or superstructure search for a specified molecule.
Bootcamps
Milvus bootcamp is designed to expose users to both the simplicity and depth of the vector database. Discover how to run benchmark tests as well as build similarity search applications spanning chatbots, recommendation systems, reverse image search, molecular search, and much more.
Contributing
Contributions to Milvus are welcome from everyone. See Guidelines for Contributing for details on submitting patches and the contribution workflow. See our community repository to learn about our governance and access more community resources.
All contributors
Documentation
For guidance on installation, development, deployment, and administration, check out Milvus Docs. For technical milestones and enhancement proposals, check out the Milvus Confluence.
SDK
The implemented SDK and its API documentation are listed below:
- PyMilvus SDK
- Java SDK
- Go SDK
- C++ SDK (under development)
- Node.js SDK
- Rust SDK (under development)
- C# SDK (under development)
Attu
Attu provides an intuitive and efficient GUI for Milvus.
Community
Join the Milvus community on Discord to share your suggestions, advice, and questions with our engineering team.
You can also check out our FAQ page to discover solutions or answers to your issues or questions.
Subscribe to Milvus mailing lists:
Follow Milvus on social media:
Reference
Reference to cite when you use Milvus in a research paper:
@inproceedings{2021milvus,
title={Milvus: A Purpose-Built Vector Data Management System},
author={Wang, Jianguo and Yi, Xiaomeng and Guo, Rentong and Jin, Hai and Xu, Peng and Li, Shengjun and Wang, Xiangyu and Guo, Xiangzhou and Li, Chengming and Xu, Xiaohai and others},
booktitle={Proceedings of the 2021 International Conference on Management of Data},
pages={2614--2627},
year={2021}
}
@article{2022manu,
title={Manu: a cloud native vector database management system},
author={Guo, Rentong and Luan, Xiaofan and Xiang, Long and Yan, Xiao and Yi, Xiaomeng and Luo, Jigao and Cheng, Qianya and Xu, Weizhi and Luo, Jiarui and Liu, Frank and others},
journal={Proceedings of the VLDB Endowment},
volume={15},
number={12},
pages={3548--3561},
year={2022},
publisher={VLDB Endowment}
}
Acknowledgments
Milvus adopts dependencies from the following:
- Thanks to FAISS for the excellent search library.
- Thanks to etcd for providing great open-source key-value store tools.
- Thanks to Pulsar for its wonderful distributed pub-sub messaging system.
- Thanks to Tantivy for its full-text search engine library written in Rust.
- Thanks to RocksDB for the powerful storage engines.
Milvus is adopted by the following open-source projects:
- Towhee: a flexible, application-oriented framework for computing embedding vectors over unstructured data
- Haystack: an open-source NLP framework that leverages Transformer models
- LangChain: building applications with LLMs through composability
- LlamaIndex: a data framework for your LLM applications
- GPTCache: a library for creating a semantic cache to store responses from LLM queries