Top Related Projects
- Annoy: Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
- NMSLIB (Non-Metric Space Library): An efficient similarity search library and a toolkit for evaluation of k-NN methods for generic non-metric spaces
- UMAP: Uniform Manifold Approximation and Projection
- ann-benchmarks: Benchmarks of approximate nearest neighbor libraries in Python
- Milvus: A cloud-native vector database, storage for next generation AI applications
- google-research: Google Research
Quick Overview
Faiss (Facebook AI Similarity Search) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. Faiss is written in C++ with complete wrappers for Python/numpy.
Pros
- High performance and scalability for large datasets
- Supports both CPU and GPU implementations
- Offers a wide range of indexing algorithms for different use cases
- Well-documented with extensive examples and tutorials
Cons
- Steep learning curve for beginners
- Limited support for sparse vectors
- Can be complex to fine-tune for optimal performance
- Requires careful memory management for very large datasets
Code Examples
- Creating an index and adding vectors:
import numpy as np
import faiss
d = 64 # dimension
nb = 100000 # database size
nq = 10000 # nb of queries
np.random.seed(1234) # make reproducible
xb = np.random.random((nb, d)).astype('float32')
xq = np.random.random((nq, d)).astype('float32')
index = faiss.IndexFlatL2(d) # build the index
print(index.is_trained)
index.add(xb) # add vectors to the index
print(index.ntotal)
- Searching the index:
k = 4 # we want to see 4 nearest neighbors
D, I = index.search(xq, k) # actual search
print(I[:5]) # neighbors of the 5 first queries
print(D[:5]) # distances of the 5 first queries
- Using GPU for faster processing:
res = faiss.StandardGpuResources() # use a single GPU
index_flat = faiss.IndexFlatL2(d) # build a flat (CPU) index
gpu_index_flat = faiss.index_cpu_to_gpu(res, 0, index_flat) # make it into a GPU index
gpu_index_flat.add(xb) # add vectors to the index
D, I = gpu_index_flat.search(xq, k) # search
Getting Started
To get started with Faiss:
- Install Faiss:

pip install faiss-cpu  # CPU-only version
# or
pip install faiss-gpu  # GPU version (requires CUDA)
- Import and use Faiss in your Python code:

import faiss
import numpy as np

# Create some sample data
d = 64       # dimension
nb = 100000  # database size
xb = np.random.random((nb, d)).astype('float32')

# Create an index
index = faiss.IndexFlatL2(d)
index.add(xb)

# Search the index
k = 4  # number of nearest neighbors
xq = np.random.random((1, d)).astype('float32')
D, I = index.search(xq, k)
print(f"Distances: {D}\nIndices: {I}")
Competitor Comparisons
Annoy: Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
Pros of Annoy
- Simpler implementation and easier to use, especially for beginners
- Better support for memory-mapping, allowing for faster load times
- More flexible index building with support for incremental additions
Cons of Annoy
- Generally slower search performance compared to FAISS
- Limited to angular and Euclidean distance metrics
- No GPU support, unlike FAISS's optional GPU acceleration
Code Comparison
Annoy:
from annoy import AnnoyIndex
import random

f = 40  # length of item vector
t = AnnoyIndex(f, 'angular')
for i in range(1000):
    v = [random.gauss(0, 1) for z in range(f)]
    t.add_item(i, v)
t.build(10)  # 10 trees
t.save('test.ann')
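The memory-mapping advantage noted above shows up when reloading: Annoy mmap-loads a saved index almost instantly. A minimal sketch continuing the snippet above (file name as saved there):

from annoy import AnnoyIndex

u = AnnoyIndex(40, 'angular')    # dimensionality must match the saved index
u.load('test.ann')               # memory-maps the file, so this is near-instant
print(u.get_nns_by_item(0, 10))  # 10 nearest neighbors of item 0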
FAISS:
import faiss
import numpy as np

d = 64       # dimension
nb = 100000  # database size
nq = 10000   # nb of queries
k = 4        # number of nearest neighbors
xb = np.random.random((nb, d)).astype('float32')
xq = np.random.random((nq, d)).astype('float32')
index = faiss.IndexFlatL2(d)
index.add(xb)
D, I = index.search(xq, k)
Both libraries offer efficient solutions for approximate nearest neighbor search, but FAISS generally provides better performance and more advanced features, while Annoy is simpler to use and offers some unique advantages like memory-mapping.
Non-Metric Space Library (NMSLIB): An efficient similarity search library and a toolkit for evaluation of k-NN methods for generic non-metric spaces.
Pros of nmslib
- Supports a wider range of distance metrics, including non-metric spaces
- Generally faster for high-dimensional data and large datasets
- More flexible API with support for various index types
Cons of nmslib
- Less actively maintained compared to FAISS
- Fewer built-in GPU acceleration options
- Documentation can be less comprehensive
Code Comparison
nmslib:
import nmslib
import numpy as np

# sample data: 1,000 vectors of dimension 64
data = np.random.random((1000, 64)).astype('float32')
query = data[0]

index = nmslib.init(method='hnsw', space='cosinesimil')
index.addDataPointBatch(data)
index.createIndex({'post': 2}, print_progress=True)
ids, distances = index.knnQuery(query, k=10)
FAISS:
import faiss

# reuses `data` from the nmslib snippet; Faiss queries must be a 2D array
index = faiss.IndexFlatL2(data.shape[1])
index.add(data)
distances, ids = index.search(data[:1], k=10)
Both libraries offer efficient nearest neighbor search capabilities, but nmslib provides more flexibility in terms of distance metrics and index types. FAISS, on the other hand, has better GPU support and is more actively maintained. The choice between the two depends on specific use cases and requirements.
UMAP: Uniform Manifold Approximation and Projection
Pros of UMAP
- Focuses on dimensionality reduction and visualization
- Better preserves global structure in low-dimensional embeddings
- Faster runtime for large datasets compared to t-SNE
Cons of UMAP
- Limited to dimensionality reduction, not optimized for similarity search
- May require more parameter tuning to achieve optimal results
- Less mature ecosystem compared to FAISS
Code Comparison
UMAP example:
import umap
import numpy as np

data = np.random.random((1000, 64)).astype('float32')
reducer = umap.UMAP()  # reduces to 2 dimensions by default
embedding = reducer.fit_transform(data)
FAISS example:
import faiss
# d, xb, xq and k as defined in the Quick Overview examples above
index = faiss.IndexFlatL2(d)
index.add(xb)
D, I = index.search(xq, k)
UMAP is primarily used for dimensionality reduction and visualization, while FAISS is designed for efficient similarity search and clustering in high-dimensional spaces. UMAP offers better preservation of global structure in low-dimensional embeddings, making it suitable for exploratory data analysis. FAISS, on the other hand, provides a wide range of indexing algorithms optimized for fast nearest neighbor search, making it more versatile for large-scale information retrieval tasks.
ann-benchmarks: Benchmarks of approximate nearest neighbor libraries in Python
Pros of ann-benchmarks
- Comprehensive benchmarking suite for various ANN algorithms
- Language-agnostic, supporting implementations in multiple programming languages
- Regularly updated with new algorithms and datasets
Cons of ann-benchmarks
- Primarily focused on benchmarking, not providing a production-ready library
- May have higher overhead due to supporting multiple languages and algorithms
- Less optimized for specific use cases compared to specialized libraries
Code Comparison
ann-benchmarks (Python):
import annoy
import numpy as np

X_train = np.random.random((1000, 40)).astype('float32')
index = annoy.AnnoyIndex(f=40, metric='angular')
for i, v in enumerate(X_train):
    index.add_item(i, v)
index.build(10)  # 10 trees
FAISS (C++):
faiss::IndexFlatL2 index(d);                 // d: vector dimension
index.add(n, xb);                            // n vectors stored in float array xb
index.search(nq, xq, k, distances, labels);  // nq queries in xq
Summary
While ann-benchmarks offers a comprehensive benchmarking suite for various ANN algorithms, FAISS provides a highly optimized library focused on efficient similarity search. ann-benchmarks is ideal for comparing different algorithms across languages, while FAISS is better suited for production use in large-scale applications requiring high performance.
Milvus: A cloud-native vector database, storage for next generation AI applications
Pros of Milvus
- Designed as a cloud-native vector database, offering better scalability and distributed processing
- Supports multiple index types and similarity metrics, providing more flexibility
- Includes built-in data management features like data persistence and CRUD operations
Cons of Milvus
- Higher complexity and resource requirements for setup and maintenance
- Steeper learning curve due to more advanced features and configurations
- May be overkill for simpler use cases or smaller-scale applications
Code Comparison
Faiss (Python):
import faiss
# assumes `vectors` and `query_vectors` (2D float32 arrays), `dimension` and `k` are defined
index = faiss.IndexFlatL2(dimension)
index.add(vectors)
D, I = index.search(query_vectors, k)
Milvus (Python):
# legacy pymilvus 1.x-style client, as in the original snippet;
# current Milvus releases use the pymilvus Collection API instead
from milvus import Milvus

client = Milvus(host='localhost', port='19530')
client.create_collection({'collection_name': 'example', 'dimension': dimension})
client.insert('example', vectors)
status, results = client.search(collection_name='example', query_records=query_vectors, top_k=k)
Both libraries offer efficient vector similarity search, but Milvus provides a more comprehensive solution for large-scale, distributed vector data management, while Faiss focuses on high-performance vector indexing and searching with a simpler API.
Google Research
Pros of google-research
- Broader scope, covering various research areas in AI and machine learning
- More frequent updates and contributions from Google researchers
- Includes implementations of cutting-edge algorithms and techniques
Cons of google-research
- Less focused on a specific problem domain, potentially harder to navigate
- May have less consistent documentation and code structure across projects
- Some projects might be experimental or not production-ready
Code comparison
faiss:
faiss::IndexFlatL2 index(d);                 // d: vector dimension
index.add(nb, xb);                           // nb vectors stored in float array xb
index.search(nq, xq, k, distances, labels);  // nq queries in xq
google-research (example from BERT):
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
model = modeling.BertModel(
    config=bert_config,
    is_training=is_training,
    input_ids=input_ids,
    input_mask=input_mask,
    token_type_ids=segment_ids,
    use_one_hot_embeddings=use_one_hot_embeddings)
Summary
faiss is a specialized library for efficient similarity search and clustering of dense vectors, while google-research is a diverse collection of research projects covering various AI and machine learning topics. faiss offers a more focused and optimized solution for specific use cases, while google-research provides a broader range of cutting-edge research implementations but may require more effort to navigate and integrate into production systems.
README
Faiss
Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed primarily at Meta's Fundamental AI Research group.
News
See CHANGELOG.md for detailed information about latest features.
Introduction
Faiss contains several methods for similarity search. It assumes that the instances are represented as vectors and are identified by an integer, and that the vectors can be compared with L2 (Euclidean) distances or dot products. Vectors that are similar to a query vector are those that have the lowest L2 distance or the highest dot product with the query vector. It also supports cosine similarity, since this is a dot product on normalized vectors.
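As a concrete illustration of the cosine-similarity point, normalizing vectors and searching with an inner-product index yields cosine scores; a minimal sketch (the random data is purely illustrative):

import numpy as np
import faiss

d = 64
xb = np.random.random((1000, d)).astype('float32')
xq = np.random.random((5, d)).astype('float32')

faiss.normalize_L2(xb)        # in-place L2 normalization
faiss.normalize_L2(xq)
index = faiss.IndexFlatIP(d)  # inner product == cosine on unit vectors
index.add(xb)
D, I = index.search(xq, 4)    # D holds cosine similarities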
Some of the methods, like those based on binary vectors and compact quantization codes, solely use a compressed representation of the vectors and do not require keeping the original vectors. This generally comes at the cost of a less precise search, but these methods can scale to billions of vectors in main memory on a single server. Other methods, like HNSW and NSG, add an indexing structure on top of the raw vectors to make searching more efficient.
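As a sketch of such a compressed-code index, IndexIVFPQ stores each vector as an 8-byte product-quantization code instead of 256 bytes of raw floats (the parameter values here are illustrative, not tuned):

import numpy as np
import faiss

d, nlist, m = 64, 100, 8  # dimension, number of IVF cells, PQ sub-quantizers
xb = np.random.random((10000, d)).astype('float32')

quantizer = faiss.IndexFlatL2(d)                     # coarse quantizer for the cells
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, 8)  # 8 bits per sub-quantizer
index.train(xb)    # unlike flat indexes, this one requires training
index.add(xb)
index.nprobe = 10  # visit 10 cells per query for better recall
D, I = index.search(xb[:5], 4)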
The GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU indexes can be used as a drop-in replacement for the CPU indexes (e.g., replace IndexFlatL2 with GpuIndexFlatL2) and copies to/from GPU memory are handled automatically. Results will, however, be faster if both input and output remain resident on the GPU. Both single- and multi-GPU usage are supported.
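For the multi-GPU case, a CPU index can be cloned across all visible GPUs in one call; a minimal sketch (assumes a faiss-gpu build and at least one CUDA device):

import faiss

cpu_index = faiss.IndexFlatL2(64)
gpu_index = faiss.index_cpu_to_all_gpus(cpu_index)  # replicates across all visible GPUs by default
# add() and search() then behave exactly as on the CPU index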
Installing
Faiss comes with precompiled libraries for Anaconda in Python; see faiss-cpu and faiss-gpu. The library is mostly implemented in C++, with a BLAS implementation as its only hard dependency. Optional GPU support is provided via CUDA, and the Python interface is also optional. It compiles with cmake. See INSTALL.md for details.
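The Anaconda install typically looks like the following (channel and package names as published by the Faiss team; INSTALL.md has the authoritative instructions):

conda install -c pytorch faiss-cpu  # CPU-only version
conda install -c pytorch faiss-gpu  # GPU version (requires CUDA)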
How Faiss works
Faiss is built around an index type that stores a set of vectors and provides a function to search in them with L2 and/or dot-product vector comparison. Some index types are simple baselines, such as exact search. Most of the available indexing structures correspond to various trade-offs (see the sketch after this list) with respect to:
- search time
- search quality
- memory used per index vector
- training time
- adding time
- need for external data for unsupervised training
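These trade-offs are commonly navigated through the index_factory string syntax, which builds an index from a short description; a sketch contrasting an exact baseline with two approximate structures (the index strings are illustrative):

import faiss

d = 64
exact = faiss.index_factory(d, "Flat")        # exact search, raw vectors in RAM
ivfpq = faiss.index_factory(d, "IVF100,PQ8")  # compressed codes, requires training
hnsw = faiss.index_factory(d, "HNSW32")       # graph index: fast search, extra memory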
The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and approximate (compressed-domain) nearest neighbor search implementation for high-dimensional vectors, fastest Lloyd's k-means, and fastest small k-selection algorithm known. The implementation is detailed here.
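The Lloyd's k-means mentioned here is exposed directly in the Python API; a minimal sketch on random data:

import numpy as np
import faiss

x = np.random.random((10000, 64)).astype('float32')
kmeans = faiss.Kmeans(64, 256, niter=20, gpu=False)  # set gpu=True for the GPU version
kmeans.train(x)
print(kmeans.centroids.shape)         # (256, 64)
D, I = kmeans.index.search(x[:5], 1)  # assign points to their nearest centroid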
Full documentation of Faiss
The following are entry points for documentation:
- the full documentation can be found on the wiki page, including a tutorial, a FAQ and a troubleshooting section
- the doxygen documentation gives per-class information extracted from code comments
- to reproduce results from our research papers, Polysemous codes and Billion-scale similarity search with GPUs, refer to the benchmarks README. For Link and code: Fast indexing with graphs and compact regression codes, see the link_and_code README
Authors
The main authors of Faiss are:
- Hervé Jégou initiated the Faiss project and wrote its first implementation
- Matthijs Douze implemented most of the CPU Faiss
- Jeff Johnson implemented all of the GPU Faiss
- Lucas Hosseini implemented the binary indexes and the build system
- Chengqi Deng implemented NSG, NNdescent and much of the additive quantization code.
- Alexandr Guzhva contributed many optimizations: SIMD, memory allocation and layout, fast decoding kernels for vector codecs, etc.
- Gergely Szilvasy worked on the build system and benchmarking framework.
Reference
References to cite when you use Faiss in a research paper:
@article{douze2024faiss,
  title={The Faiss library},
  author={Matthijs Douze and Alexandr Guzhva and Chengqi Deng and Jeff Johnson and Gergely Szilvasy and Pierre-Emmanuel Mazaré and Maria Lomeli and Lucas Hosseini and Hervé Jégou},
  year={2024},
  eprint={2401.08281},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
For the GPU version of Faiss, please cite:
@article{johnson2019billion,
  title={Billion-scale similarity search with {GPUs}},
  author={Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}},
  journal={IEEE Transactions on Big Data},
  volume={7},
  number={3},
  pages={535--547},
  year={2019},
  publisher={IEEE}
}
Join the Faiss community
For public discussion of Faiss or for questions, there is a Facebook group at https://www.facebook.com/groups/faissusers/
We monitor the issues page of the repository. You can report bugs, ask questions, etc.
Legal
Faiss is MIT-licensed; refer to the LICENSE file in the top-level directory.
Copyright © Meta Platforms, Inc.