Top Related Projects
Parallel computing with task scheduling
Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
Dataframes powered by a multithreaded, vectorized query engine, written in Rust
cuDF - GPU DataFrame Library
A Python package for manipulating 2-dimensional tabular data structures
Quick Overview
Modin is a pandas-compatible, distributed DataFrame library that allows users to speed up their pandas workflows by transparently distributing the computation across multiple cores and machines. It aims to provide a drop-in replacement for pandas, allowing users to leverage the power of distributed computing without having to rewrite their existing code.
Pros
- Performance Boost: Modin can significantly speed up data processing tasks by distributing the workload across multiple cores or machines, resulting in faster execution times.
- Pandas Compatibility: Modin provides a pandas-compatible API, allowing users to seamlessly integrate it into their existing workflows without having to rewrite their code.
- Scalability: Modin can scale to handle larger datasets and more complex computations by leveraging distributed computing resources.
- Ease of Use: Modin is designed to be easy to use, with a familiar API that requires minimal changes to existing pandas code.
Cons
- Dependency on Distributed Computing Frameworks: Modin relies on distributed computing frameworks like Dask or Ray, which can add complexity and require additional setup and configuration (see the sketch after this list for reusing an existing Ray session).
- Limited Functionality: While Modin aims to be a drop-in replacement for pandas, it may not yet support all the features and functionality of the original library.
- Performance Overhead: The overhead of distributing the computation across multiple cores or machines can sometimes offset the performance gains, especially for smaller datasets or simple operations.
- Learning Curve: Users who are new to distributed computing may need to invest time in understanding the underlying concepts and frameworks used by Modin.
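As noted above, if you already manage Ray (or Dask) yourself, the setup burden is smaller than it sounds: Modin's Ray engine reuses an already-initialized Ray runtime instead of starting its own. The sketch below assumes Ray is installed and the Ray engine is used; num_cpus=4 and data.csv are illustrative.
import ray
ray.init(num_cpus=4)            # start (or connect to) Ray with your own settings

import modin.pandas as pd       # Modin picks up the already-initialized Ray runtime
df = pd.read_csv('data.csv')    # hypothetical file
print(df.head())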
Code Examples
import modin.pandas as pd
# Load a CSV file into a Modin DataFrame
df = pd.read_csv('data.csv')
# Perform a groupby operation and calculate the mean
grouped_df = df.groupby('category')['value'].mean()
# Filter the DataFrame based on a condition
filtered_df = df[df['value'] > 100]
# Apply a custom function to each row of the DataFrame
def double_value(row):
    return row['value'] * 2

df['doubled_value'] = df.apply(double_value, axis=1)
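Because Modin mirrors the pandas API, the rest of a typical workflow reads the same. The snippet below is a minimal sketch that continues the example above; the lookup.csv file and its columns are hypothetical.
# Join against a second (hypothetical) table, sort, and write the result back out,
# all through the familiar pandas API
lookup = pd.read_csv('lookup.csv')
merged = df.merge(lookup, on='category', how='left')
top = merged.sort_values('value', ascending=False).head(10)
top.to_csv('top_rows.csv', index=False)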
Getting Started
To get started with Modin, you can follow these steps:
- Install Modin using pip:
pip install "modin[all]"  # installs Modin together with the Ray and Dask engines
- Import the Modin pandas API and use it in your code:
import modin.pandas as pd
# Load a CSV file into a Modin DataFrame
df = pd.read_csv('data.csv')
# Perform operations on the DataFrame
print(df.head())
- Optionally, you can configure Modin to use a specific distributed computing framework, such as Dask or Ray, by setting the MODIN_ENGINE environment variable (a Python-level alternative is sketched after this list):
export MODIN_ENGINE=dask
- Explore the Modin documentation and API to learn more about the available features and functionality: https://modin.readthedocs.io/en/latest/
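As referenced in the optional step above, the engine can also be selected from Python via modin.config instead of the MODIN_ENGINE environment variable; do this before the first Modin operation. A minimal sketch (data.csv is a placeholder):
import modin.config as modin_cfg
modin_cfg.Engine.put('dask')     # or 'ray'; equivalent to MODIN_ENGINE=dask

import modin.pandas as pd
df = pd.read_csv('data.csv')
print(df.head())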
Competitor Comparisons
Parallel computing with task scheduling
Pros of Dask
- More mature and widely adopted in the data science community
- Supports a broader range of data structures and operations beyond pandas
- Offers advanced features like distributed computing and task scheduling
Cons of Dask
- Steeper learning curve, especially for users familiar with pandas
- May require more setup and configuration for distributed computing
- Performance gains might be less noticeable for smaller datasets
Code Comparison
Modin:
import modin.pandas as pd
df = pd.read_csv("large_file.csv")
result = df.groupby("column").mean()
Dask:
import dask.dataframe as dd
df = dd.read_csv("large_file.csv")
result = df.groupby("column").mean().compute()
Key Differences
- Modin aims to be a drop-in replacement for pandas, requiring minimal code changes
- Dask offers more flexibility and control over distributed computing
- Modin focuses on optimizing pandas operations, while Dask provides a broader ecosystem for various data processing tasks
Both projects aim to improve performance and scalability for data processing, but they take different approaches. Modin is ideal for users who want to speed up existing pandas workflows with minimal changes, while Dask is better suited for more complex distributed computing scenarios and larger-scale data processing tasks.
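To make the "flexibility and control" point concrete, here is a minimal sketch of a Dask pipeline: operations build a lazy task graph, a Client pins the work to a specific (local or remote) cluster, and nothing runs until .compute() is called. The file and column names are illustrative.
import dask.dataframe as dd
from dask.distributed import Client

client = Client(n_workers=4)      # local cluster; could instead point at a remote scheduler
df = dd.read_csv("large_file.csv")
pipeline = df[df["value"] > 0].groupby("column")["value"].mean()   # lazy: nothing executed yet
result = pipeline.compute()       # the whole task graph runs here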
Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
Pros of Vaex
- Designed for handling large datasets (up to 1 billion rows) efficiently
- Supports out-of-core computing, allowing processing of datasets larger than RAM
- Offers advanced visualization capabilities for big data exploration
Cons of Vaex
- Less compatible with existing pandas code, requiring more modifications
- Smaller community and ecosystem compared to Modin
- Limited support for certain pandas operations and data types
Code Comparison
Vaex:
import vaex
df = vaex.from_csv('large_dataset.csv')
result = df.groupby('category').agg({'value': 'mean'})
Modin:
import modin.pandas as pd
df = pd.read_csv('large_dataset.csv')
result = df.groupby('category')['value'].mean()
Both Vaex and Modin aim to improve performance for large-scale data processing, but they take different approaches. Vaex focuses on out-of-core computing and visualization for extremely large datasets, while Modin aims to be a drop-in replacement for pandas with improved performance. Vaex may require more code changes but can handle larger datasets, whereas Modin offers better pandas compatibility but may have limitations with extremely large datasets.
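A minimal sketch of the Vaex style described above, assuming the data has already been converted to a memory-mappable format such as HDF5 (the file and column names are illustrative): files open instantly without loading into RAM, and derived columns are virtual, evaluated lazily in chunks.
import vaex

df = vaex.open('large_dataset.hdf5')      # memory-mapped; not loaded into RAM
df['value_doubled'] = df['value'] * 2     # virtual column, computed on demand
print(df.mean(df['value_doubled']))       # aggregation streams over the data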
Dataframes powered by a multithreaded, vectorized query engine, written in Rust
Pros of Polars
- Faster performance, especially for large datasets
- More memory-efficient due to its columnar data structure
- Native implementation in Rust, offering better low-level optimizations
Cons of Polars
- Smaller ecosystem and fewer integrations compared to Modin
- Steeper learning curve for users familiar with pandas
- Less comprehensive documentation and community support
Code Comparison
Modin:
import modin.pandas as pd
df = pd.read_csv("large_file.csv")
result = df.groupby("category").agg({"sales": "sum"})
Polars:
import polars as pl
df = pl.read_csv("large_file.csv")
result = df.group_by("category").agg(pl.sum("sales"))
Key Differences
- Modin aims to be a drop-in replacement for pandas, maintaining API compatibility
- Polars introduces a new API, focusing on performance and efficiency
- Modin distributes computations across cores/clusters, while Polars optimizes single-machine performance
- Polars offers both eager and lazy execution modes, providing more flexibility in query optimization
Use Cases
- Choose Modin for easier migration from pandas and distributed computing needs
- Opt for Polars when working with large datasets on a single machine and prioritizing performance
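A minimal sketch of the lazy execution mode mentioned above: scan_csv builds a query plan that Polars optimizes (for example, pushing the filter down) before anything is read, and execution happens only on collect(). Column and file names follow the example above.
import polars as pl

result = (
    pl.scan_csv("large_file.csv")          # lazy: builds a plan, reads nothing yet
      .filter(pl.col("sales") > 0)
      .group_by("category")
      .agg(pl.col("sales").sum())
      .collect()                           # the optimized plan executes here
)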
cuDF - GPU DataFrame Library
Pros of cudf
- Leverages GPU acceleration for faster data processing
- Designed specifically for large-scale data analytics
- Integrates well with other RAPIDS ecosystem libraries
Cons of cudf
- Requires NVIDIA GPU hardware
- Limited to operations that can be efficiently parallelized on GPUs
- Steeper learning curve compared to pandas-like APIs
Code Comparison
cudf:
import cudf
df = cudf.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
result = df.groupby('A').sum()
modin:
import modin.pandas as pd
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
result = df.groupby('A').sum()
Key Differences
- cudf focuses on GPU-accelerated data processing, while modin aims to scale pandas operations across CPUs or GPUs
- cudf requires NVIDIA GPUs, whereas modin can run on various hardware configurations
- cudf's API is similar to pandas but not identical, while modin strives for near-perfect pandas compatibility
- cudf is part of the RAPIDS ecosystem, offering integration with other GPU-accelerated libraries
- modin provides a more familiar pandas-like experience, making it easier for existing pandas users to adopt
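A minimal sketch of how cuDF typically enters an existing pandas workflow: copy a DataFrame to the GPU, compute there, and bring the (usually much smaller) result back. It assumes an NVIDIA GPU with RAPIDS installed; the data is illustrative.
import pandas as pd
import cudf

pdf = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
gdf = cudf.from_pandas(pdf)        # copy the data into GPU memory
result = gdf.groupby('A').sum()    # computed on the GPU
print(result.to_pandas())          # move the result back to the CPU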
A Python package for manipulating 2-dimensional tabular data structures
Pros of datatable
- Faster performance for large datasets, especially for operations like grouping and sorting
- Memory-efficient, using less RAM for data processing
- Supports out-of-memory computations for datasets larger than available RAM
Cons of datatable
- Less comprehensive API compared to pandas, which Modin aims to replicate
- Steeper learning curve due to differences from pandas syntax
- Smaller community and ecosystem compared to Modin and pandas
Code Comparison
Modin:
import modin.pandas as pd
df = pd.read_csv("large_file.csv")
result = df.groupby("column").mean()
datatable:
import datatable as dt
from datatable import f, by
df = dt.fread("large_file.csv")
result = df[:, dt.mean(f[:]), by("column")]
Both libraries aim to improve performance for large-scale data processing, but they take different approaches. Modin focuses on providing a pandas-like API with distributed computing, while datatable offers a new syntax optimized for speed and memory efficiency. The choice between them depends on specific use cases, familiarity with pandas, and performance requirements.
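A common middle ground, sketched below, is to use datatable only for its fast multi-threaded fread and then convert to pandas (or Modin) for the rest of the workflow; the file name is illustrative.
import datatable as dt

frame = dt.fread("large_file.csv")   # fast, multi-threaded CSV reader
pdf = frame.to_pandas()              # hand off to pandas for the familiar API
print(pdf.describe())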
README
Scale your pandas workflows by changing one line of code
What is Modin?
Modin is a drop-in replacement for pandas. While pandas is single-threaded, Modin lets you instantly speed up your workflows by scaling pandas so it uses all of your cores. Modin works especially well on larger datasets, where pandas becomes painfully slow or runs out of memory. Modin also comes with additional APIs that improve the user experience.
By simply replacing the import statement, Modin offers users effortless speed and scale for their pandas workflows:
In the GIFs below, Modin (left) and pandas (right) perform the same pandas operations on a 2GB dataset. The only difference between the two notebook examples is the import statement.
The charts below show the speedup you get by replacing pandas with Modin based on the examples above. The example notebooks can be found here. To learn more about the speedups you could get with Modin, check out our 10-minute quickstart guide and try out some examples on your own!
Installation
From PyPI
Modin can be installed with pip on Linux, Windows, and MacOS:
pip install "modin[all]" # (Recommended) Install Modin with Ray and Dask engines.
If you want to install Modin with a specific engine, we recommend:
pip install "modin[ray]" # Install Modin dependencies and Ray.
pip install "modin[dask]" # Install Modin dependencies and Dask.
pip install "modin[mpi]" # Install Modin dependencies and MPI through unidist.
To get Modin on MPI through unidist (as of unidist 0.5.0) fully working, you must have a working MPI implementation installed beforehand. Otherwise, installation of modin[mpi] may fail. Refer to the Installing with pip section of the unidist documentation for more details about installation.
Note: Since Modin 0.30.0 we use a reduced set of Ray dependencies: ray instead of ray[default]. This means that the dashboard and cluster launcher are no longer installed by default. If you need those, consider installing ray[default] along with modin[ray].
Modin automatically detects which engine(s) you have installed and uses that for scheduling computation.
From conda-forge
Installing from conda forge using modin-all
will install Modin and three engines: Ray, Dask and
MPI through unidist.
conda install -c conda-forge modin-all
Each engine can also be installed individually (and also as a combination of several engines):
conda install -c conda-forge modin-ray # Install Modin dependencies and Ray.
conda install -c conda-forge modin-dask # Install Modin dependencies and Dask.
conda install -c conda-forge modin-mpi # Install Modin dependencies and MPI through unidist.
Note: Since Modin 0.30.0 we use a reduced set of Ray dependencies: ray-core instead of ray-default. This means that the dashboard and cluster launcher are no longer installed by default. If you need those, consider installing ray-default along with modin-ray.
Refer to Installing with conda section of the unidist documentation for more details on how to install a specific MPI implementation to run on.
To speed up conda installation we recommend using the libmamba solver. To do this, install it in the base environment:
conda install -n base conda-libmamba-solver
and then use it during installation, either like:
conda install -c conda-forge modin-ray --experimental-solver=libmamba
or, starting with conda 22.11 and libmamba solver 22.12:
conda install -c conda-forge modin-ray --solver=libmamba
Choosing a Compute Engine
If you want to choose a specific compute engine to run on, you can set the environment variable MODIN_ENGINE and Modin will do computation with that engine:
export MODIN_ENGINE=ray # Modin will use Ray
export MODIN_ENGINE=dask # Modin will use Dask
export MODIN_ENGINE=unidist # Modin will use Unidist
If you want to choose the Unidist engine, you should set the additional environment variable UNIDIST_BACKEND. Currently, Modin only supports MPI through unidist:
export UNIDIST_BACKEND=mpi # Unidist will use MPI backend
This can also be done within a notebook/interpreter before you import Modin:
import modin.config as modin_cfg
import unidist.config as unidist_cfg
modin_cfg.Engine.put("ray") # Modin will use Ray
modin_cfg.Engine.put("dask") # Modin will use Dask
modin_cfg.Engine.put('unidist') # Modin will use Unidist
unidist_cfg.Backend.put('mpi') # Unidist will use MPI backend
Note: You should not change the engine after your first operation with Modin as it will result in undefined behavior.
Which engine should I use?
On Linux, MacOS, and Windows you can install and use either Ray, Dask, or MPI through unidist. No special knowledge is required to use any of these engines, as Modin abstracts away all of the complexity, so feel free to pick any of them!
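If you are unsure which engine Modin auto-detected, a minimal sketch using the same modin.config interface shown above:
import modin.config as modin_cfg

# The engine is detected from what is installed; Engine.put() (shown above) overrides it.
print(modin_cfg.Engine.get())   # e.g. "Ray", "Dask" or "Unidist"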
Pandas API Coverage
pandas Object | Modin's Ray Engine Coverage | Modin's Dask Engine Coverage | Modin's Unidist Engine Coverage |
---|---|---|---|
pd.DataFrame | <img src=https://img.shields.io/badge/api%20coverage-90.8%25-hunter.svg> | <img src=https://img.shields.io/badge/api%20coverage-90.8%25-hunter.svg> | <img src=https://img.shields.io/badge/api%20coverage-90.8%25-hunter.svg> |
pd.Series | <img src=https://img.shields.io/badge/api%20coverage-88.05%25-green.svg> | <img src=https://img.shields.io/badge/api%20coverage-88.05%25-green.svg> | <img src=https://img.shields.io/badge/api%20coverage-88.05%25-green.svg> |
pd.read_csv | ✅ | ✅ | ✅ |
pd.read_table | ✅ | ✅ | ✅ |
pd.read_parquet | ✅ | ✅ | ✅ |
pd.read_sql | ✅ | ✅ | ✅ |
pd.read_feather | ✅ | ✅ | ✅ |
pd.read_excel | ✅ | ✅ | ✅ |
pd.read_json | ✳️ | ✳️ | ✳️ |
pd.read_<other> | ✴️ | ✴️ | ✴️ |
More about Modin
For the complete documentation on Modin, visit our ReadTheDocs page.
Scale your pandas workflow by changing a single line of code.
Note: In local mode (without a cluster), Modin will create and manage a local (Dask or Ray) cluster for the execution.
To use Modin, you do not need to specify how to distribute the data, or even know how many cores your system has. In fact, you can continue using your previous pandas notebooks while experiencing a considerable speedup from Modin, even on a single machine. Once you've changed your import statement, you're ready to use Modin just like you would with pandas!
Faster pandas, even on your laptop
The modin.pandas DataFrame is an extremely light-weight parallel DataFrame. Modin transparently distributes the data and computation so that you can continue using the same pandas API while working with more data, faster. Because it is so light-weight, Modin provides speed-ups of up to 4x on a laptop with 4 physical cores.
In pandas, you are only able to use one core at a time when you are doing computation of any kind. With Modin, you are able to use all of the CPU cores on your machine. Even with a traditionally synchronous task like read_csv, we see large speedups by efficiently distributing the work across your entire machine.
import modin.pandas as pd
df = pd.read_csv("my_dataset.csv")
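A rough timing sketch for the claim above, run against the same placeholder file; actual numbers depend heavily on core count, disk speed, and file size.
import time

import pandas
import modin.pandas as pd

start = time.perf_counter()
pandas_df = pandas.read_csv("my_dataset.csv")
print(f"pandas read_csv: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
modin_df = pd.read_csv("my_dataset.csv")
print(f"Modin read_csv:  {time.perf_counter() - start:.2f}s")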
Modin can handle the datasets that pandas can't
Often data scientists have to switch between different tools for operating on datasets of different sizes. Processing large dataframes with pandas is slow, and pandas does not support working with dataframes that are too large to fit into the available memory. As a result, pandas workflows that work well for prototyping on a few MBs of data do not scale to tens or hundreds of GBs (depending on the size of your machine). Modin supports operating on data that does not fit in memory, so that you can comfortably work with hundreds of GBs without worrying about substantial slowdown or memory errors. With cluster and out of core support, Modin is a DataFrame library with both great single-node performance and high scalability in a cluster.
Modin Architecture
We designed Modin's architecture to be modular so we can plug in different components as they develop and improve:
Other Resources
Getting Started with Modin
- Documentation
- 10-min Quickstart Guide
- Examples and Tutorials
- Videos and Blogposts
- Benchmarking Modin
Modin Community
Learn More about Modin
- Frequently Asked Questions (FAQs)
- Troubleshooting Guide
- Development Guide
- Modin is built on many years of research and development at UC Berkeley. Check out these selected papers to learn more about how Modin works:
- Flexible Rule-Based Decomposition and Metadata Independence in Modin (VLDB 2021)
- Dataframe Systems: Theory, Architecture, and Implementation (PhD Dissertation 2021)
- Towards Scalable Dataframe Systems (VLDB 2020)
Getting Involved
modin.pandas is currently under active development. Requests and contributions are welcome!
For more information on how to contribute to Modin, check out the Modin Contribution Guide.
License
Modin is licensed under the Apache License 2.0.