Top Related Projects
- Great Expectations: Always know what to expect from your data.
- Pydantic: Data validation using Python type hints.
- Pandera: A light-weight, flexible, and expressive statistical data testing library.
- pyjanitor: Clean APIs for data cleaning; a Python implementation of the R package janitor.
- SDV: Synthetic data generation for tabular data.
- Intake: A lightweight package for finding, investigating, loading, and disseminating data.
Quick Overview
Pandera is a statistical data validation toolkit for Python that provides a flexible and expressive API for defining data schemas and validating pandas DataFrames. It allows users to define schema objects that can be used to validate data, generate synthetic data, and create data types with built-in validation.
Pros
- Seamless integration with pandas and numpy ecosystems
- Supports both runtime validation and static type checking
- Provides informative error messages for failed validations
- Allows for custom validation functions and hypothesis strategies
Cons
- May introduce performance overhead for large datasets
- Learning curve for users unfamiliar with schema validation concepts
- Limited support for non-pandas data structures
- Some advanced features require additional dependencies
Code Examples
- Defining a simple schema:
import pandera as pa
schema = pa.DataFrameSchema({
    "column1": pa.Column(int, pa.Check.greater_than(0)),
    "column2": pa.Column(str, pa.Check.isin(["A", "B", "C"])),
    "column3": pa.Column(float, pa.Check.in_range(0, 1))
})
- Validating a DataFrame:
import pandas as pd
df = pd.DataFrame({
    "column1": [1, 2, 3],
    "column2": ["A", "B", "C"],
    "column3": [0.1, 0.5, 0.9]
})
validated_df = schema.validate(df)
- Using decorators for function input validation:
@pa.check_input(schema)
def process_data(df: pd.DataFrame) -> pd.DataFrame:
    # Your data processing logic here
    return df
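- Using decorators for function output validation (a minimal sketch; pa.check_output is the counterpart of pa.check_input, and the output schema below is hypothetical):
import pandas as pd
import pandera as pa

# hypothetical schema describing the function's return value
out_schema = pa.DataFrameSchema({
    "column1": pa.Column(int),
    "column1_doubled": pa.Column(int)
})

@pa.check_output(out_schema)
def double_column1(df: pd.DataFrame) -> pd.DataFrame:
    # the returned DataFrame is validated against out_schema
    return df.assign(column1_doubled=df["column1"] * 2)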
Getting Started
To get started with Pandera, install it using pip:
pip install pandera
Then, import the library and create a simple schema:
import pandera as pa
import pandas as pd
schema = pa.DataFrameSchema({
    "name": pa.Column(str),
    "age": pa.Column(int, pa.Check.greater_than(0)),
    "city": pa.Column(str, pa.Check.isin(["New York", "London", "Tokyo"]))
})
df = pd.DataFrame({
    "name": ["Alice", "Bob", "Charlie"],
    "age": [25, 30, 35],
    "city": ["New York", "London", "Paris"]
})
try:
    validated_df = schema.validate(df)
except pa.errors.SchemaError as e:
    print(f"Validation failed: {e}")
This example creates a simple schema, defines a DataFrame, and attempts to validate it against the schema. Here validation fails, because "Paris" is not in the allowed set for the "city" column, so the error message is printed.
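By default, validate raises on the first schema error it encounters. To collect every failure before raising, pandera also supports lazy validation; a minimal sketch building on the schema and DataFrame above:
try:
    schema.validate(df, lazy=True)
except pa.errors.SchemaErrors as err:
    # failure_cases is a DataFrame listing every failed check and offending value
    print(err.failure_cases)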
Competitor Comparisons
Great Expectations: Always know what to expect from your data.
Pros of Great Expectations
- More comprehensive data validation framework with a wider range of built-in expectations
- Supports multiple data sources including databases, cloud storage, and file systems
- Provides data documentation and profiling capabilities
Cons of Great Expectations
- Steeper learning curve due to its more complex architecture
- Heavier setup and configuration process
- Can be overkill for simpler data validation tasks
Code Comparison
Great Expectations:
import great_expectations as ge
df = ge.read_csv("data.csv")
df.expect_column_values_to_be_between("age", min_value=0, max_value=120)
Pandera:
import pandera as pa
schema = pa.DataFrameSchema({
    "age": pa.Column(int, pa.Check.in_range(0, 120))
})
schema.validate(df)
Both libraries offer data validation capabilities, but Great Expectations provides a more comprehensive framework with additional features, while Pandera focuses on simplicity and ease of use for DataFrame validation.
Pydantic: Data validation using Python type hints.
Pros of Pydantic
- Broader scope, supporting general data validation and serialization
- Integrated with FastAPI for API development
- More extensive ecosystem and community support
Cons of Pydantic
- Less specialized for pandas DataFrame validation
- May require more setup for complex DataFrame schemas
Code Comparison
Pydantic:
from pydantic import BaseModel, Field
class User(BaseModel):
    id: int
    name: str = Field(..., min_length=1)
    age: int = Field(..., ge=0, le=120)
Pandera:
import pandera as pa
schema = pa.DataFrameSchema({
    "id": pa.Column(int),
    "name": pa.Column(str, pa.Check.str_length(min_value=1)),
    "age": pa.Column(int, pa.Check.in_range(0, 120))
})
Pydantic is more general-purpose, while Pandera is tailored for DataFrame validation. Pydantic's syntax is class-based, whereas Pandera uses a more DataFrame-centric approach. Both libraries offer robust data validation, but Pandera's focus on DataFrames makes it more intuitive for pandas users working with tabular data.
Pandera: A light-weight, flexible, and expressive statistical data testing library.
Pros and Cons of Pandera
This comparison entry points at pandera itself: both listings share the same codebase, so a pros/cons comparison is not meaningful. Functionality, performance, documentation, and community support are identical by definition.
Code Comparison
Both repositories contain the same codebase, so a code comparison is not applicable. Here's a sample of how to use Pandera in both cases:
import pandera as pa
schema = pa.DataFrameSchema({
    "column1": pa.Column(int),
    "column2": pa.Column(float, pa.Check.greater_than(0)),
    "column3": pa.Column(str, pa.Check.isin(["A", "B", "C"]))
})
validated_df = schema.validate(df)
This code would work identically in both repositories, as they are the same project.
pyjanitor: Clean APIs for data cleaning; a Python implementation of the R package janitor.
Pros of pyjanitor
- Focuses on data cleaning and preparation tasks with a wide range of functions
- Provides a more intuitive API for common data manipulation operations
- Integrates well with pandas and extends its functionality
Cons of pyjanitor
- Less emphasis on data validation compared to Pandera
- May have a steeper learning curve for users not familiar with method chaining
- Limited schema definition capabilities
Code Comparison
pyjanitor:
import janitor
import pandas as pd
df = pd.DataFrame(...)
cleaned_df = (
    df.clean_names()    # standardize column names to snake_case
    .remove_empty()     # drop rows and columns that are entirely empty
    # further steps like drop_duplicate_columns() or encode_categorical()
    # require column arguments in current pyjanitor releases
)
Pandera:
import pandera as pa
schema = pa.DataFrameSchema({
    "column1": pa.Column(int, nullable=False),
    "column2": pa.Column(str, checks=pa.Check.str_length(1, 100))
})
validated_df = schema.validate(df)
The code examples highlight the different focus areas of each library. pyjanitor emphasizes data cleaning operations, while Pandera focuses on schema definition and validation.
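In practice the two libraries are complementary; a brief sketch (with hypothetical column names) that cleans a DataFrame with pyjanitor and then validates it with Pandera:
import janitor  # registers cleaning methods on pandas DataFrames
import pandas as pd
import pandera as pa

raw = pd.DataFrame({"User Name": ["Alice", "Bob"], "Age": [25, 30]})

schema = pa.DataFrameSchema({
    "user_name": pa.Column(str),
    "age": pa.Column(int, pa.Check.ge(0))
})

# clean_names() lowercases and snake_cases column names, so "User Name"
# becomes "user_name" before the schema is applied
validated = schema.validate(raw.clean_names())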
SDV: Synthetic data generation for tabular data.
Pros of SDV
- Comprehensive synthetic data generation capabilities
- Supports multiple data types (tabular, time series, relational)
- Includes advanced features like privacy preservation and constraints
Cons of SDV
- Steeper learning curve due to more complex functionality
- May be overkill for simple data validation tasks
- Potentially slower performance for large datasets
Code Comparison
SDV (Synthetic Data Generation):
# SDV 0.x tabular API; newer SDV releases use sdv.single_table synthesizers instead
from sdv.tabular import GaussianCopula

model = GaussianCopula()
model.fit(real_data)
synthetic_data = model.sample(num_rows=1000)
Pandera (Data Validation):
import pandera as pa
schema = pa.DataFrameSchema({
    'column1': pa.Column(int),
    'column2': pa.Column(str, pa.Check.str_length(1, 100))
})
validated_df = schema.validate(df)
SDV focuses on generating synthetic data that mimics real datasets, while Pandera specializes in data validation and schema enforcement. SDV offers more comprehensive data generation capabilities, but Pandera provides a simpler and more lightweight approach to ensuring data quality and consistency.
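For lighter-weight needs, pandera can itself synthesize schema-conforming data through its hypothesis-based strategies (installed via pandera[strategies]); a minimal sketch:
import pandera as pa

schema = pa.DataFrameSchema({
    "column1": pa.Column(int, pa.Check.in_range(0, 120)),
    "column2": pa.Column(str, pa.Check.isin(["A", "B", "C"]))
})

# draws a small DataFrame whose values satisfy the schema's types and checks
synthetic_df = schema.example(size=5)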
Intake: A lightweight package for finding, investigating, loading, and disseminating data.
Pros of Intake
- Broader data source support, including remote and cloud-based sources
- Flexible catalog system for organizing and discovering data assets
- Built-in data visualization capabilities
Cons of Intake
- Less focused on data validation and schema enforcement
- May require more setup for complex data pipelines
- Limited support for advanced statistical checks
Code Comparison
Intake:
import intake
catalog = intake.open_catalog("my_catalog.yml")
dataset = catalog.my_dataset.read()
Pandera:
import pandera as pa
schema = pa.DataFrameSchema({
    "column1": pa.Column(int),
    "column2": pa.Column(str)
})
validated_df = schema.validate(df)
Intake focuses on data discovery and access, while Pandera emphasizes data validation and schema enforcement. Intake's code demonstrates catalog-based data loading, whereas Pandera's code shows schema definition and validation for pandas DataFrames.
README
The Open-source Framework for Precision Data Testing
Data validation for scientists, engineers, and analysts seeking correctness.
pandera is a Union.ai open source project that provides a flexible and expressive API for performing data validation on dataframe-like objects, making data processing pipelines more readable and robust.
Dataframes contain information that pandera explicitly validates at runtime. This is useful in production-critical or reproducible research settings. With pandera, you can:
- Define a schema once and use it to validate different dataframe types including pandas, polars, dask, modin, and pyspark.
- Check the types and properties of columns in a DataFrame or values in a Series.
- Perform more complex statistical validation like hypothesis testing.
- Parse data to standardize the preprocessing steps needed to produce valid data (see the sketch after this list).
- Seamlessly integrate with existing data analysis/processing pipelines via function decorators.
- Define dataframe models with the class-based API with pydantic-style syntax and validate dataframes using the typing syntax.
- Synthesize data from schema objects for property-based testing with pandas data structures.
- Lazily validate dataframes so that all validation checks are executed before raising an error.
- Integrate with a rich ecosystem of python tools like pydantic, fastapi, and mypy.
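As a brief illustration of the parsing point above, setting coerce=True on a column converts raw values to the declared dtype before checks run; a minimal sketch with made-up data:
import pandas as pd
import pandera as pa

schema = pa.DataFrameSchema({
    # coerce=True parses the raw strings into floats before validation
    "amount": pa.Column(float, pa.Check.ge(0), coerce=True)
})

raw = pd.DataFrame({"amount": ["1.5", "2.0", "3.25"]})
validated = schema.validate(raw)  # "amount" now has dtype float64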
Documentation
The official documentation is hosted here: https://pandera.readthedocs.io
Install
Using pip:
pip install pandera
Using conda:
conda install -c conda-forge pandera
Extras
Installing additional functionality:
pip
pip install 'pandera[hypotheses]' # hypothesis checks
pip install 'pandera[io]' # yaml/script schema io utilities
pip install 'pandera[strategies]' # data synthesis strategies
pip install 'pandera[mypy]' # enable static type-linting of pandas
pip install 'pandera[fastapi]' # fastapi integration
pip install 'pandera[dask]' # validate dask dataframes
pip install 'pandera[pyspark]' # validate pyspark dataframes
pip install 'pandera[modin]' # validate modin dataframes
pip install 'pandera[modin-ray]' # validate modin dataframes with ray
pip install 'pandera[modin-dask]' # validate modin dataframes with dask
pip install 'pandera[geopandas]' # validate geopandas geodataframes
pip install 'pandera[polars]' # validate polars dataframes
conda
conda install -c conda-forge pandera-hypotheses # hypothesis checks
conda install -c conda-forge pandera-io # yaml/script schema io utilities
conda install -c conda-forge pandera-strategies # data synthesis strategies
conda install -c conda-forge pandera-mypy # enable static type-linting of pandas
conda install -c conda-forge pandera-fastapi # fastapi integration
conda install -c conda-forge pandera-dask # validate dask dataframes
conda install -c conda-forge pandera-pyspark # validate pyspark dataframes
conda install -c conda-forge pandera-modin # validate modin dataframes
conda install -c conda-forge pandera-modin-ray # validate modin dataframes with ray
conda install -c conda-forge pandera-modin-dask # validate modin dataframes with dask
conda install -c conda-forge pandera-geopandas # validate geopandas geodataframes
conda install -c conda-forge pandera-polars # validate polars dataframes
Quick Start
import pandas as pd
import pandera as pa
# data to validate
df = pd.DataFrame({
    "column1": [1, 4, 0, 10, 9],
    "column2": [-1.3, -1.4, -2.9, -10.1, -20.4],
    "column3": ["value_1", "value_2", "value_3", "value_2", "value_1"]
})
# define schema
schema = pa.DataFrameSchema({
    "column1": pa.Column(int, checks=pa.Check.le(10)),
    "column2": pa.Column(float, checks=pa.Check.lt(-1.2)),
    "column3": pa.Column(str, checks=[
        pa.Check.str_startswith("value_"),
        # define custom checks as functions that take a series as input and
        # output a boolean or boolean Series
        pa.Check(lambda s: s.str.split("_", expand=True).shape[1] == 2)
    ]),
})
validated_df = schema(df)
print(validated_df)
# column1 column2 column3
# 0 1 -1.3 value_1
# 1 4 -1.4 value_2
# 2 0 -2.9 value_3
# 3 10 -10.1 value_2
# 4 9 -20.4 value_1
DataFrame Model
pandera also provides an alternative API for expressing schemas, inspired by dataclasses and pydantic. The equivalent DataFrameModel for the above DataFrameSchema would be:
from pandera.typing import Series

class Schema(pa.DataFrameModel):
    column1: int = pa.Field(le=10)
    column2: float = pa.Field(lt=-1.2)
    column3: str = pa.Field(str_startswith="value_")

    @pa.check("column3")
    def column_3_check(cls, series: Series[str]) -> Series[bool]:
        """Check that values have two elements after being split with '_'"""
        return series.str.split("_", expand=True).shape[1] == 2

Schema.validate(df)
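A DataFrameModel also plugs into typed function signatures: decorating a function with pa.check_types validates any argument or return value annotated with pandera's generic DataFrame type. A minimal sketch reusing the Schema class above:
from pandera.typing import DataFrame

@pa.check_types
def process(data: DataFrame[Schema]) -> DataFrame[Schema]:
    # both the input and the returned DataFrame are validated against Schema
    return data

process(df)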
Development Installation
git clone https://github.com/pandera-dev/pandera.git
cd pandera
export PYTHON_VERSION=... # specify desired python version
pip install -r dev/requirements-${PYTHON_VERSION}.txt
pip install -e .
Tests
pip install pytest
pytest tests
Contributing to pandera
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
A detailed overview on how to contribute can be found in the contributing guide on GitHub.
Issues
Submit feature requests and bug reports on the GitHub issue tracker.
Need Help?
There are many ways of getting help with your questions. You can ask a question on the GitHub Discussions page or reach out to the maintainers and the pandera community on Discord.
Why pandera?
- Dataframe-centric data types, column nullability, and uniqueness are first-class concepts.
- Define dataframe models with the class-based API with pydantic-style syntax and validate dataframes using the typing syntax.
- check_input and check_output decorators enable seamless integration with existing code.
- Checks provide flexibility and performance by giving access to the pandas API by design, and offer built-in checks for common data tests.
- The Hypothesis class provides a tidy-first interface for statistical hypothesis testing.
- Checks and Hypothesis objects support both tidy and wide data validation.
- Use schemas as generative contracts to synthesize data for unit testing.
- Schema inference allows you to bootstrap schemas from data (see the sketch after this list).
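As a quick illustration of the schema inference bullet above, pa.infer_schema bootstraps a schema from existing data, which you can then refine by hand; a minimal sketch with made-up data:
import pandas as pd
import pandera as pa

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
inferred_schema = pa.infer_schema(df)
print(inferred_schema)  # a DataFrameSchema with inferred dtypes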
How to Cite
If you use pandera in the context of academic or industry research, please consider citing the paper and/or software package.
Paper
@InProceedings{ niels_bantilan-proc-scipy-2020,
  author    = { {N}iels {B}antilan },
  title     = { pandera: {S}tatistical {D}ata {V}alidation of {P}andas {D}ataframes },
  booktitle = { {P}roceedings of the 19th {P}ython in {S}cience {C}onference },
  pages     = { 116 - 124 },
  year      = { 2020 },
  editor    = { {M}eghann {A}garwal and {C}hris {C}alloway and {D}illon {N}iederhut and {D}avid {S}hupe },
  doi       = { 10.25080/Majora-342d178e-010 }
}
Software Package
License and Credits
pandera is licensed under the MIT license and is written and maintained by Niels Bantilan (niels@union.ai).