
google/differential-privacy

Google's differential privacy libraries.

Top Related Projects

  • PySyft: Perform data science on data that remains in someone else's server
  • TensorFlow Privacy: Library for training machine learning models with privacy for training data
  • Opacus: Training PyTorch models with differential privacy
  • Microsoft SEAL: an easy-to-use and powerful homomorphic encryption library

Quick Overview

Google's Differential Privacy library is an open-source project that provides a set of tools and algorithms for adding privacy guarantees to data analysis tasks. It implements differential privacy techniques to protect individual privacy while allowing meaningful statistical analysis on datasets.

Pros

  • Robust implementation of differential privacy algorithms
  • Supports multiple programming languages (C++, Go, Java)
  • Backed by Google's expertise in privacy and security
  • Includes various statistical functions and utilities

Cons

  • Steep learning curve for those new to differential privacy concepts
  • Limited documentation and examples for some features
  • May introduce performance overhead in large-scale data processing
  • Requires careful parameter tuning to balance privacy and utility (see the note after this list)
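
As a rough, back-of-the-envelope illustration of the tuning trade-off (standard Laplace-mechanism arithmetic, not a figure from the library's documentation): the Laplace mechanism adds noise with scale b = Δ/ε, where Δ is the query's L1 sensitivity, so the noise standard deviation is √2·Δ/ε. For a BoundedSum with contributions clamped to [0, 10] and one contribution per user, Δ = 10; at ε = 1.0 the noise standard deviation is about 14, and halving ε to 0.5 doubles it. Tighter bounds and larger ε give more accurate results at the cost of a weaker privacy guarantee.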

Code Examples

  1. Adding noise to a sum:
#include "differential_privacy/algorithms/bounded-sum.h"

std::vector<double> data = {1.0, 2.0, 3.0, 4.0, 5.0};
auto bounded_sum = BoundedSum<double>::Builder()
    .SetEpsilon(1.0)
    .SetLower(0)
    .SetUpper(10)
    .Build()
    .ValueOrDie();
Output result = bounded_sum->Result(data.begin(), data.end()).ValueOrDie();
double noisy_sum = GetValue<double>(result);
  2. Computing a private mean:
#include "differential_privacy/algorithms/bounded-mean.h"

std::vector<int> data = {10, 20, 30, 40, 50};
auto bounded_mean = BoundedMean<int>::Builder()
    .SetEpsilon(0.5)
    .SetLower(0)
    .SetUpper(100)
    .Build()
    .ValueOrDie();
Output result = bounded_mean->Result(data.begin(), data.end()).ValueOrDie();
double private_mean = GetValue<double>(result);
  3. Generating a histogram with differential privacy:
#include <algorithm>
#include <iterator>
#include <map>

#include "differential_privacy/algorithms/count.h"

std::vector<int> data = {1, 2, 2, 3, 3, 3, 4, 4, 5};
std::map<int, Output> histogram;
for (int bucket = 1; bucket <= 5; ++bucket) {
    // Collect the elements that fall into this bucket.
    std::vector<int> bucket_data;
    std::copy_if(data.begin(), data.end(), std::back_inserter(bucket_data),
                 [bucket](int x) { return x == bucket; });
    // Build a fresh Count for each bucket and release a noisy count.
    auto count = Count<int>::Builder()
        .SetEpsilon(1.0)
        .Build()
        .ValueOrDie();
    histogram[bucket] = count->Result(bucket_data.begin(), bucket_data.end()).ValueOrDie();
}

Getting Started

  1. Clone the repository:

    git clone https://github.com/google/differential-privacy.git
    
  2. Install Bazel and Git, if you don't have them already (see "How to Build" in the README below for the supported Bazel version).

  3. Build the C++ library with Bazel (note: ... is part of the command, not a placeholder):

    cd differential-privacy/cc
    bazel build ...

  4. Include the necessary headers in your C++ project and depend on the built library targets.
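
To see how the pieces fit together, here is a minimal, self-contained program sketch based on the snippets above (a sketch only: the include path and the ValueOrDie()-style error handling follow the examples on this page and may differ between library versions):

#include <iostream>
#include <vector>

#include "differential_privacy/algorithms/bounded-sum.h"

using differential_privacy::BoundedSum;
using differential_privacy::GetValue;
using differential_privacy::Output;

int main() {
  std::vector<double> data = {1.0, 2.0, 3.0, 4.0, 5.0};

  // Clamp each contribution to [0, 10] and spend epsilon = 1.0 on this query.
  auto bounded_sum = BoundedSum<double>::Builder()
                         .SetEpsilon(1.0)
                         .SetLower(0)
                         .SetUpper(10)
                         .Build()
                         .ValueOrDie();

  Output result = bounded_sum->Result(data.begin(), data.end()).ValueOrDie();
  std::cout << "Noisy sum: " << GetValue<double>(result) << std::endl;
  return 0;
}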

Competitor Comparisons

PySyft: Perform data science on data that remains in someone else's server

Pros of PySyft

  • Broader focus on privacy-preserving machine learning, including federated learning and secure multi-party computation
  • More user-friendly Python interface, making it accessible to data scientists
  • Active community and frequent updates

Cons of PySyft

  • Less specialized in differential privacy compared to Differential-privacy
  • May have a steeper learning curve for those specifically interested in differential privacy

Code Comparison

PySyft example:

import torch
import syft as sy

# Hook PyTorch so tensors gain the .send()/.get() methods (PySyft 0.2-era API)
hook = sy.TorchHook(torch)

# Create virtual workers
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")

# Create and send tensors to workers
x = torch.tensor([1, 2, 3, 4]).send(alice)
y = torch.tensor([5, 6, 7, 8]).send(bob)

Differential-privacy example:

#include "differential_privacy/algorithms/bounded-sum.h"

std::unique_ptr<BoundedSum<int64_t>> bounded_sum =
    BoundedSum<int64_t>::Builder()
        .SetEpsilon(1.0)
        .SetLower(0)
        .SetUpper(10)
        .Build()
        .ValueOrDie();

PySyft offers a more Python-centric approach, focusing on distributed machine learning, while Differential-privacy provides a lower-level C++ implementation specifically for differential privacy algorithms.

TensorFlow Privacy: Library for training machine learning models with privacy for training data

Pros of TensorFlow Privacy

  • Specifically designed for machine learning applications, particularly with TensorFlow
  • Offers machine-learning-specific tooling beyond the core DP mechanisms, such as DP optimizers and empirical privacy tests
  • Integrates seamlessly with existing TensorFlow workflows

Cons of TensorFlow Privacy

  • More complex to use for non-TensorFlow projects
  • Less general-purpose than Differential Privacy
  • May have a steeper learning curve for those not familiar with TensorFlow

Code Comparison

Differential Privacy:

std::unique_ptr<BoundedMean<int>> mean = BoundedMean<int>::Builder()
    .SetEpsilon(1.0)
    .SetLower(0)
    .SetUpper(100)
    .Build()
    .ValueOrDie();

TensorFlow Privacy:

from tensorflow_privacy.privacy.optimizers import dp_optimizer_keras

dp_optimizer = dp_optimizer_keras.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=0.1,
    num_microbatches=1,
    learning_rate=0.1)

Opacus: Training PyTorch models with differential privacy

Pros of Opacus

  • Specifically designed for PyTorch, offering seamless integration with PyTorch models
  • Provides easy-to-use APIs for adding differential privacy to deep learning models
  • Includes built-in support for privacy accounting and adaptive clipping

Cons of Opacus

  • Limited to PyTorch ecosystem, less versatile for other frameworks or languages
  • Focuses primarily on deep learning applications, may not be ideal for other DP use cases
  • Less comprehensive documentation compared to Differential-privacy

Code Comparison

Opacus:

import torch
from opacus import PrivacyEngine

model = MyModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,
)

Differential-privacy:

std::unique_ptr<BoundedMean<int64_t>> mean = BoundedMean<int64_t>::Builder()
    .SetEpsilon(1.0)
    .SetLower(0)
    .SetUpper(100)
    .Build()
    .ValueOrDie();
Output result = mean->Result(input_data.begin(), input_data.end()).ValueOrDie();

Microsoft SEAL: an easy-to-use and powerful homomorphic encryption library

Pros of SEAL

  • Focuses on homomorphic encryption, offering more advanced cryptographic capabilities
  • Provides a comprehensive C++ library with .NET wrappers for broader language support
  • Offers better performance for certain privacy-preserving computations

Cons of SEAL

  • Steeper learning curve due to its focus on advanced cryptographic techniques
  • Less emphasis on differential privacy, which may be more suitable for some use cases
  • Requires more computational resources for complex operations

Code Comparison

SEAL example (C++):

Encryptor encryptor(context, public_key);
Plaintext plain;
encoder.encode(5.0, scale, plain);   // CKKS-encode the value at a chosen scale
Ciphertext encrypted;
encryptor.encrypt(plain, encrypted);

Differential-privacy example (C++):

std::unique_ptr<BoundedMean<int>> mean = BoundedMean<int>::Builder()
    .SetEpsilon(1.0)
    .SetLower(0)
    .SetUpper(100)
    .Build()
    .ValueOrDie();

Summary

SEAL focuses on homomorphic encryption, offering advanced cryptographic capabilities but with a steeper learning curve. Differential-privacy provides a more accessible approach to privacy-preserving computations, with a specific focus on differential privacy techniques. SEAL may be more suitable for complex cryptographic operations, while Differential-privacy is better for straightforward privacy-preserving data analysis.

README

Differential Privacy

Note
If you are unfamiliar with differential privacy (DP), you might want to go through "A friendly, non-technical introduction to differential privacy".

This repository contains libraries to generate ε- and (ε, δ)-differentially private statistics over datasets. It contains the following tools.

  • Privacy on Beam is an end-to-end differential privacy framework built on top of Apache Beam. It is intended to be easy to use, even by non-experts.
  • Three "DP building block" libraries, in C++, Go, and Java. These libraries implement basic noise addition primitives and differentially private aggregations. Privacy on Beam is implemented using these libraries.
  • A stochastic tester, used to help catch regressions that could make the differential privacy property no longer hold.
  • A differential privacy accounting library, used for tracking privacy budget.
  • A command line interface for running differentially private SQL queries with ZetaSQL.
  • DP Auditorium is a library for auditing differential privacy guarantees.

To get started on generating differentially private data, we recommend you follow the Privacy on Beam codelab.

Currently, the DP building block libraries support the following algorithms:

Algorithm                        | C++       | Go        | Java
-------------------------------- | --------- | --------- | ---------
Laplace mechanism                | Supported | Supported | Supported
Gaussian mechanism               | Supported | Supported | Supported
Count                            | Supported | Supported | Supported
Sum                              | Supported | Supported | Supported
Mean                             | Supported | Supported | Supported
Variance                         | Supported | Supported | Supported
Standard deviation               | Supported | Supported | Planned
Quantiles                        | Supported | Supported | Supported
Automatic bounds approximation   | Supported | Planned   | Supported
Truncated geometric thresholding | Supported | Supported | Supported
Laplace thresholding             | Supported | Supported | Supported
Gaussian thresholding            | Planned   | Supported | Supported
Pre-thresholding                 | Supported | Supported | Supported

Implementations of the Laplace mechanism and the Gaussian mechanism use secure noise generation. These mechanisms can be used to perform computations that aren't covered by the algorithms implemented in our libraries.
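
For example, the noise primitives can be applied directly to a statistic you computed yourself. A minimal C++ sketch, assuming the LaplaceMechanism builder from the C++ building block library (the header path, the sensitivity setter, and the error-handling style may differ between releases):

#include "algorithms/numerical-mechanisms.h"

using differential_privacy::LaplaceMechanism;

// A statistic computed outside the library; assume one user can change it
// by at most 3.0 (its L1 sensitivity).
double raw_statistic = 42.0;

auto mechanism = LaplaceMechanism::Builder()
                     .SetEpsilon(1.0)
                     .SetL1Sensitivity(3.0)
                     .Build()
                     .ValueOrDie();

// Securely sampled Laplace noise with scale sensitivity / epsilon.
double noisy_statistic = mechanism->AddNoise(raw_statistic);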

The DP building block libraries and Privacy on Beam are suitable for research, experimental, or production use cases, while the other tools are currently experimental and subject to change.

How to Build

In order to build and run the differential privacy library, you need to install Bazel version 5.3.2, if you don't have it already. Follow the instructions for your platform on the Bazel website.

You also need to install Git, if you don't have it already. Follow the instructions for your platform on the Git website.

Once you've installed Bazel and Git, open a Terminal and clone the differential privacy directory into a local folder:

git clone https://github.com/google/differential-privacy.git

Navigate into the differential-privacy folder you just created, and build the differential privacy library and dependencies using Bazel (note: ... is a part of the command and not a placeholder):

To build the C++ library, run:

cd cc
bazel build ...

To build the Go library, run:

cd go
bazel build ...

To build the Java library, run:

cd java
bazel build ...

To build Privacy on Beam, run:

cd privacy-on-beam
bazel build ...

You may need to install additional dependencies when building the PostgreSQL extension; for example, on Ubuntu you will need these packages:

sudo apt-get install make libreadline-dev bison flex

Caveats of the DP building block libraries

Differential privacy requires some bound on the maximum number of contributions each user can make to a single aggregation. The DP building block libraries don't perform such bounding: their implementation assumes that each user contributes only a fixed number of rows to each partition. That number can be configured by the user. The library neither verifies nor enforces this limit; it is the caller's responsibility to pre-process the data to enforce it.

We chose not to implement this step at the DP building block level because it requires some global operation over the data: group by user, and aggregate or subsample the contributions of each user before passing them on to the DP building block aggregators. Given scalability constraints, this pre-processing must be done by a higher-level part of the infrastructure, typically a distributed processing framework: for example, Privacy on Beam relies on Apache Beam for this operation.
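
For illustration, here is a framework-agnostic C++ sketch of that pre-processing step (ContributionRecord and BoundContributions are hypothetical names for this example, not part of the library): group rows by user and keep at most a fixed number of rows per user before handing the values to a DP aggregator.

#include <map>
#include <string>
#include <vector>

// Hypothetical input row; not part of the library.
struct ContributionRecord {
  std::string user_id;
  double value;
};

// Keep at most max_rows_per_user rows for each user. In a real pipeline this
// "group by user and truncate" step runs in a distributed framework such as
// Apache Beam, as described above.
std::vector<double> BoundContributions(
    const std::vector<ContributionRecord>& rows, int max_rows_per_user) {
  std::map<std::string, int> rows_seen;
  std::vector<double> bounded;
  for (const auto& row : rows) {
    if (rows_seen[row.user_id] < max_rows_per_user) {
      ++rows_seen[row.user_id];
      bounded.push_back(row.value);
    }
  }
  return bounded;
}

The truncation limit used here should match the per-partition contribution bound configured on the DP aggregator, so that the privacy analysis and the data actually agree.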

For more detail about our approach to building scalable end-to-end differential privacy frameworks, we recommend reading:

  1. Differential privacy computations in data pipelines reference doc, which describes how to build such a system using any data pipeline framework (e.g. Apache Beam).
  2. Our paper about differentially private SQL, which describes such a system. Even though the interface of Privacy on Beam is different, it conceptually uses the same framework as the one described in this paper.

Known issues

Our floating-point implementations are subject to the vulnerabilities described in Casacuberta et al. "Widespread Underestimation of Sensitivity in Differentially Private Libraries and How to Fix it" (specifically the rounding, repeated rounding, and re-ordering attacks). These vulnerabilities are particularly concerning when an attacker can control some of the contents of a dataset and/or its order. Our integer implementations are not subject to the vulnerabilities described in the paper (though note that Java does not have an integer implementation).
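
One practical mitigation (an illustrative sketch, not a recommendation taken from the library's documentation) is to prefer the integer instantiations of the aggregations when the underlying data is integral, for example an int64 bounded sum rather than a double one:

#include <cstdint>
#include <vector>

#include "differential_privacy/algorithms/bounded-sum.h"

using differential_privacy::BoundedSum;

// Integer-valued data; the int64_t instantiation is one of the integer
// implementations referred to above.
std::vector<int64_t> data = {1, 2, 3, 4, 5};

auto bounded_sum = BoundedSum<int64_t>::Builder()
                       .SetEpsilon(1.0)
                       .SetLower(0)
                       .SetUpper(10)
                       .Build()
                       .ValueOrDie();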

Support

We will continue to publish updates and improvements to the library. We are happy to accept contributions to this project. Please follow our guidelines when sending pull requests. We will respond to issues filed in this project. If we intend to stop publishing improvements and responding to issues, we will publish a notice here at least 3 months in advance.

License

Apache License 2.0

Support Disclaimer

This is not an officially supported Google product.

Reach out

We are always keen on learning about how you use this library and what use cases it helps you to solve. We have two communication channels:

Please refrain from sending any personal identifiable information. If you wish to delete a message you've previously sent, please contact us.

Related projects

  • PyDP, a Python wrapper of our C++ DP building block library, driven by the OpenMined open-source community.
  • PipelineDP, an end-to-end differential privacy framework (similar to Privacy on Beam) that works with Apache Beam & Apache Spark in Python, co-developed by Google and OpenMined.
  • OpenDP, a community effort around tools for statistical analysis of sensitive private data.
  • TensorFlow Privacy, a library to train machine learning models with differential privacy.