ydataai / ydata-synthetic

Synthetic data generators for tabular and time-series data

Top Related Projects

  • SDV: Synthetic data generation for tabular data
  • ydata-profiling: 1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames
  • fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python
  • ludwig: Low-code framework for building custom LLMs, neural networks, and other AI models
  • CNTK: Microsoft Cognitive Toolkit, an open source deep-learning toolkit
  • TensorFlow: An Open Source Machine Learning Framework for Everyone

Quick Overview

YData Synthetic is an open-source library for generating synthetic data using various machine learning techniques, including GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). It aims to create high-quality synthetic data that preserves the statistical properties of the original dataset while supporting privacy protection and data augmentation.

Pros

  • Offers multiple synthetic data generation techniques, including GANs and VAEs
  • Supports both tabular and time series data
  • Provides pre-processing and post-processing utilities for data handling
  • Includes privacy preservation features to protect sensitive information

Cons

  • Limited documentation and examples for some advanced features
  • Requires a good understanding of machine learning concepts for optimal use
  • May have a steeper learning curve for users new to synthetic data generation
  • Performance can vary depending on the complexity and size of the input data

Code Examples

  1. Defining the model and training parameters:

# API per the ydata-synthetic 1.x documentation; names may differ in other versions
from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
from ydata_synthetic.synthesizers.regular import RegularSynthesizer

# Model and training configuration for a CTGAN-based synthesizer
model_args = ModelParameters(batch_size=500, lr=2e-4, betas=(0.5, 0.9))
train_args = TrainParameters(epochs=300)

  2. Training the synthesizer:

# num_cols and cat_cols list the numerical and categorical columns of df
synthesizer = RegularSynthesizer(modelname='ctgan', model_parameters=model_args)
synthesizer.fit(data=df, train_arguments=train_args, num_cols=num_cols, cat_cols=cat_cols)

  3. Generating synthetic data:

# Samples are returned in the original data format; preprocessing and
# inverse transformation are handled internally by the synthesizer
synthetic_data = synthesizer.sample(1000)

Getting Started

To get started with YData Synthetic, follow these steps:

  1. Install the library:
pip install ydata-synthetic
  2. Import the necessary modules, load your data, and train a synthesizer:

import pandas as pd

from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
from ydata_synthetic.synthesizers.regular import RegularSynthesizer

# Load your data and list its numerical and categorical columns
# (the column names below are placeholders for your own dataset)
df = pd.read_csv('your_data.csv')
num_cols = ['numeric_column_1', 'numeric_column_2']
cat_cols = ['categorical_column_1']

# Train a CTGAN-based synthesizer
synthesizer = RegularSynthesizer(modelname='ctgan', model_parameters=ModelParameters())
synthesizer.fit(data=df, train_arguments=TrainParameters(epochs=300),
                num_cols=num_cols, cat_cols=cat_cols)

# Generate synthetic data in the original format
synthetic_data = synthesizer.sample(1000)

This quick start guide demonstrates how to load your data, train a GAN-based synthesizer, and generate synthetic samples.
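
As a quick sanity check on the output, you can compare basic statistics of the real and synthetic tables. The snippet below is a minimal sketch that relies only on pandas and assumes the df, num_cols, cat_cols, and synthetic_data objects from the steps above.

import pandas as pd

# Compare per-column means of the numerical features
stats = pd.DataFrame({
    'real_mean': df[num_cols].mean(),
    'synthetic_mean': synthetic_data[num_cols].mean(),
})
print(stats)

# Compare category frequencies for the first categorical column
print(df[cat_cols[0]].value_counts(normalize=True))
print(synthetic_data[cat_cols[0]].value_counts(normalize=True))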

Competitor Comparisons

SDV

Synthetic data generation for tabular data

Pros of SDV

  • More comprehensive suite of synthetic data generation tools
  • Better documentation and tutorials for beginners
  • Larger community and more frequent updates

Cons of SDV

  • Can be slower for large datasets
  • More complex setup and configuration
  • Steeper learning curve for advanced features

Code Comparison

SDV:

# SDV 1.x single-table workflow
from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(data)
synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(data)
synthetic_data = synthesizer.sample(num_rows=1000)

ydata-synthetic:

from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
from ydata_synthetic.synthesizers.regular import RegularSynthesizer

synthesizer = RegularSynthesizer(modelname='ctgan', model_parameters=ModelParameters())
synthesizer.fit(data=data, train_arguments=TrainParameters(epochs=300),
                num_cols=num_cols, cat_cols=cat_cols)
synthetic_data = synthesizer.sample(1000)

Both libraries offer straightforward APIs for generating synthetic data, but SDV requires more setup with metadata definition. ydata-synthetic provides a more streamlined approach for single-table scenarios.

ydata-profiling

1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.

Pros of ydata-profiling

  • Comprehensive data profiling and reporting capabilities
  • Generates interactive HTML reports for easy data exploration
  • Supports various data formats and integrates well with pandas DataFrames

Cons of ydata-profiling

  • Focused solely on data profiling, lacking synthetic data generation features
  • May require more computational resources for large datasets
  • Limited customization options compared to ydata-synthetic

Code Comparison

ydata-profiling:

from ydata_profiling import ProfileReport

profile = ProfileReport(df, title="Profiling Report")
profile.to_file("report.html")

ydata-synthetic:

from ydata_synthetic.synthesizers.regular import RegularSynthesizer

# 'fast' is a lightweight preset available in recent ydata-synthetic releases
synthesizer = RegularSynthesizer(modelname='fast')
synthesizer.fit(data=data, num_cols=num_cols, cat_cols=cat_cols)
synthetic_data = synthesizer.sample(1000)

ydata-profiling excels in data analysis and visualization, providing detailed insights into dataset characteristics. It generates comprehensive reports but lacks synthetic data generation capabilities. On the other hand, ydata-synthetic focuses on creating synthetic datasets, offering more flexibility in data generation but with fewer built-in profiling features. The choice between the two depends on whether the primary need is data analysis (ydata-profiling) or synthetic data generation (ydata-synthetic).
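
The two libraries can also be used together: profiling a synthetic sample against the original data is a convenient way to inspect how well the statistics were preserved. The sketch below assumes a real DataFrame df and a synthetic DataFrame synthetic_data, and relies on the compare() feature available in recent ydata-profiling versions.

from ydata_profiling import ProfileReport

# Profile both datasets and build a side-by-side comparison report
real_report = ProfileReport(df, title="Real data")
synthetic_report = ProfileReport(synthetic_data, title="Synthetic data")

comparison = real_report.compare(synthetic_report)
comparison.to_file("real_vs_synthetic.html")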

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • Comprehensive toolkit for sequence modeling tasks
  • Supports a wide range of architectures and pre-trained models
  • Highly customizable and extensible for research purposes

Cons of fairseq

  • Steeper learning curve due to its complexity
  • Primarily focused on natural language processing tasks
  • Requires more computational resources for training and inference

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel

model = TransformerModel.from_pretrained('/path/to/model')
tokens = model.encode('Hello world')
output = model.decode(tokens)

ydata-synthetic:

from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
from ydata_synthetic.synthesizers.regular import RegularSynthesizer

synthesizer = RegularSynthesizer(modelname='ctgan', model_parameters=ModelParameters())
synthesizer.fit(data=real_data, train_arguments=TrainParameters(epochs=300),
                num_cols=num_cols, cat_cols=cat_cols)
synthetic_data = synthesizer.sample(1000)

Key Differences

  • fairseq is primarily designed for NLP tasks, while ydata-synthetic focuses on generating synthetic data
  • fairseq offers more flexibility and customization options, but ydata-synthetic is more user-friendly for data generation tasks
  • fairseq requires more setup and configuration, whereas ydata-synthetic provides a simpler API for quick implementation

ludwig

Low-code framework for building custom LLMs, neural networks, and other AI models

Pros of ludwig

  • More versatile, supporting a wide range of machine learning tasks beyond synthetic data generation
  • Offers a user-friendly declarative machine learning tool that requires minimal coding
  • Has a larger community and more frequent updates

Cons of ludwig

  • Steeper learning curve due to its broader scope and capabilities
  • May be overkill for projects focused solely on synthetic data generation
  • Requires more computational resources for complex models

Code comparison

ludwig:

from ludwig.api import LudwigModel

model = LudwigModel(config)
results = model.train(dataset=train_data)
predictions = model.predict(dataset=test_data)

ydata-synthetic:

from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
from ydata_synthetic.synthesizers.regular import RegularSynthesizer

synthesizer = RegularSynthesizer(modelname='ctgan', model_parameters=ModelParameters())
synthesizer.fit(data=data, train_arguments=TrainParameters(epochs=300), num_cols=num_cols, cat_cols=cat_cols)
synthetic_data = synthesizer.sample(1000)

Summary

ludwig is a more comprehensive machine learning framework that can handle various tasks, including synthetic data generation. It offers greater flexibility but may be more complex for users solely interested in generating synthetic data. ydata-synthetic, on the other hand, is specifically designed for synthetic data generation, making it more straightforward for this particular use case but less versatile overall.

CNTK

Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

Pros of CNTK

  • More mature and established deep learning framework with extensive documentation
  • Supports a wider range of neural network architectures and algorithms
  • Offers high-performance distributed training across multiple GPUs and machines

Cons of CNTK

  • Less active development and community support compared to newer frameworks
  • Steeper learning curve for beginners in deep learning
  • Limited focus on synthetic data generation compared to ydata-synthetic

Code Comparison

CNTK example (creating a simple neural network):

import cntk as C

input = C.input_variable(2)
output = C.layers.Dense(1)(input)
model = C.sigmoid(output)

ydata-synthetic example (generating synthetic data):

from ydata_synthetic.synthesizers.regular import RegularSynthesizer

# 'fast' is a lightweight preset for quick experimentation
synthesizer = RegularSynthesizer(modelname='fast')
synthesizer.fit(data=data, num_cols=num_cols, cat_cols=cat_cols)
synthetic_data = synthesizer.sample(1000)

While CNTK focuses on building and training neural networks, ydata-synthetic specializes in generating synthetic data for various use cases. CNTK provides a comprehensive toolkit for deep learning tasks, whereas ydata-synthetic offers a more targeted solution for creating artificial datasets to augment or replace real data in machine learning projects.

TensorFlow

An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Extensive ecosystem with a wide range of tools and libraries
  • Strong support for production deployment and scalability
  • Comprehensive documentation and large community support

Cons of TensorFlow

  • Steeper learning curve for beginners
  • Can be more complex and verbose for simple tasks
  • Slower development cycle compared to more lightweight frameworks

Code Comparison

TensorFlow:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

ydata-synthetic:

from ydata_synthetic.synthesizers import ModelParameters, TrainParameters
from ydata_synthetic.synthesizers.regular import RegularSynthesizer

synthesizer = RegularSynthesizer(modelname='ctgan', model_parameters=ModelParameters())
synthesizer.fit(data=data, train_arguments=TrainParameters(epochs=300), num_cols=num_cols, cat_cols=cat_cols)
synthetic_data = synthesizer.sample(1000)

Summary

TensorFlow is a comprehensive deep learning framework with a vast ecosystem, while ydata-synthetic is a specialized library for generating synthetic data. TensorFlow offers more flexibility and scalability for general machine learning tasks, but ydata-synthetic provides a simpler, more focused approach for synthetic data generation. The choice between the two depends on the specific requirements of your project and your familiarity with each framework.


README

Join us on Discord

YData Synthetic

YData-Synthetic is an open-source package developed in 2020 with the primary goal of educating users about generative models for synthetic data generation. Designed as a collection of models, it was intended for exploratory studies and educational purposes. However, it was not optimized for the quality, performance, and scalability needs typically required by organizations.

Note (Update): Even though the journey was fun and we have learned a lot from the community, it is now time to upgrade ydata-synthetic. Heading towards the future of synthetic data generation, we recommend that users transition to ydata-sdk, which provides a superior experience with enhanced performance, precision, and ease of use, making it the preferred tool for synthetic data generation and a perfect introduction to Generative AI.

Synthetic data

What is synthetic data?

Synthetic data is artificially generated data that is not collected from real-world events. It replicates the statistical properties of real data without containing any identifiable information, ensuring individuals' privacy.

Why Synthetic Data?

Synthetic data can be used for many applications:

  • Privacy compliance for data-sharing and Machine Learning development
  • Remove bias
  • Balance datasets (see the balancing sketch after this list)
  • Augment datasets
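
For example, balancing a dataset usually means topping up the under-represented class with synthetic rows. The sketch below is a minimal illustration using plain pandas; it assumes a real DataFrame df with a binary target column named 'label' (a placeholder for your own schema) and a synthetic DataFrame produced by a trained synthesizer, as in the earlier examples.

import pandas as pd

# df: real data with a 'label' column; synthetic_data: output of synthesizer.sample(...)
counts = df['label'].value_counts()
minority_label = counts.idxmin()
n_missing = counts.max() - counts.min()

# Keep only synthetic rows of the minority class and append them to the real data
synthetic_minority = synthetic_data[synthetic_data['label'] == minority_label].head(n_missing)
balanced_df = pd.concat([df, synthetic_minority], ignore_index=True)

print(balanced_df['label'].value_counts())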

Looking for an end-to-end solution to Synthetic Data Generation?
YData Fabric enables the generation of high-quality datasets within a full UI experience, from data preparation to synthetic data generation and evaluation.
Check out the Community Version.

ydata-synthetic to ydata-sdk

With the update of ydata-synthetic to ydata-sdk, users now have access to a single API that automatically selects and optimizes the best generative model for their data. This streamlined approach eliminates the need to choose between models manually, as the API identifies the optimal model based on the specific dataset and use case.

Instead of having to manually select from models such as:

  • GAN
  • CGAN (Conditional GAN)
  • WGAN (Wasserstein GAN)
  • WGAN-GP (Wasserstein GAN with Gradient Penalty)
  • DRAGAN (Deep Regret Analytic GAN)
  • Cramer GAN (Cramer Distance Solution to Biased Wasserstein Gradients)
  • CWGAN-GP (Conditional Wasserstein GAN with Gradient Penalty)
  • CTGAN (Conditional Tabular GAN)
  • TimeGAN (specifically for time-series data)
  • DoppelGANger (specifically for time-series data)

The new API handles model selection automatically, optimizing for the best performance in fidelity, utility, and privacy. This significantly simplifies the synthetic data generation process, ensuring that users get the highest-quality output without manual intervention or tedious hyperparameter tuning.
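
As an illustration of this single-API flow, the sketch below follows the RegularSynthesizer interface described in the ydata-sdk documentation. The import path, the license-key environment variable, and the file name are assumptions that may differ for your account and SDK version.

import os
import pandas as pd

from ydata.sdk.synthesizers import RegularSynthesizer

# ydata-sdk authenticates against YData Fabric; the variable name is an assumption,
# check the ydata-sdk documentation for the exact setup
os.environ["YDATA_LICENSE_KEY"] = "<your-license-key>"

df = pd.read_csv("your_data.csv")

# No manual model choice: the SDK selects and optimizes the generative model
synth = RegularSynthesizer()
synth.fit(df)
synthetic_sample = synth.sample(n_samples=1000)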

Are you ready to learn more about synthetic data and the best practices for synthetic data generation? For more materials on synthetic data generation with Python, see the documentation.

Quickstart

Binary installers for the latest released version are available at the Python Package Index (PyPI).

pip install ydata-sdk

The UI guide for synthetic data generation

YData Fabric offers a UI to guide you through the steps and inputs needed to generate structured data. You can experiment with YData Fabric today by registering for the Community version.

Examples

Here you can find usage examples of the package and models to synthesize tabular data.

Datasets for you to experiment with

Here are some example datasets for you to try with the synthesizers:

Tabular datasets

Sequential datasets

Project Resources

Find below useful literature on how to generate synthetic data and the available generative models:

Tabular data

Sequential data

Support

For support in using this library, please join our Discord server. Our Discord community is very friendly and great about quickly answering questions about the use and development of the library. Click here to join our Discord community!

FAQs

Have a question? Check out the Frequently Asked Questions about ydata-synthetic. If you feel something is missing, feel free to book a beary informal chat with us.

License

MIT License