Lightning-Universe / lightning-bolts

Toolbox of models, callbacks, and datasets for AI/ML researchers.


Top Related Projects

  • pytorch-lightning: Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
  • transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
  • pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • tensorflow: An Open Source Machine Learning Framework for Everyone
  • scikit-learn: machine learning in Python
  • fastai: The fastai deep learning library

Quick Overview

Lightning Bolts is a collection of state-of-the-art deep learning models, components, and utilities for the PyTorch Lightning ecosystem. It provides researchers and practitioners with ready-to-use implementations of popular models, datasets, and training recipes to accelerate deep learning research and development.

Pros

  • Extensive collection of pre-implemented models and components
  • Seamless integration with PyTorch Lightning for efficient and scalable training
  • Well-documented and easy to use
  • Regularly updated with new models and features

Cons

  • May have a steeper learning curve for those unfamiliar with PyTorch Lightning
  • Some implementations might not be as optimized as specialized libraries
  • Limited to PyTorch ecosystem
  • Dependency on PyTorch Lightning may introduce additional complexity

Code Examples

  1. Loading a pre-trained SimCLR model:
from pl_bolts.models.self_supervised import SimCLR

# Load a pre-trained SimCLR model
model = SimCLR.load_from_checkpoint("path/to/checkpoint.ckpt")
  2. Using a built-in dataset:
from pl_bolts.datamodules import CIFAR10DataModule

# Create a CIFAR10 datamodule
dm = CIFAR10DataModule(data_dir="path/to/data", batch_size=32)
dm.prepare_data()  # download the data if needed
dm.setup()
  3. Implementing a custom self-supervised learning task:
from pl_bolts.models.self_supervised import SimSiam
from pl_bolts.models.self_supervised.simclr.transforms import SimCLRTrainDataTransform
from pl_bolts.datamodules import CIFAR10DataModule

# In pl_bolts the augmentation pipeline attaches to the datamodule rather
# than the model (constructor arguments vary across pl_bolts versions)
dm = CIFAR10DataModule(data_dir="path/to/data", batch_size=32)
dm.train_transforms = SimCLRTrainDataTransform(input_height=32)

# Create a SimSiam model matching the datamodule
model = SimSiam(gpus=1, num_samples=dm.num_samples, batch_size=32, dataset="cifar10")

Getting Started

To get started with Lightning Bolts, follow these steps:

  1. Install the library:
pip install lightning-bolts
  2. Import and use components in your PyTorch Lightning code:
from pytorch_lightning import Trainer
from pl_bolts.models.self_supervised import SimCLR
from pl_bolts.models.self_supervised.simclr.transforms import SimCLRTrainDataTransform
from pl_bolts.datamodules import CIFAR10DataModule

# Create a CIFAR10 datamodule with the SimCLR augmentation pipeline
dm = CIFAR10DataModule(data_dir="path/to/data", batch_size=256)
dm.train_transforms = SimCLRTrainDataTransform(input_height=32)

# Create a SimCLR model (argument names vary slightly across pl_bolts versions)
model = SimCLR(gpus=1, num_samples=dm.num_samples, batch_size=256, dataset="cifar10")

# Train the model
trainer = Trainer(max_epochs=100, gpus=1)
trainer.fit(model, datamodule=dm)

This example sets up a SimCLR self-supervised learning model and trains it on the CIFAR10 dataset using PyTorch Lightning.

Competitor Comparisons

Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.

Pros of pytorch-lightning

  • More comprehensive and widely adopted framework for deep learning
  • Extensive documentation and community support
  • Seamless integration with PyTorch ecosystem

Cons of pytorch-lightning

  • Steeper learning curve for beginners
  • Larger codebase, potentially overwhelming for simple projects

Code Comparison

lightning-bolts:

import pytorch_lightning as pl
from pl_bolts.models import VAE

model = VAE(input_height=28)
trainer = pl.Trainer(max_epochs=10)
trainer.fit(model, train_loader)  # train_loader: any PyTorch DataLoader

pytorch-lightning:

import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # compute_loss defined by the user
        return loss

model = MyModel()
trainer = pl.Trainer(max_epochs=10)
trainer.fit(model, train_loader)  # train_loader: any PyTorch DataLoader

Summary

pytorch-lightning is a more comprehensive framework with broader adoption and extensive documentation. It offers seamless integration with the PyTorch ecosystem but has a steeper learning curve. lightning-bolts, on the other hand, provides pre-built models and components that can be easily integrated into pytorch-lightning projects, making it more accessible for specific use cases but potentially less flexible for custom implementations.
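To make that integration concrete, here is a minimal sketch of a bolts component dropped into a plain pytorch-lightning run (PrintTableMetricsCallback ships with bolts; MyModel and train_loader stand in for the user's own LightningModule and DataLoader):

import pytorch_lightning as pl
from pl_bolts.callbacks import PrintTableMetricsCallback

# A bolts callback plugs straight into a vanilla Lightning Trainer;
# MyModel and train_loader are placeholders for the user's module and data
trainer = pl.Trainer(max_epochs=10, callbacks=[PrintTableMetricsCallback()])
trainer.fit(MyModel(), train_loader)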

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive collection of pre-trained models for various NLP tasks
  • Comprehensive documentation and tutorials
  • Large and active community support

Cons of transformers

  • Steeper learning curve for beginners
  • Larger library size and potentially slower import times

Code comparison

transformers:

from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

lightning-bolts:

import torch
from pl_bolts.models import VAE

model = VAE(input_height=28)
model.train()
# forward() returns the reconstruction, so use step() to get a scalar loss;
# the default resnet18 encoder expects 3-channel input
x = torch.rand(5, 3, 28, 28)
loss, logs = model.step((x, None), batch_idx=0)
loss.backward()

Key differences

  • transformers focuses on NLP tasks, while lightning-bolts covers a broader range of ML models
  • lightning-bolts integrates seamlessly with PyTorch Lightning, offering a more structured approach to training
  • transformers provides more out-of-the-box pre-trained models, while lightning-bolts emphasizes flexibility and customization

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of pytorch

  • Comprehensive deep learning framework with extensive functionality
  • Large, active community with frequent updates and contributions
  • Supports a wide range of hardware accelerators and platforms

Cons of pytorch

  • Steeper learning curve for beginners
  • More complex setup and configuration for specific use cases
  • Larger codebase and dependencies, potentially slower development for simple projects

Code comparison

pytorch:

import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.add(x, y)

lightning-bolts:

import pytorch_lightning as pl
from pl_bolts.models.regression import LinearRegression

model = LinearRegression(input_dim=1, output_dim=1)
trainer = pl.Trainer(max_epochs=100)
trainer.fit(model, train_dataloader)  # train_dataloader: any PyTorch DataLoader

Summary

pytorch is a comprehensive deep learning framework offering extensive functionality and wide-ranging support. It has a large, active community but comes with a steeper learning curve and more complex setup.

lightning-bolts, built on top of PyTorch Lightning, provides pre-built models and components for faster development. It offers a more streamlined experience for common tasks but may have limitations for highly customized projects.

The code comparison shows pytorch's low-level tensor operations versus lightning-bolts' high-level model implementation, illustrating the difference in abstraction levels between the two libraries.


An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Larger ecosystem with more tools, libraries, and community support
  • Better performance for large-scale deployments and distributed computing
  • More comprehensive documentation and learning resources

Cons of TensorFlow

  • Steeper learning curve, especially for beginners
  • Less flexibility and more verbose code compared to PyTorch-based frameworks
  • Slower development cycle and more complex debugging process

Code Comparison

TensorFlow:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

Lightning-Bolts:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class SimpleModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.layer(x.view(x.size(0), -1))
        loss = F.cross_entropy(y_hat, y)
        return loss

Lightning-Bolts, built on PyTorch Lightning, offers a more concise and modular approach to deep learning, while TensorFlow provides a more comprehensive ecosystem with better performance for large-scale projects. The choice between the two depends on specific project requirements and developer preferences.

scikit-learn: machine learning in Python

Pros of scikit-learn

  • Comprehensive collection of machine learning algorithms and tools
  • Well-established, mature library with extensive documentation
  • Seamless integration with NumPy and SciPy ecosystems

Cons of scikit-learn

  • Limited support for deep learning and neural networks
  • Not optimized for GPU acceleration or distributed computing
  • Less focus on cutting-edge research implementations

Code Comparison

scikit-learn:

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=4)
clf = RandomForestClassifier()
clf.fit(X, y)

lightning-bolts:

from pl_bolts.models.vision import SemSegment
from pl_bolts.datamodules import KittiDataModule
from pytorch_lightning import Trainer

# SemSegment wraps a UNet as a LightningModule; UNet itself is a plain nn.Module
model = SemSegment(num_classes=19)
dm = KittiDataModule(data_dir="path/to/kitti")
trainer = Trainer(gpus=1)
trainer.fit(model, datamodule=dm)

lightning-bolts focuses on providing implementations of state-of-the-art deep learning models and techniques, particularly for computer vision and NLP tasks. It leverages PyTorch Lightning for efficient training and deployment. In contrast, scikit-learn offers a broader range of traditional machine learning algorithms and tools, with a focus on simplicity and ease of use for general-purpose machine learning tasks.


The fastai deep learning library

Pros of fastai

  • Simpler API with high-level abstractions for common deep learning tasks
  • Integrated curriculum and learning resources for beginners
  • Opinionated defaults that work well out-of-the-box

Cons of fastai

  • Less flexibility for customizing low-level components
  • Smaller ecosystem compared to PyTorch Lightning
  • Skills transfer less directly to other frameworks because of its bespoke abstractions

Code Comparison

fastai:

from fastai.vision.all import *

# dls: a fastai DataLoaders object, e.g. ImageDataLoaders.from_folder(path)
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(4)

lightning-bolts:

import pytorch_lightning as pl
from pl_bolts.models import VAE

# VAE is one of bolts' pre-built LightningModules
model = VAE(input_height=32)
trainer = pl.Trainer(max_epochs=4)
trainer.fit(model, train_dataloader, val_dataloader)  # standard PyTorch DataLoaders

Key Differences

  • fastai focuses on rapid prototyping and ease of use
  • lightning-bolts provides more modular components for research
  • fastai has a more opinionated approach to model training
  • lightning-bolts offers greater flexibility in experiment design

Use Cases

  • fastai: Ideal for beginners and quick prototyping
  • lightning-bolts: Better suited for researchers and advanced practitioners


README

Deep Learning components for extending PyTorch Lightning


Installation • Latest Docs • Stable Docs • About • Community • Website • License



Getting Started

Pip / Conda

pip install lightning-bolts
Other installations

Install bleeding-edge (no guarantees)

pip install https://github.com/Lightning-Universe/lightning-bolts/archive/refs/heads/master.zip

To install all optional dependencies

pip install "lightning-bolts[extra]"

What is Bolts?

The Bolts package provides a variety of components that extend PyTorch Lightning, such as callbacks and datasets, for applied research and production.
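The bundled datamodules illustrate the datasets half: they drop into any Trainer run. A minimal sketch (assuming you already have a LightningModule named model):

from pytorch_lightning import Trainer
from pl_bolts.datamodules import MNISTDataModule

# MNISTDataModule handles download, train/val/test splits, and dataloaders
dm = MNISTDataModule(data_dir="path/to/data", batch_size=64)
trainer = Trainer(max_epochs=1)
trainer.fit(model, datamodule=dm)  # model: your own LightningModule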

Example 1: Accelerate Lightning Training with the Torch ORT Callback

Torch ORT converts your model into an optimized ONNX graph, speeding up training & inference when using NVIDIA or AMD GPUs. See the documentation for more details.

from pytorch_lightning import LightningModule, Trainer
import torchvision.models as models
from pl_bolts.callbacks import ORTCallback


class VisionModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.model = models.vgg19_bn(pretrained=True)

    ...


model = VisionModel()
trainer = Trainer(gpus=1, callbacks=ORTCallback())
trainer.fit(model)

Example 2: Introduce Sparsity with the SparseMLCallback to Accelerate Inference

We can introduce sparsity during fine-tuning with SparseML, which ultimately allows us to leverage the DeepSparse engine to see performance improvements at inference time.

from pytorch_lightning import LightningModule, Trainer
import torchvision.models as models
from pl_bolts.callbacks import SparseMLCallback


class VisionModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.model = models.vgg19_bn(pretrained=True)

    ...


model = VisionModel()
trainer = Trainer(gpus=1, callbacks=SparseMLCallback(recipe_path="recipe.yaml"))
trainer.fit(model)

Are specific research implementations supported?

We'd like to encourage users to contribute general components that will help a broad range of problems; however, components that help specific domains will also be welcomed!

For example, a callback to help train SSL models would be a great contribution; however, the next greatest SSL model from your latest paper would be a good contribution to Lightning Flash.
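For a sense of what such a component looks like, bolts already ships SSLOnlineEvaluator, a callback that attaches a linear probe while a self-supervised model pretrains. A minimal sketch (constructor arguments vary across pl_bolts versions):

from pytorch_lightning import Trainer
from pl_bolts.callbacks.ssl_online import SSLOnlineEvaluator

# Reports linear-probe classification accuracy during SSL pretraining;
# z_dim must match the encoder's representation size
online_eval = SSLOnlineEvaluator(dataset="cifar10", z_dim=2048, num_classes=10)
trainer = Trainer(gpus=1, callbacks=[online_eval])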

Use Lightning Flash to train, predict and serve state-of-the-art models for applied research. We suggest looking at our VISSL Flash integration for SSL-based tasks.

Contribute!

Bolts is supported by the PyTorch Lightning team and the PyTorch Lightning community!

Join our Slack and/or read our CONTRIBUTING guidelines to get help becoming a contributor!


License

Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is Patent Pending.