lucid
A collection of infrastructure and tools for research in neural network interpretability.
Top Related Projects
- TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
- TensorBoard: TensorFlow's Visualization Toolkit
- Captum: Model interpretability and understanding for PyTorch
- Visdom: A flexible tool for creating, organizing, and sharing visualizations of live, rich data. Supports Torch and Numpy.
Quick Overview
Lucid is an open-source library for visualizing and interpreting neural networks, particularly focusing on feature visualization and attribution techniques. It provides tools to explore and understand the inner workings of deep learning models, primarily those built with TensorFlow.
Pros
- Offers powerful visualization techniques for neural network interpretability
- Supports a wide range of feature visualization and attribution methods
- Integrates well with TensorFlow models and ships a model zoo of pre-trained networks
- Provides high-quality, publication-ready visualizations
Cons
- Limited to TensorFlow 1.x, restricting its use with TensorFlow 2 and other deep learning frameworks
- Steep learning curve for users new to neural network interpretability
- Documentation can be sparse or outdated in some areas
- May require significant computational resources for complex visualizations
Code Examples
- Visualizing a channel in a pre-trained InceptionV1 model:
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render
from lucid.misc.io import show
model = models.InceptionV1()
model.load_graphdef()  # download and load the frozen TensorFlow graph
obj = objectives.channel("mixed4d", 139)  # maximize channel 139 of layer mixed4d
images = render.render_vis(model, obj)
show(images)
- Visualizing a channel with a custom image parameterization and transforms:
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
model = models.InceptionV1()
model.load_graphdef()
obj = objectives.channel("mixed4d", 139)
param_f = lambda: param.image(224, fft=True, decorrelate=True)  # decorrelated FFT image parameterization
transforms = [transform.pad(16), transform.jitter(16), transform.random_scale([0.9, 1.0, 1.1])]
images = render.render_vis(model, obj, param_f=param_f, transforms=transforms, thresholds=(128, 256, 512))
- Visualizing a class by maximizing its pre-softmax logit:
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render
from lucid.misc.io import show
model = models.InceptionV1()
model.load_graphdef()
# Maximize one entry of the pre-softmax logits. Check the model's label list for
# the class you want; InceptionV1's label ordering differs from the standard
# 1000-class ImageNet indexing.
obj = objectives.channel("softmax2_pre_activation", 282)
images = render.render_vis(model, obj)
show(images)
Getting Started
To get started with Lucid, follow these steps:
- Install Lucid using pip (note that Lucid requires TensorFlow 1.x, which is not installed automatically):
pip install lucid
- Import the necessary modules:
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render
from lucid.misc.io import show
- Load a pre-trained model and visualize a channel:
model = models.InceptionV1()
model.load_graphdef()
obj = objectives.channel("mixed4d", 139)
images = render.render_vis(model, obj)
show(images)
For more advanced usage and examples, refer to the Lucid documentation and tutorials on the project's GitHub repository.
Competitor Comparisons
Pros of Vision Transformer
- Focuses on state-of-the-art vision transformer models
- Provides implementation of ViT architecture
- Includes pre-trained models and fine-tuning scripts
Cons of Vision Transformer
- Limited to vision transformer models
- Less emphasis on interpretability and visualization
- Narrower scope compared to Lucid's broader neural network tools
Code Comparison
Vision Transformer:
import torch.nn as nn

class Transformer(nn.Module):
    def __init__(self, num_layers, dim, num_heads, mlp_ratio=4., qkv_bias=False, drop_rate=0.):
        super().__init__()
        # TransformerBlock is assumed to be defined elsewhere in the codebase
        self.layers = nn.ModuleList([
            TransformerBlock(dim, num_heads, mlp_ratio, qkv_bias, drop_rate)
            for _ in range(num_layers)])
Lucid:
def render_vis(model, objective_f, param_f=None, optimizer=None, transforms=None,
               thresholds=(512,), print_objectives=None, verbose=False):
    # ... (implementation details)
    return images  # list of rendered images as numpy arrays
Vision Transformer focuses on implementing the transformer architecture for vision tasks, while Lucid provides tools for visualizing and interpreting neural networks across various architectures. Vision Transformer is more specialized, while Lucid offers a broader set of utilities for neural network exploration and understanding.
TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
Pros of Graphics
- More comprehensive 3D graphics capabilities
- Broader scope, covering rendering, geometry processing, and more
- Active development with regular updates
Cons of Graphics
- Steeper learning curve due to broader feature set
- Less focused on neural network visualization
- Potentially more complex setup for simple tasks
Code Comparison
Lucid (visualizing neural network activations):
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render
model = models.InceptionV1()
model.load_graphdef()
render.render_vis(model, "mixed4a_3x3_pre_relu:0")  # visualize channel 0 of a mixed4a conv layer
Graphics (3D mesh rendering):
import tensorflow_graphics.geometry.representation as tfg_geometry
import tensorflow_graphics.rendering.camera as tfg_camera
vertices = tfg_geometry.mesh.sample_points(mesh)
pixels = tfg_camera.perspective.project(vertices, focal, principal_point)
Summary
While Lucid focuses on neural network visualization, Graphics offers a broader range of 3D graphics capabilities. Lucid may be more suitable for those specifically interested in understanding and visualizing neural networks, while Graphics provides a more comprehensive toolkit for 3D graphics tasks in TensorFlow.
TensorFlow's Visualization Toolkit
Pros of TensorBoard
- Comprehensive visualization tool for TensorFlow models
- Integrated with TensorFlow ecosystem, making it easy to use with existing projects
- Supports a wide range of visualizations, including scalar summaries, images, and graphs
Cons of TensorBoard
- Can be complex to set up and configure for advanced use cases
- Limited to TensorFlow-specific visualizations and may not be suitable for other deep learning frameworks
- May have performance issues with very large datasets or complex models
Code Comparison
TensorBoard:
import tensorflow as tf
from tensorboard import program
log_dir = "./logs"  # directory containing TensorFlow event files
tb = program.TensorBoard()
tb.configure(argv=[None, '--logdir', log_dir])
url = tb.launch()
Lucid:
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render
model = models.InceptionV1()
model.load_graphdef()
render.render_vis(model, "mixed4a_3x3_pre_relu:0")
Key Differences
- TensorBoard focuses on general-purpose visualization for TensorFlow models
- Lucid specializes in neural network interpretability and feature visualization
- TensorBoard is more suitable for monitoring training progress and model performance (see the sketch below)
- Lucid excels at generating visual representations of neural network activations
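For instance, monitoring training with TensorBoard amounts to writing event files that TensorBoard then plots. A minimal TF1-style sketch (the log directory, tag name, and value are illustrative placeholders):
import tensorflow as tf
# Write a scalar summary that TensorBoard can plot (TensorFlow 1.x API).
writer = tf.summary.FileWriter("./logs")
summary = tf.Summary(value=[tf.Summary.Value(tag="loss", simple_value=0.42)])
writer.add_summary(summary, global_step=1)
writer.flush()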
Model interpretability and understanding for PyTorch
Pros of Captum
- More comprehensive and feature-rich, offering a wider range of interpretability techniques
- Better integration with PyTorch ecosystem and easier to use with existing PyTorch models
- More active development and community support
Cons of Captum
- Steeper learning curve due to more complex API and extensive features
- Potentially slower execution for some techniques compared to Lucid's optimized TensorFlow implementation
Code Comparison
Captum example:
from captum.attr import IntegratedGradients
# model, inputs, and target_class are assumed to be defined elsewhere
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs, target=target_class)
Lucid example:
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render
model = models.InceptionV1()
model.load_graphdef()
render.render_vis(model, "mixed4a_3x3_pre_relu:14")  # visualize channel 14
Summary
Captum offers a more comprehensive suite of interpretability tools for PyTorch users, while Lucid provides a simpler, more visualization-focused approach for TensorFlow models. Captum's extensive features come with a steeper learning curve, but it benefits from better PyTorch integration and active community support. Lucid excels in creating visually appealing interpretations of neural networks, particularly for computer vision tasks.
A flexible tool for creating, organizing, and sharing visualizations of live, rich data. Supports Torch and Numpy.
Pros of Visdom
- More flexible visualization options, supporting various plot types and layouts
- Real-time, interactive visualizations with browser-based interface
- Language-agnostic, can be used with different ML frameworks
Cons of Visdom
- Less focused on interpretability of neural networks
- Requires running a separate server process
- May have a steeper learning curve for beginners
Code Comparison
Visdom:
import numpy as np
import visdom
vis = visdom.Visdom()  # assumes a visdom server is running (python -m visdom.server)
vis.line(X=np.array([1, 2, 3]), Y=np.array([1, 2, 3]))
Lucid:
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render
model = models.InceptionV1()
model.load_graphdef()
render.render_vis(model, "mixed4a_3x3_pre_relu:5")  # visualize channel 5
Summary
Visdom is a more general-purpose visualization tool for machine learning, offering a wide range of plot types and real-time interactivity. It's suitable for various ML frameworks and languages. Lucid, on the other hand, is specifically designed for visualizing and interpreting neural networks, with a focus on TensorFlow models. Lucid provides more specialized tools for understanding network behavior, while Visdom offers greater flexibility in data visualization across different domains.
README
Lucid
Lucid is a collection of infrastructure and tools for research in neural network interpretability.
We're not currently supporting tensorflow 2!
If you'd like to use lucid in Colab, which defaults to TensorFlow 2, add this magic to a cell before you import tensorflow:
%tensorflow_version 1.x
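For example, the first cell of a Colab notebook might look like this (a minimal sketch; the version print is only a sanity check):
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)  # should report a 1.x release
import lucid.modelzoo.vision_models as models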
Lucid is research code, not production code. We provide no guarantee it will work for your use case. Lucid is maintained by volunteers who are unable to provide significant technical support.
- Notebooks -- Get started without any setup!
- Reading -- Learn more about visualizing neural nets.
- Community -- Want to get involved? Please reach out!
- Additional Information -- Licensing, code style, etc.
- Start Doing Research! -- Want to get involved? We're trying to research openly!
- Visualize your own model -- How to import your own model for visualization (a rough sketch follows below)
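As a rough sketch of what importing your own model involves (assuming a frozen TensorFlow 1.x GraphDef; the class name, file path, and node names below are placeholders, not part of Lucid's model zoo):
from lucid.modelzoo.vision_base import Model
import lucid.optvis.render as render

class MyModel(Model):
    # All values below are placeholders for your own frozen graph.
    model_path = 'my_frozen_graph.pb'
    image_shape = [224, 224, 3]     # input height, width, channels
    image_value_range = (0, 1)      # value range the graph expects at its input
    input_name = 'input'            # name of the input placeholder

model = MyModel()
model.load_graphdef()
render.render_vis(model, "my_layer_name:0")  # visualize channel 0 of one of your layers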
Notebooks
Start visualizing neural networks with no setup. The following notebooks run right from your browser, thanks to Colaboratory. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.
You can run the notebooks on your local machine, too. Clone the repository and find them in the notebooks subfolder. You will need to run a local instance of the Jupyter notebook environment to execute them.
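For example (a minimal sketch, assuming git, Jupyter, and a TensorFlow 1.x environment with lucid installed):
git clone https://github.com/tensorflow/lucid.git
cd lucid/notebooks
jupyter notebook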
Tutorial Notebooks
Feature Visualization Notebooks
Notebooks corresponding to the Feature Visualization article
Building Blocks Notebooks
Notebooks corresponding to the Building Blocks of Interpretability article
Differentiable Image Parameterizations Notebooks
Notebooks corresponding to the Differentiable Image Parameterizations article
Activation Atlas Notebooks
Notebooks corresponding to the Activation Atlas article
Miscellaneous Notebooks
Recommended Reading
- Feature Visualization
- The Building Blocks of Interpretability
- Using Artificial Intelligence to Augment Human Intelligence
- Visualizing Representations: Deep Learning and Human Beings
- Differentiable Image Parameterizations
- Activation Atlas
Related Talks
- Lessons from a year of Distill ML Research (Shan Carter, OpenVisConf)
- Machine Learning for Visualization (Ian Johnson, OpenVisConf)
Community
We're in #proj-lucid on the Distill slack (join link).
We'd love to see more people doing research in this space!
Additional Information
License and Disclaimer
You may use this software under the Apache 2.0 License. See LICENSE.
This project is research code. It is not an official Google product.
Special consideration for TensorFlow dependency
Lucid requires tensorflow, but does not explicitly depend on it in setup.py. Due to the way tensorflow is packaged and some deficiencies in how pip handles dependencies, specifying either the GPU or the non-GPU version of tensorflow would conflict with the version of tensorflow you may already have installed.
If you don't want to add your own dependency on tensorflow, you can specify which tensorflow version you want lucid to install by selecting from extras_require like so: lucid[tf] or lucid[tf_gpu].
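For example (the extras simply add the corresponding tensorflow package to the install):
pip install lucid[tf]      # lucid plus the CPU build of tensorflow
pip install lucid[tf_gpu]  # lucid plus the GPU build (tensorflow-gpu)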
In practice, we recommend using the version of tensorflow you already have installed.