
openai/CLIP

CLIP (Contrastive Language-Image Pre-training): predict the most relevant text snippet given an image.


Top Related Projects

  • BLIP: PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
  • open_clip: An open source implementation of CLIP
  • Transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX

Quick Overview

CLIP (Contrastive Language-Image Pre-training) is a neural network trained on a variety of image-text pairs. It can be used to perform zero-shot classification of images, enabling it to recognize objects and concepts in images without specific training for those categories. CLIP was developed by OpenAI and represents a significant advancement in computer vision and natural language processing integration.

Pros

  • Versatile zero-shot learning capabilities for image classification
  • Robust performance across a wide range of visual concepts
  • Can be fine-tuned for specific tasks with minimal additional training
  • Enables novel applications combining visual and textual information

Cons

  • Computationally intensive, requiring significant resources for training and inference
  • May exhibit biases present in its training data
  • Performance can be inconsistent across different domains or unusual image types
  • Limited by the scope of its training data and may struggle with highly specialized or technical concepts

Code Examples

  1. Loading CLIP and performing zero-shot classification:
import torch
from PIL import Image
import clip

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Prepare the image and text
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

# Perform classification
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(f"Label probs: {probs}")
  2. Using CLIP for image-text similarity:
import torch
import clip
from PIL import Image

# Load CLIP model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Prepare image and text
image = preprocess(Image.open("sunset.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a beautiful sunset", "a cityscape"]).to(device)

# Calculate similarity
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features before computing the cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(f"Similarity scores: {similarity}")
  3. Fine-tuning CLIP for a custom task:
import torch
import clip
from torch import nn, optim

# Load pre-trained CLIP model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Task-specific placeholders (replace with values for your dataset)
num_classes = 10   # number of target categories
num_epochs = 5
# dataloader: a torch.utils.data.DataLoader yielding (image, label) batches,
# where the images have already been preprocessed with CLIP's transform

# Create a custom classification head on top of CLIP's image encoder
class CustomClassifier(nn.Module):
    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model
        self.classifier = nn.Linear(512, num_classes)  # 512 = ViT-B/32 feature dim

    def forward(self, image):
        features = self.clip_model.encode_image(image).float()  # cast from fp16 on GPU
        return self.classifier(features)

# Initialize the custom model and optimizer
custom_model = CustomClassifier(model).to(device)
optimizer = optim.Adam(custom_model.parameters(), lr=1e-4)

# Training loop (simplified)
for epoch in range(num_epochs):
    for images, labels in dataloader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = custom_model(images)
        loss = nn.CrossEntropyLoss()(outputs, labels)
        loss.backward()
        optimizer.step()
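
Depending on how much labeled data is available, a common variant is to freeze the pre-trained CLIP weights and train only the new classification head. A minimal sketch, continuing from the example above:

# Freeze the CLIP backbone so only the new head is updated
for param in custom_model.clip_model.parameters():
    param.requires_grad = False

# Re-create the optimizer over the trainable parameters only
optimizer = optim.Adam(custom_model.classifier.parameters(), lr=1e-4)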

Getting Started

To get started with CLIP, first install the required packages:

pip install torch torchvision ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git
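
Once the packages are installed, a quick check that everything is wired up (a minimal sketch; the checkpoint for the chosen model is downloaded on first use):

import torch
import clip

# The released model names include "ViT-B/32", used throughout this page
print(clip.available_models())

# Loading a model downloads its checkpoint on first use
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)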

Competitor Comparisons


PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Pros of BLIP

  • Supports image-text generation tasks like image captioning and visual question answering
  • Offers a more versatile architecture for various vision-language tasks
  • Provides pre-trained models for immediate use in downstream applications

Cons of BLIP

  • May require more computational resources for training and inference
  • Has a smaller community and fewer third-party implementations compared to CLIP
  • Potentially more complex to integrate into existing projects due to its multi-task nature

Code Comparison

BLIP example:

from PIL import Image
import requests
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

CLIP example:

import torch
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)

An open source implementation of CLIP.

Pros of open_clip

  • Offers a wider range of pre-trained models and architectures
  • Provides more flexibility in training and fine-tuning options
  • Includes additional features like support for custom datasets and data augmentation

Cons of open_clip

  • May have slightly lower performance on some benchmarks compared to the original CLIP
  • Requires more setup and configuration for advanced use cases
  • Documentation might be less comprehensive for certain features

Code Comparison

CLIP:

import torch
from PIL import Image
import clip

model, preprocess = clip.load("ViT-B/32", device="cuda")
image = preprocess(Image.open("image.jpg")).unsqueeze(0).to("cuda")
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to("cuda")

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

open_clip:

import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_s34b_b79k')
image = preprocess(Image.open("image.jpg")).unsqueeze(0)
text = open_clip.tokenize(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of Transformers

  • Broader scope: Supports a wide range of NLP tasks and models
  • Extensive documentation and community support
  • Regular updates and new model implementations

Cons of Transformers

  • Larger library size and potentially higher resource requirements
  • Steeper learning curve due to its extensive features
  • May have slower inference times for specific tasks compared to CLIP

Code Comparison

CLIP (image-text similarity):

import torch
from PIL import Image
import clip

model, preprocess = clip.load("ViT-B/32", device="cuda")
image = preprocess(Image.open("image.jpg")).unsqueeze(0).to("cuda")
text = clip.tokenize(["a dog", "a cat"]).to("cuda")

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

Transformers (text classification):

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this movie!")
print(result)

Pros of BioGPT

  • Specialized for biomedical text processing and generation
  • Trained on a large corpus of biomedical literature
  • Supports domain-specific tasks like named entity recognition and relation extraction

Cons of BioGPT

  • Limited to biomedical domain, less versatile for general-purpose tasks
  • May require more domain expertise to use effectively
  • Smaller community and fewer resources compared to CLIP

Code Comparison

CLIP (Python):

import torch
from PIL import Image
import clip

model, preprocess = clip.load("ViT-B/32", device="cuda")
image = preprocess(Image.open("image.jpg")).unsqueeze(0).to("cuda")
text = clip.tokenize(["a dog", "a cat"]).to("cuda")

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

BioGPT (Python):

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("Alzheimer's disease is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0]))

Pros of Vision Transformer

  • Focuses specifically on image classification tasks, potentially offering better performance in this domain
  • Implements the original Vision Transformer (ViT) architecture, providing a clean and straightforward implementation
  • Includes pre-trained models and evaluation scripts for easy use and benchmarking

Cons of Vision Transformer

  • Limited to image-only tasks, lacking CLIP's multimodal capabilities
  • May require more computational resources for training compared to CLIP's efficient contrastive learning approach
  • Less versatile in terms of downstream tasks and transfer learning

Code Comparison

Vision Transformer:

class Transformer(nn.Module):
    def __init__(self, num_layers, dim, num_heads, mlp_ratio=4., qkv_bias=False, drop_rate=0.):
        super().__init__()
        self.layers = nn.ModuleList([
            TransformerBlock(dim, num_heads, mlp_ratio, qkv_bias, drop_rate)
            for _ in range(num_layers)])

CLIP:

class CLIP(nn.Module):
    def __init__(self, embed_dim: int, image_resolution: int, vision_layers: Union[Tuple[int, int, int, int], int]):
        super().__init__()
        self.visual = VisionTransformer(
            input_resolution=image_resolution,
            patch_size=16,
            width=768,
            layers=vision_layers,
            heads=12,
            output_dim=embed_dim
        )


README

CLIP

[Blog] [Paper] [Model Card] [Colab]

CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.

Approach

[Figure: CLIP approach diagram]

Usage

First, install PyTorch 1.7.1 (or later) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:

$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git

Replace cudatoolkit=11.0 above with the appropriate CUDA version on your machine or cpuonly when installing on a machine without a GPU.
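
For example, on a machine without a GPU the first command would be:

$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cpuonly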

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068 0.00299572]]

API

The CLIP module clip provides the following methods:

clip.available_models()

Returns the names of the available CLIP models.

clip.load(name, device=..., jit=False)

Returns the model and the TorchVision transform needed by the model, specified by the model name returned by clip.available_models(). It will download the model as necessary. The name argument can also be a path to a local checkpoint.

The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU. When jit is False, a non-JIT version of the model will be loaded.

clip.tokenize(text: Union[str, List[str]], context_length=77)

Returns a LongTensor containing tokenized sequences of the given text input(s). This can be used as the input to the model.
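
As a small illustration (the tensor shape is what matters here; the actual token ids depend on the tokenizer):

import clip

tokens = clip.tokenize(["a diagram", "a dog", "a cat"])
print(tokens.shape)  # torch.Size([3, 77]): one row per input, padded to context_length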


The model returned by clip.load() supports the following methods:

model.encode_image(image: Tensor)

Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.

model.encode_text(text: Tensor)

Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.

model(image: Tensor, text: Tensor)

Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
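
Because the logits are just scaled cosine similarities, they can be reproduced from the two encoders directly. A sketch, assuming image and text are the preprocessed inputs from the Usage example above:

with torch.no_grad():
    logits_per_image, logits_per_text = model(image, text)

    # Recompute the same scores from the encoded features
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    manual_logits = 100.0 * image_features @ text_features.T

print(logits_per_image)
print(manual_logits)  # should closely match logits_per_image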

More Examples

Zero-Shot Prediction

The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the CIFAR-100 dataset, and predicts the most likely labels among the 100 textual labels from the dataset.

import os
import clip
import torch
from torchvision.datasets import CIFAR100

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Download the dataset
cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)

# Prepare the inputs
image, class_id = cifar100[3637]
image_input = preprocess(image).unsqueeze(0).to(device)
text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)

# Calculate features
with torch.no_grad():
    image_features = model.encode_image(image_input)
    text_features = model.encode_text(text_inputs)

# Pick the top 5 most similar labels for the image
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
values, indices = similarity[0].topk(5)

# Print the result
print("\nTop predictions:\n")
for value, index in zip(values, indices):
    print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")

The output will look like the following (the exact numbers may be slightly different depending on the compute device):

Top predictions:

           snake: 65.31%
          turtle: 12.29%
    sweet_pepper: 3.83%
          lizard: 1.88%
       crocodile: 1.75%

Note that this example uses the encode_image() and encode_text() methods that return the encoded features of given inputs.

Linear-probe evaluation

The example below uses scikit-learn to perform logistic regression on image features.

import os
import clip
import torch

import numpy as np
from sklearn.linear_model import LogisticRegression
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR100
from tqdm import tqdm

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Load the dataset
root = os.path.expanduser("~/.cache")
train = CIFAR100(root, download=True, train=True, transform=preprocess)
test = CIFAR100(root, download=True, train=False, transform=preprocess)


def get_features(dataset):
    all_features = []
    all_labels = []
    
    with torch.no_grad():
        for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
            features = model.encode_image(images.to(device))

            all_features.append(features)
            all_labels.append(labels)

    return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()

# Calculate the image features
train_features, train_labels = get_features(train)
test_features, test_labels = get_features(test)

# Perform logistic regression
classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
classifier.fit(train_features, train_labels)

# Evaluate using the logistic regression classifier
predictions = classifier.predict(test_features)
accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
print(f"Accuracy = {accuracy:.3f}")

Note that the C value should be determined via a hyperparameter sweep using a validation split.
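
One simple way to do that sweep, as a rough sketch that reuses get_features from above and carves a validation split out of the training set (for a careful evaluation, shuffle before splitting):

# Hold out part of the training features as a validation split
n_val = 5000
val_features, val_labels = train_features[-n_val:], train_labels[-n_val:]
sub_features, sub_labels = train_features[:-n_val], train_labels[:-n_val]

best_c, best_acc = None, 0.
for c in np.logspace(-3, 3, 7):  # candidate C values on a log scale
    clf = LogisticRegression(random_state=0, C=c, max_iter=1000)
    clf.fit(sub_features, sub_labels)
    acc = np.mean(clf.predict(val_features) == val_labels) * 100.
    if acc > best_acc:
        best_c, best_acc = c, acc

print(f"Best C = {best_c} (validation accuracy = {best_acc:.3f}%)")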

See Also