Top Related Projects
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
An open source implementation of CLIP.
Quick Overview
DINOv2 is a state-of-the-art self-supervised learning method for computer vision tasks. It builds upon the original DINO (self-DIstillation with NO labels) approach, offering improved performance and versatility across various vision applications. DINOv2 is designed to learn powerful visual representations without the need for labeled data.
Pros
- Achieves excellent performance on a wide range of computer vision tasks
- Requires no labeled data for training, making it suitable for scenarios with limited annotations
- Provides pre-trained models of various sizes, from small to extra-large
- Offers flexibility in terms of model architectures and downstream applications
Cons
- May require significant computational resources for training large models
- The complexity of the method might make it challenging for beginners to understand and implement
- Limited documentation and examples compared to some more established libraries
- Potential overfitting on specific datasets or domains if not carefully tuned
Code Examples
- Loading a pre-trained DINOv2 model via PyTorch Hub:
import torch
# Load a pre-trained ViT-S/14 backbone (weights are downloaded automatically)
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
model.eval()
- Extracting global image features (a patch-level variant is sketched after these examples):
from torchvision import transforms
from PIL import Image
# Prepare the image with standard ImageNet preprocessing
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = Image.open("path/to/image.jpg").convert("RGB")
input_tensor = transform(image).unsqueeze(0)
# Extract a single feature vector per image
with torch.no_grad():
    features = model(input_tensor)
- Fine-tuning DINOv2 for a custom classification task:
import torch.nn as nn
import torch.optim as optim
# Replace the identity head with a custom classification head
num_classes = 10
model.head = nn.Linear(model.embed_dim, num_classes)
# Prepare optimizer and loss function
optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Fine-tuning loop (simplified; assumes `num_epochs` and `dataloader` are defined)
for epoch in range(num_epochs):
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
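The examples above use the global feature vector returned by calling the backbone directly. Dense tasks such as depth estimation or segmentation instead consume per-patch features. A minimal sketch, assuming the hub model exposes the repository's forward_features method and its x_norm_patchtokens output (attribute names may differ between versions):
import torch
# Reuses `model` and `input_tensor` from the examples above
with torch.no_grad():
    out = model.forward_features(input_tensor)
    patch_tokens = out["x_norm_patchtokens"]  # shape: (1, num_patches, embed_dim)
# With a 224x224 input and 14-pixel patches, num_patches = (224 // 14) ** 2 = 256
print(patch_tokens.shape)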
Getting Started
To get started with DINOv2, follow these steps:
- Install PyTorch and torchvision (PyTorch is the only dependency needed to load the pretrained backbones):
pip install torch torchvision
- To use the training and evaluation code, clone the repository and install its requirements:
git clone https://github.com/facebookresearch/dinov2.git
cd dinov2
pip install -r requirements.txt
- Load a pre-trained model and use it for inference:
import torch
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
model.eval()
# Use the model for feature extraction or fine-tuning
For more detailed instructions and advanced usage, refer to the official DINOv2 repository and documentation.
Competitor Comparisons
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of transformers
- Broader scope: Supports a wide range of NLP tasks and models
- Extensive documentation and community support
- Regular updates and new model implementations
Cons of transformers
- Larger codebase, potentially more complex to navigate
- May have higher computational requirements for some models
Code Comparison
transformers:
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
dinov2:
import torch
# Pretrained weights are downloaded automatically via PyTorch Hub
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
Key Differences
- transformers focuses on NLP tasks, while dinov2 is primarily for computer vision
- transformers offers a unified API for various models, dinov2 is specific to DINOv2
- transformers has a more extensive ecosystem and integrations with other libraries
Use Cases
- transformers: Ideal for a wide range of NLP tasks and experimentation
- dinov2: Best for computer vision tasks, especially those benefiting from self-supervised learning
CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
Pros of CLIP
- Multimodal learning: CLIP can understand both images and text, enabling versatile applications
- Zero-shot learning capabilities: Can classify images into arbitrary categories without fine-tuning
- Efficient training on large-scale datasets: Leverages contrastive learning for better generalization
Cons of CLIP
- Limited to image-text pairs: DINOv2 can work with images alone, offering more flexibility
- Weaker fit for dense prediction out of the box: DINOv2 ships linear and DPT heads for depth estimation and segmentation
- Less focus on self-supervised learning: DINOv2 emphasizes self-supervised pretraining
Code Comparison
CLIP:
import clip
from PIL import Image
model, preprocess = clip.load("ViT-B/32")
image = preprocess(Image.open("image.jpg")).unsqueeze(0)
text = clip.tokenize(["a dog", "a cat"])
logits_per_image, logits_per_text = model(image, text)
DINOv2:
import torch
from PIL import Image
from torchvision import transforms
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
# Standard ImageNet preprocessing (resize, center-crop to 224, normalize)
preprocess = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
                                 transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
image = preprocess(Image.open("image.jpg").convert("RGB")).unsqueeze(0)
features = model(image)
Pros of vision_transformer
- Simpler implementation, focusing on core ViT architecture
- Easier to understand and modify for research purposes
- Includes pre-trained models for quick experimentation
Cons of vision_transformer
- Less comprehensive feature set compared to DINOv2
- Focused on supervised classification pretraining, while DINOv2's self-supervised features also transfer to dense tasks such as depth estimation and segmentation
- Fewer optimization techniques and advanced training strategies
Code Comparison
vision_transformer:
class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000,
                 embed_dim=768, depth=12, num_heads=12, mlp_ratio=4., qkv_bias=True,
                 representation_size=None, distilled=False, drop_rate=0.,
                 attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
                 act_layer=None, weight_init=''):
        super().__init__()
        # ... (implementation details)
DINOv2:
class DinoVisionTransformer(nn.Module):
    def __init__(
        self,
        img_size=224,
        patch_size=16,
        in_chans=3,
        embed_dim=768,
        depth=12,
        num_heads=12,
        mlp_ratio=4.0,
        qkv_bias=True,
        ffn_bias=True,
        proj_bias=True,
        distillation=False,
        drop_path_rate=0.0,
        init_values=None,
        embed_layer=PatchEmbed,
        act_layer=nn.GELU,
        block_fn=Block,
        ffn_layer="mlp",
        init_scale=0.001,
    ):
        super().__init__()
        # ... (implementation details)
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Pros of Swin-Transformer
- More efficient for high-resolution images due to its hierarchical structure
- Better performance on various vision tasks, especially object detection and segmentation
- Easier integration with existing CNN-based frameworks
Cons of Swin-Transformer
- Less versatile compared to DINOv2's self-supervised learning approach
- May require more task-specific fine-tuning for optimal performance
- Relies on supervised pretraining for its released models, while DINOv2 learns transferable features without any labels
Code Comparison
Swin-Transformer:
from swin_transformer import SwinTransformer
model = SwinTransformer(
    img_size=224,
    patch_size=4,
    in_chans=3,
    num_classes=1000,
    embed_dim=96,
    depths=[2, 2, 6, 2],
    num_heads=[3, 6, 12, 24],
    window_size=7,
)
DINOv2:
import torch
# Pretrained weights are downloaded automatically via PyTorch Hub
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
Both repositories provide powerful vision transformer architectures, but they differ in their approach and application. Swin-Transformer offers a more efficient solution for high-resolution images and traditional vision tasks, while DINOv2 focuses on self-supervised learning and general-purpose visual features that transfer across tasks without fine-tuning.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
Pros of pytorch-image-models
- Extensive collection of pre-trained models and architectures
- Regular updates and active community support
- Comprehensive documentation and examples
Cons of pytorch-image-models
- Focuses primarily on image classification tasks
- May require more setup and configuration for specific use cases
Code Comparison
pytorch-image-models:
import timm
model = timm.create_model('resnet50', pretrained=True)
output = model(input_tensor)
dinov2:
import torch
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
output = model(input_tensor)
Both repositories provide powerful tools for working with image models in PyTorch. pytorch-image-models offers a wide range of pre-trained models and is well-suited for various image classification tasks. dinov2, on the other hand, focuses on self-supervised learning and provides general-purpose features that also support dense tasks such as depth estimation and semantic segmentation.
While pytorch-image-models is more versatile and has a larger community, dinov2 excels in self-supervised learning scenarios and offers state-of-the-art performance in certain tasks. The choice between the two depends on the specific requirements of your project and the type of image-related tasks you're working on.
An open source implementation of CLIP.
Pros of open_clip
- More flexible and customizable, allowing users to train their own CLIP models
- Supports a wider range of model architectures and training configurations
- Provides extensive documentation and examples for various use cases
Cons of open_clip
- May require more setup and configuration compared to DINOv2's pre-trained models
- Potentially less optimized for specific tasks than DINOv2's self-supervised approach
- Could have a steeper learning curve for users new to CLIP models
Code Comparison
open_clip:
import open_clip
from PIL import Image
model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_s34b_b79k')
text = open_clip.tokenize(["a photo of a cat", "a photo of a dog"])
image = preprocess(Image.open("path/to/image.jpg")).unsqueeze(0)
DINOv2:
import torch
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
image = torch.randn(1, 3, 224, 224)
features = model(image)
Both repositories offer powerful vision models, but open_clip focuses on CLIP (Contrastive Language-Image Pre-training) models, while DINOv2 provides self-supervised vision transformers. open_clip offers more flexibility in training and customization, while DINOv2 may be easier to use out-of-the-box for certain vision tasks.
README
:new: [2023-10-26] Added DINOv2 backbones with registers, following Vision Transformers Need Registers.
DINOv2: Learning Robust Visual Features without Supervision
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Patrick Labatut, Armand Joulin, Piotr Bojanowski
[Paper #1] [Paper #2] [Blog] [Demo] [BibTeX]
PyTorch implementation and pretrained models for DINOv2. For details, see the papers: DINOv2: Learning Robust Visual Features without Supervision and Vision Transformers Need Registers.
DINOv2 models produce high-performance visual features that can be directly employed with classifiers as simple as linear layers on a variety of computer vision tasks; these visual features are robust and perform well across domains without any requirement for fine-tuning. The models were pretrained on a dataset of 142 M images without using any labels or annotations.
https://github.com/facebookresearch/dinov2/assets/60359573/f168823e-7922-415a-b429-578badf5c356
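Because the features work well with frozen backbones, a linear probe is often all that is needed. A minimal sketch of training a linear classifier on frozen DINOv2 features, assuming the backbone exposes its feature dimension as embed_dim (illustrative only; `dataloader` and the hyperparameters are placeholders, and the evaluation scripts below are the reference implementation):
import torch
import torch.nn as nn
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
for p in backbone.parameters():  # keep the backbone frozen
    p.requires_grad = False
head = nn.Linear(backbone.embed_dim, 1000)  # simple linear classifier on top of the features
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# `dataloader` is assumed to yield (images, labels) preprocessed to 224x224
for images, labels in dataloader:
    with torch.no_grad():
        feats = backbone(images)  # (batch, embed_dim) global features
    loss = criterion(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()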
Pretrained models
model | # of params | with registers | ImageNet k-NN | ImageNet linear | download
---|---|---|---|---|---
ViT-S/14 distilled | 21 M | :x: | 79.0% | 81.1% | backbone only
ViT-S/14 distilled | 21 M | :white_check_mark: | 79.1% | 80.9% | backbone only
ViT-B/14 distilled | 86 M | :x: | 82.1% | 84.5% | backbone only
ViT-B/14 distilled | 86 M | :white_check_mark: | 82.0% | 84.6% | backbone only
ViT-L/14 distilled | 300 M | :x: | 83.5% | 86.3% | backbone only
ViT-L/14 distilled | 300 M | :white_check_mark: | 83.8% | 86.7% | backbone only
ViT-g/14 | 1,100 M | :x: | 83.5% | 86.5% | backbone only
ViT-g/14 | 1,100 M | :white_check_mark: | 83.7% | 87.1% | backbone only
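These rows correspond to the PyTorch Hub entry points listed below (for example, ViT-S/14 with registers is dinov2_vits14_reg). As a quick sanity check, the parameter count of a loaded backbone can be compared against the table; a small sketch (the exact count differs slightly from the rounded figures):
import torch
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f} M parameters")  # roughly 21 M for ViT-S/14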
Pretrained backbones (via PyTorch Hub)
Please follow the official PyTorch instructions to install PyTorch (the only required dependency for loading the model). Installing PyTorch with CUDA support is strongly recommended.
A corresponding model card is included in the repository.
import torch
# DINOv2
dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')
# DINOv2 with registers
dinov2_vits14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_reg')
dinov2_vitb14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_reg')
dinov2_vitl14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_reg')
dinov2_vitg14_reg = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg')
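For a quick smoke test, a backbone can be run on a dummy batch; with a 224×224 input (a multiple of the 14-pixel patch size), the returned global feature has the backbone's embedding dimension (384 for ViT-S/14, 768 for ViT-B/14, 1024 for ViT-L/14, 1536 for ViT-g/14):
import torch
dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
x = torch.randn(1, 3, 224, 224)  # dummy image batch, side length divisible by 14
with torch.no_grad():
    features = dinov2_vits14(x)
print(features.shape)  # torch.Size([1, 384]) for ViT-S/14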
Pretrained heads - Image classification
backbone | with registers | download (ImageNet)
---|---|---
ViT-S/14 distilled | :x: | linear head (1 layer, 4 layers)
ViT-S/14 distilled | :white_check_mark: | linear head (1 layer, 4 layers)
ViT-B/14 distilled | :x: | linear head (1 layer, 4 layers)
ViT-B/14 distilled | :white_check_mark: | linear head (1 layer, 4 layers)
ViT-L/14 distilled | :x: | linear head (1 layer, 4 layers)
ViT-L/14 distilled | :white_check_mark: | linear head (1 layer, 4 layers)
ViT-g/14 | :x: | linear head (1 layer, 4 layers)
ViT-g/14 | :white_check_mark: | linear head (1 layer, 4 layers)
The (full) classifier models can be loaded via PyTorch Hub:
import torch
# DINOv2
dinov2_vits14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_lc')
dinov2_vitb14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_lc')
dinov2_vitl14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_lc')
dinov2_vitg14_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_lc')
# DINOv2 with registers
dinov2_vits14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_reg_lc')
dinov2_vitb14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14_reg_lc')
dinov2_vitl14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14_reg_lc')
dinov2_vitg14_reg_lc = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14_reg_lc')
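The _lc variants bundle the frozen backbone with its pretrained classification head and output ImageNet-1k logits. A small usage sketch, assuming the input has been resized and normalized with the standard ImageNet preprocessing:
import torch
classifier = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14_lc').eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    logits = classifier(x)  # logits over the 1,000 ImageNet-1k classes
predicted_class = logits.argmax(dim=-1)
print(predicted_class)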
Pretrained heads - Depth estimation
backbone | NYUd (download head) | KITTI (download head)
---|---|---
ViT-S/14 distilled | linear (1 layer, 4 layers), DPT | linear (1 layer, 4 layers), DPT
ViT-B/14 distilled | linear (1 layer, 4 layers), DPT | linear (1 layer, 4 layers), DPT
ViT-L/14 distilled | linear (1 layer, 4 layers), DPT | linear (1 layer, 4 layers), DPT
ViT-g/14 | linear (1 layer, 4 layers), DPT | linear (1 layer, 4 layers), DPT
Pretrained heads - Semantic segmentation
backbone | download model (ADE20K) | download head (ADE20K) | download head (VOC2012)
---|---|---|---
ViT-S/14 distilled | | linear, multi-scale | linear, multi-scale
ViT-B/14 distilled | | linear, multi-scale | linear, multi-scale
ViT-L/14 distilled | | linear, multi-scale | linear, multi-scale
ViT-g/14 | Mask2Former | linear, multi-scale | linear, multi-scale
Installation
The training and evaluation code requires PyTorch 2.0 and xFormers 0.0.18 as well as a number of other third-party packages. Note that the code has only been tested with the specified versions and also expects a Linux environment. To set up all the required dependencies for training and evaluation, please follow the instructions below:
conda (Recommended) - Clone the repository and then create and activate a dinov2 conda environment using the provided environment definition:
conda env create -f conda.yaml
conda activate dinov2
pip - Clone the repository and then use the provided requirements.txt to install the dependencies:
pip install -r requirements.txt
For dense tasks (depth estimation and semantic segmentation), there are additional dependencies (specific versions of mmcv and mmsegmentation) which are captured in the extras dependency specifications:
conda (Recommended):
conda env create -f conda-extras.yaml
conda activate dinov2-extras
pip:
pip install -r requirements.txt -r requirements-extras.txt
Data preparation
ImageNet-1k
The root directory of the dataset should hold the following contents:
<ROOT>/test/ILSVRC2012_test_00000001.JPEG
<ROOT>/test/[..]
<ROOT>/test/ILSVRC2012_test_00100000.JPEG
<ROOT>/train/n01440764/n01440764_10026.JPEG
<ROOT>/train/[...]
<ROOT>/train/n15075141/n15075141_9993.JPEG
<ROOT>/val/n01440764/ILSVRC2012_val_00000293.JPEG
<ROOT>/val/[...]
<ROOT>/val/n15075141/ILSVRC2012_val_00049174.JPEG
<ROOT>/labels.txt
The provided dataset implementation expects a few additional metadata files to be present under the extra directory:
<EXTRA>/class-ids-TRAIN.npy
<EXTRA>/class-ids-VAL.npy
<EXTRA>/class-names-TRAIN.npy
<EXTRA>/class-names-VAL.npy
<EXTRA>/entries-TEST.npy
<EXTRA>/entries-TRAIN.npy
<EXTRA>/entries-VAL.npy
These metadata files can be generated (once) with the following lines of Python code:
from dinov2.data.datasets import ImageNet
for split in ImageNet.Split:
    dataset = ImageNet(split=split, root="<ROOT>", extra="<EXTRA>")
    dataset.dump_extra()
Note that the root and extra directories do not have to be distinct directories.
ImageNet-22k
Please adapt the dataset class to match your local setup.
:warning: To execute the commands provided in the next sections for training and evaluation, the dinov2 package should be included in the Python module search path, i.e. simply prefix each command with PYTHONPATH=. (the current directory).
Training
Fast setup: training DINOv2 ViT-L/16 on ImageNet-1k
Run DINOv2 training on 4 A100-80GB nodes (32 GPUs) in a SLURM cluster environment with submitit:
python dinov2/run/train/train.py \
--nodes 4 \
--config-file dinov2/configs/train/vitl16_short.yaml \
--output-dir <PATH/TO/OUTPUT/DIR> \
train.dataset_path=ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
Training time is approximately 1 day and the resulting checkpoint should reach 81.6% on k-NN eval and 82.9% on linear eval.
The training code saves the weights of the teacher in the eval folder every 12500 iterations for evaluation.
Long setup: training DINOv2 ViT-L/14 on ImageNet-22k
Run DINOv2 training on 12 A100-80GB nodes (96 GPUs) in a SLURM cluster environment with submitit:
python dinov2/run/train/train.py \
--nodes 12 \
--config-file dinov2/configs/train/vitl14.yaml \
--output-dir <PATH/TO/OUTPUT/DIR> \
train.dataset_path=ImageNet22k:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
Training time is approximately 3.3 days and the resulting checkpoint should reach 82.0% on k-NN eval and 84.5% on linear eval.
The training code saves the weights of the teacher in the eval folder every 12500 iterations for evaluation.
Evaluation
The training code regularly saves the teacher weights. In order to evaluate the model, run the following evaluation on a single node:
k-NN classification on ImageNet-1k
python dinov2/run/eval/knn.py \
--config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
--pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
--output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/knn \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
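Conceptually, the k-NN evaluation classifies each validation image by comparing its frozen-backbone feature against the features of the training set and voting among the nearest neighbours. A toy sketch of the idea (not the repository's implementation, which uses temperature-weighted voting and distributed feature extraction):
import torch
import torch.nn.functional as F
def knn_predict(train_feats, train_labels, query_feats, k=20):
    # Features are assumed to come from the frozen backbone; compare with cosine similarity
    sims = F.normalize(query_feats, dim=1) @ F.normalize(train_feats, dim=1).T
    topk_sims, topk_idx = sims.topk(k, dim=1)
    topk_labels = train_labels[topk_idx]  # (num_queries, k)
    # Majority vote among the k nearest training features
    return torch.mode(topk_labels, dim=1).values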
Logistic regression classification on ImageNet-1k
python dinov2/run/eval/log_regression.py \
--config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
--pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
--output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/logreg \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
Linear classification with data augmentation on ImageNet-1k
python dinov2/run/eval/linear.py \
--config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
--pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
--output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/linear \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
We release the weights from evaluating the different models:
model | with registers | ImageNet top-1 | linear evaluation
---|---|---|---
ViT-S/14 distilled | :x: | 81.1% | linear head weights
ViT-S/14 distilled | :white_check_mark: | 80.8% | linear head weights
ViT-B/14 distilled | :x: | 84.5% | linear head weights
ViT-B/14 distilled | :white_check_mark: | 84.4% | linear head weights
ViT-L/14 distilled | :x: | 86.3% | linear head weights
ViT-L/14 distilled | :white_check_mark: | 86.5% | linear head weights
ViT-g/14 | :x: | 86.5% | linear head weights
ViT-g/14 | :white_check_mark: | 87.0% | linear head weights
The performance of the provided pretrained model weights can be evaluated as follows on ImageNet-1k:
python dinov2/run/eval/linear.py \
--config-file dinov2/configs/eval/vitg14_pretrain.yaml \
--pretrained-weights https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth \
--train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
--val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
Notebooks
A few notebooks are provided to help the community leverage the models and code:
- Depth estimation - How to load and use the depth heads in combination with a matching backbone via mmcv
- Semantic segmentation - How to load and use the segmentation heads in combination with a matching backbone via mmcv, and also how to load and use the Mask2Former-based segmentation model trained on ADE20K
License
DINOv2 code and model weights are released under the Apache License 2.0. See LICENSE for additional details.
Contributing
See contributing and the code of conduct.
Citing DINOv2
If you find this repository useful, please consider giving a star :star: and citation :t-rex::
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
journal={arXiv:2304.07193},
year={2023}
}
@misc{darcet2023vitneedreg,
title={Vision Transformers Need Registers},
author={Darcet, Timothée and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv:2309.16588},
year={2023}
}