huggingface/transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

141,749 stars · 28,366 forks

Top Related Projects

  • tensorflow/tensorflow (188,828 stars): An Open Source Machine Learning Framework for Everyone
  • pytorch/pytorch (88,135 stars): Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • microsoft/DeepSpeed (37,573 stars): DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • google-research/bert (38,880 stars): TensorFlow code and pre-trained models for BERT
  • facebookresearch/fairseq (30,829 stars): Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
  • explosion/spaCy (31,212 stars): 💫 Industrial-strength Natural Language Processing (NLP) in Python

Quick Overview

Hugging Face's Transformers library is a state-of-the-art natural language processing (NLP) toolkit. It provides thousands of pre-trained models for various NLP tasks, supporting multiple deep learning frameworks like PyTorch, TensorFlow, and JAX. The library offers a unified API for using these models, making it easy to download, train, and deploy cutting-edge NLP models.

Pros

  • Extensive collection of pre-trained models for various NLP tasks
  • Easy-to-use API with support for multiple deep learning frameworks
  • Active community and frequent updates
  • Comprehensive documentation and examples

Cons

  • Can be resource-intensive, especially for larger models
  • Learning curve for beginners due to the vast array of models and features
  • Dependency management can be complex
  • Some advanced features may require in-depth understanding of NLP concepts

Code Examples

  1. Loading and using a pre-trained model for sentiment analysis:
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love using Transformers!")
print(result)

  2. Fine-tuning a pre-trained model on a custom dataset:
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Assume 'train_dataset' and 'eval_dataset' are prepared
training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=16)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset)

trainer.train()

  3. Using a pre-trained model for text generation:
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

input_text = "Once upon a time"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=50, num_return_sequences=1)

generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

Getting Started

To get started with Transformers, follow these steps:

  1. Install the library:
pip install transformers

  2. Import and use a pre-trained model:
from transformers import pipeline

# Use a pre-trained model for named entity recognition
ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
text = "My name is Sarah and I work at Google in London."
result = ner(text)
print(result)

This quick start example demonstrates how to install the library and use a pre-trained model for named entity recognition. The Transformers library offers many more features and models, which you can explore in its documentation.

Competitor Comparisons

tensorflow/tensorflow (188,828 stars)

An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • More comprehensive ecosystem for end-to-end machine learning
  • Better support for deployment and production environments
  • Stronger performance optimization capabilities

Cons of TensorFlow

  • Steeper learning curve for beginners
  • Less focus on natural language processing tasks
  • More complex API compared to Transformers

Code Comparison

TensorFlow:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

Transformers:

from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

TensorFlow is a comprehensive machine learning framework that offers a wide range of tools and libraries for various ML tasks. It excels in performance optimization and deployment scenarios. However, it has a steeper learning curve and a more complex API.

Transformers, on the other hand, focuses specifically on natural language processing tasks and provides easy-to-use interfaces for working with pre-trained models. It offers a more straightforward API for NLP tasks but may not be as versatile for other machine learning applications.

The code comparison illustrates the difference in complexity and focus between the two libraries. TensorFlow requires more setup for creating a basic model, while Transformers allows for quick implementation of pre-trained NLP models.

pytorch/pytorch (88,135 stars)

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • More flexible and lower-level framework, allowing for greater customization
  • Broader scope, supporting a wide range of deep learning applications beyond NLP
  • Larger community and ecosystem, with more third-party libraries and tools

Cons of PyTorch

  • Steeper learning curve for beginners in machine learning
  • Requires more boilerplate code for common NLP tasks
  • Less streamlined API for working with pre-trained models and datasets

Code Comparison

PyTorch:

import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)

Transformers:

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
# 'inputs' is assumed to be a batch of tokenized tensors produced by a tokenizer
outputs = model(**inputs)

The PyTorch example shows a basic model definition, while the Transformers example demonstrates how easily pre-trained models can be loaded and used.

microsoft/DeepSpeed (37,573 stars)

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Focuses on optimizing large-scale model training and inference
  • Offers advanced distributed training techniques like ZeRO and 3D parallelism
  • Provides significant memory and computational efficiency improvements

Cons of DeepSpeed

  • Steeper learning curve and more complex setup compared to Transformers
  • Less extensive model support and pre-trained models availability
  • Requires more manual configuration for optimal performance

Code Comparison

DeepSpeed:

import deepspeed

# 'args', 'model', and 'params' are assumed to be defined elsewhere
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=params)

for step, batch in enumerate(data_loader):
    loss = model_engine(batch)  # the model's forward pass is assumed to return the loss
    model_engine.backward(loss)
    model_engine.step()

Transformers:

from transformers import AutoModelForSequenceClassification, Trainer
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()

DeepSpeed focuses on efficient large-scale training, offering advanced optimization techniques but with a steeper learning curve. Transformers provides a more user-friendly interface with extensive pre-trained model support, making it easier for quick prototyping and fine-tuning tasks. DeepSpeed is ideal for pushing the boundaries of model size and training efficiency, while Transformers excels in accessibility and rapid development for a wide range of NLP tasks.

google-research/bert (38,880 stars)

TensorFlow code and pre-trained models for BERT

Pros of BERT

  • Original implementation by Google Research, providing a reference point for the BERT model
  • Focused specifically on BERT, offering a streamlined codebase for this particular architecture
  • Includes pre-training scripts, allowing users to train BERT models from scratch

Cons of BERT

  • Limited to BERT model only, lacking support for other transformer architectures
  • Less actively maintained compared to Transformers, with fewer updates and contributions
  • Fewer features and utilities for downstream tasks and fine-tuning

Code Comparison

BERT:

# these modules ship with the google-research/bert repository
import modeling
import tokenization

bert_config = modeling.BertConfig.from_json_file("bert_config.json")
tokenizer = tokenization.FullTokenizer(vocab_file="vocab.txt", do_lower_case=True)

Transformers:

from transformers import BertConfig, BertTokenizer

config = BertConfig.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

The Transformers library offers a more user-friendly API with pre-trained models readily available, while BERT requires more manual setup and configuration.

facebookresearch/fairseq (30,829 stars)

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More focused on sequence-to-sequence tasks and machine translation
  • Offers advanced features for distributed training and mixed precision
  • Includes implementations of cutting-edge research papers from FAIR

Cons of fairseq

  • Steeper learning curve and less beginner-friendly documentation
  • Smaller community and fewer pre-trained models compared to Transformers
  • Less frequent updates and maintenance

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel

# 'args' and 'task' are assumed to come from fairseq's setup utilities;
# in practice a criterion computes the loss from the model's output
model = TransformerModel.build_model(args, task)
loss = model(src_tokens, src_lengths, prev_output_tokens, tgt)
loss.backward()

Transformers:

from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
# 'input_ids' and 'labels' are assumed to be prepared tensors
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
loss.backward()

Both libraries provide high-level APIs for working with transformer models, but Transformers offers a more streamlined approach with its AutoModel classes and easier access to pre-trained models. fairseq provides more flexibility in model architecture and training configurations, which can be beneficial for advanced users and researchers.

explosion/spaCy (31,212 stars)

💫 Industrial-strength Natural Language Processing (NLP) in Python

Pros of spaCy

  • Lightweight and efficient, optimized for production use
  • Comprehensive linguistic features (tokenization, POS tagging, dependency parsing)
  • Easy-to-use API with built-in visualizers

Cons of spaCy

  • Limited support for deep learning models compared to Transformers
  • Smaller community and ecosystem
  • Less flexibility for custom model architectures

Code Comparison

spaCy:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
    print(ent.text, ent.label_)

Transformers:

from transformers import pipeline

ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
text = "Apple is looking at buying U.K. startup for $1 billion"
results = ner(text)
for result in results:
    print(f"{result['word']} - {result['entity']}")

Both libraries offer NLP capabilities, but spaCy is more focused on efficient, production-ready processing of linguistic features, while Transformers provides a wider range of state-of-the-art deep learning models for various NLP tasks.


README

Hugging Face Transformers Library


English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी | Русский | Português | తెలుగు | Français | Deutsch | Tiếng Việt | العربية | اردو |

State-of-the-art pretrained models for inference and training

Transformers is a library of pretrained text, computer vision, audio, video, and multimodal models for inference and training. Use Transformers to fine-tune models on your data, build inference applications, and power generative AI use cases across multiple modalities.

There are 500K+ Transformers model checkpoints on the Hugging Face Hub that you can use.

Explore the Hub today to find a model and use Transformers to help you get started right away.
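
If you prefer to search programmatically, the companion huggingface_hub library (a separate pip install) can list checkpoints by task; a minimal sketch, assuming the current list_models API:

from huggingface_hub import list_models

# Print a few text-classification checkpoints hosted on the Hub
for model in list_models(filter="text-classification", limit=5):
    print(model.id)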

Installation

Transformers works with Python 3.9+, PyTorch 2.0+, TensorFlow 2.6+, and Flax 0.4.1+.

Create and activate a virtual environment with venv or uv, a fast Rust-based Python package and project manager.

# venv
python -m venv .my-env
source .my-env/bin/activate

# uv
uv venv .my-env
source .my-env/bin/activate

Install Transformers in your virtual environment.

# pip
pip install transformers

# uv
uv pip install transformers

Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the latest version may not be stable. Feel free to open an issue if you encounter an error.

git clone https://github.com/huggingface/transformers.git
cd transformers
pip install .
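
For contributors, an editable install (a standard pip feature, not specific to this repo) keeps local changes live without reinstalling:

# from inside the cloned repository
pip install -e .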

Quickstart

Get started with Transformers right away with the Pipeline API. The Pipeline is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.

Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it later. Finally, pass some text to prompt the model.

from transformers import pipeline

pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
pipeline("the secret to baking a really good cake is ")
[{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}]

To chat with a model, the usage pattern is the same. The only difference is you need to construct a chat history (the input to Pipeline) between you and the system.

Tip: You can also chat with a model directly from the command line:

transformers-cli chat --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct

import torch
from transformers import pipeline

chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])

The examples below show how Pipeline works for different modalities and tasks.

Automatic speech recognition

from transformers import pipeline

pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}

Image classification

from transformers import pipeline

pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
 {'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
  'score': 0.0016551691805943847},
 {'label': 'lorikeet', 'score': 0.00018523589824326336},
 {'label': 'African grey, African gray, Psittacus erithacus',
  'score': 7.85409429227002e-05},
 {'label': 'quail', 'score': 5.502637941390276e-05}]

Visual question answering

from transformers import pipeline

pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
    image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
    question="What is in the image?",
)
[{'answer': 'statue of liberty'}]

Why should I use Transformers?

  1. Easy-to-use state-of-the-art models:

    • High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
    • Low barrier to entry for researchers, engineers, and developers.
    • Few user-facing abstractions with just three classes to learn.
    • A unified API for using all our pretrained models.
  2. Lower compute costs, smaller carbon footprint:

    • Share trained models instead of training from scratch.
    • Reduce compute time and production costs.
    • Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
  3. Choose the right framework for every part of a model's lifetime:

    • Train state-of-the-art models in 3 lines of code.
    • Move a single model between PyTorch/JAX/TF2.0 frameworks at will (see the sketch after this list).
    • Pick the right framework for training, evaluation, and production.
  4. Easily customize a model or an example to your needs:

    • We provide examples for each architecture to reproduce the results published by its original authors.
    • Model internals are exposed as consistently as possible.
    • Model files can be used independently of the library for quick experiments.
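
As a rough illustration of that interoperability, the sketch below loads a checkpoint with PyTorch weights, saves it locally, and reloads the same weights in TensorFlow (assumes both frameworks are installed; the local path is arbitrary):

from transformers import AutoModel, TFAutoModel

# Load the checkpoint with PyTorch weights and save it to a local directory
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./bert-checkpoint")

# Reload the same weights as a TensorFlow model
tf_model = TFAutoModel.from_pretrained("./bert-checkpoint", from_pt=True)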

Why shouldn't I use Transformers?

  • This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
  • The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like Accelerate (a minimal loop is sketched after this list).
  • The example scripts are only examples. They may not necessarily work out-of-the-box on your specific use case and you'll need to adapt the code for it to work.
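
For reference, a generic Accelerate training loop looks roughly like this (a minimal sketch; the model, optimizer, and dataloader are assumed to be standard PyTorch objects defined elsewhere):

from accelerate import Accelerator

accelerator = Accelerator()

# Wrap the existing objects so Accelerate handles device placement for them
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

for batch in train_dataloader:
    optimizer.zero_grad()
    outputs = model(**batch)  # assumes a model whose output carries a .loss
    loss = outputs.loss
    accelerator.backward(loss)  # replaces loss.backward() so mixed precision and distribution still work
    optimizer.step()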

100 projects using Transformers

Transformers is more than a toolkit for using pretrained models; it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.

To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the community with the awesome-transformers page, which lists 100 incredible projects built with Transformers.

If you own or use a project that you believe should be part of the list, please open a PR to add it!

Example models

You can test most of our models directly on their Hub model pages.

A few example models for various use cases are listed below by modality.

Multimodal
  • Audio or text to text with Qwen2-Audio
  • Document question answering with LayoutLMv3
  • Image or text to text with Qwen-VL
  • Image captioning with BLIP-2
  • OCR-based document understanding with GOT-OCR2
  • Table question answering with TAPAS
  • Unified multimodal understanding and generation with Emu3
  • Vision to text with Llava-OneVision
  • Visual question answering with Llava
  • Visual referring expression segmentation with Kosmos-2
NLP
  • Masked word completion with ModernBERT
  • Named entity recognition with Gemma
  • Question answering with Mixtral
  • Summarization with BART (see the sketch after this list)
  • Translation with T5
  • Text generation with Llama
  • Text classification with Qwen
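
For instance, the BART summarizer above can be tried through the same Pipeline API used earlier (a minimal sketch; facebook/bart-large-cnn is a commonly used summarization checkpoint, assumed here):

from transformers import pipeline

# Summarize a passage with a BART checkpoint fine-tuned for summarization
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summarizer(
    "Transformers provides thousands of pretrained models to perform tasks on "
    "text, vision, and audio, with a unified API for downloading and training them.",
    max_length=30,
    min_length=10,
)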

Citation

We now have a paper you can cite for the 🤗 Transformers library:

@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
