PaddleFormers
PaddleFormers is an easy-to-use zoo of pre-trained large language models built on PaddlePaddle.
Top Related Projects
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Ongoing research training transformer models at scale
An open-source NLP research library, built on PyTorch.
TensorFlow code and pre-trained models for BERT
Quick Overview
PaddleFormers is an open-source library for natural language processing (NLP) tasks based on the PaddlePaddle deep learning framework. It provides a collection of pre-trained models and tools for various NLP applications, including text classification, named entity recognition, and machine translation.
Pros
- Offers a wide range of pre-trained models for different NLP tasks
- Built on PaddlePaddle, which provides efficient deep learning capabilities
- Includes easy-to-use APIs for quick implementation of NLP solutions
- Supports both Chinese and English language processing
Cons
- Less popular compared to other NLP libraries like Hugging Face Transformers
- Documentation and community support may be limited compared to more established libraries
- Primarily focused on PaddlePaddle ecosystem, which may limit integration with other frameworks
- Learning curve may be steeper for those unfamiliar with PaddlePaddle
Code Examples
- Text Classification:
from paddlenlp.transformers import ErnieForSequenceClassification, ErnieTokenizer
model = ErnieForSequenceClassification.from_pretrained('ernie-1.0', num_classes=2)
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
text = "This is a great movie!"
inputs = tokenizer(text, return_tensors="pd")  # return Paddle tensors so they can be fed to the model
outputs = model(**inputs)  # class logits of shape [batch_size, num_classes]
print(outputs)
- Named Entity Recognition:
from paddlenlp.transformers import ErnieForTokenClassification, ErnieTokenizer
model = ErnieForTokenClassification.from_pretrained('ernie-1.0')  # in practice, set num_classes to match your tag set
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
text = "Steve Jobs was the co-founder of Apple Inc."
inputs = tokenizer(text, return_tensors="pd")
outputs = model(**inputs)  # per-token logits
print(outputs)
- Machine Translation:
from paddlenlp.transformers import MBartForConditionalGeneration, MBartTokenizer
model = MBartForConditionalGeneration.from_pretrained('mbart-large-cc25')  # pretrained denoising checkpoint; use a translation fine-tuned mBART for real MT
tokenizer = MBartTokenizer.from_pretrained('mbart-large-cc25')
src_text = "Hello, how are you?"
inputs = tokenizer(src_text, return_tensors="pd")
outputs = model.generate(**inputs)  # generate returns a (token_ids, scores) tuple
translated_text = tokenizer.batch_decode(outputs[0], skip_special_tokens=True)[0]
print(translated_text)
Getting Started
To get started with PaddleFormers:
- Install PaddlePaddle and PaddleNLP:
pip install paddlepaddle paddlenlp
- Import the required modules:
from paddlenlp.transformers import ErnieForSequenceClassification, ErnieTokenizer
- Load a pre-trained model and tokenizer:
model = ErnieForSequenceClassification.from_pretrained('ernie-1.0')
tokenizer = ErnieTokenizer.from_pretrained('ernie-1.0')
- Process your text and get predictions:
text = "Your input text here"
inputs = tokenizer(text, return_tensors="pd")
outputs = model(**inputs)
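What you do with the outputs depends on the model head; as a minimal sketch for the sequence-classification model above (assuming the forward pass returns class logits), the predicted label index can be read off with a softmax and argmax:
import paddle
# outputs holds class logits of shape [batch_size, num_classes]
probs = paddle.nn.functional.softmax(outputs, axis=-1)
pred_label = paddle.argmax(probs, axis=-1).item()
print(pred_label)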
For more detailed instructions and examples, refer to the PaddleNLP documentation and examples in the GitHub repository.
Competitor Comparisons
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Pros of transformers
- Larger community and more extensive documentation
- Supports multiple deep learning frameworks (PyTorch, TensorFlow, JAX)
- More comprehensive model zoo with pre-trained models
Cons of transformers
- Can be more complex for beginners due to its extensive features
- Potentially slower inference speed compared to PaddleFormers
- Larger package size and dependencies
Code Comparison
transformers:
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
PaddleFormers:
from paddlenlp.transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
The code usage is quite similar between the two libraries, with the main difference being the import statement: transformers uses the transformers package, while PaddleFormers uses paddlenlp.transformers. Both libraries provide similar APIs for loading pre-trained models and tokenizers, making it relatively easy for users to switch between them if needed.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- More extensive optimization techniques, including ZeRO-Offload and 3D parallelism
- Better support for large-scale distributed training across multiple GPUs and nodes
- More active development and frequent updates
Cons of DeepSpeed
- Steeper learning curve due to more advanced features
- Primarily focused on PyTorch, whereas PaddleFormers targets the PaddlePaddle framework
- May require more fine-tuning for optimal performance in specific use cases
Code Comparison
DeepSpeed:
import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(args=args, model=model, model_parameters=params)
for step, batch in enumerate(data_loader):
    loss = model_engine(batch)
    model_engine.backward(loss)
    model_engine.step()
PaddleFormers:
import paddle
from paddlenlp.transformers import ErnieForSequenceClassification
model = ErnieForSequenceClassification.from_pretrained('ernie-1.0')
optimizer = paddle.optimizer.AdamW(learning_rate=0.0001, parameters=model.parameters())
for batch in train_data_loader:
    loss = model(input_ids=batch['input_ids'], labels=batch['labels'])
    loss.backward()
    optimizer.step()
    optimizer.clear_grad()
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Pros of fairseq
- More extensive documentation and examples
- Larger community and more frequent updates
- Supports a wider range of NLP tasks and architectures
Cons of fairseq
- Steeper learning curve for beginners
- Requires more computational resources for some models
- Less integrated with other deep learning frameworks
Code Comparison
PaddleFormers:
import paddle
from paddlenlp.transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_classes=2)
fairseq:
from fairseq.models.roberta import RobertaModel
roberta = RobertaModel.from_pretrained('/path/to/roberta/model', checkpoint_file='model.pt')
Both repositories provide high-level APIs for working with transformer models, but fairseq offers more flexibility in model architecture and training options. PaddleFormers is more tightly integrated with the PaddlePaddle ecosystem, making it easier to use for those already familiar with that framework.
fairseq has a larger collection of pre-implemented models and supports more advanced features like distributed training and mixed precision. However, PaddleFormers may be more accessible for users in certain regions due to its origins in the Chinese tech industry.
Ultimately, the choice between these repositories depends on the specific requirements of your project, your familiarity with the underlying frameworks, and the level of customization you need.
Ongoing research training transformer models at scale
Pros of Megatron-LM
- Optimized for NVIDIA GPUs, offering better performance on NVIDIA hardware
- Supports larger model sizes and distributed training across multiple GPUs
- More extensive documentation and examples for various model architectures
Cons of Megatron-LM
- Limited to NVIDIA hardware, reducing flexibility for users with different setups
- Steeper learning curve due to its focus on large-scale models and distributed training
- Less integration with other deep learning frameworks compared to PaddleFormers
Code Comparison
Megatron-LM (model initialization):
model = get_language_model(
    attention_mask_func, num_tokentypes=num_tokentypes,
    add_pooler=add_pooler, init_method=init_method,
    scaled_init_method=scaled_init_method)
PaddleFormers (model initialization):
model = AutoModelForSequenceClassification.from_pretrained(
    model_name_or_path,
    num_classes=num_classes)
Both repositories provide powerful tools for working with transformer-based models, but they cater to different use cases. Megatron-LM is more focused on large-scale models and distributed training, while PaddleFormers offers a more user-friendly approach with easier integration into existing workflows. The choice between the two depends on the specific requirements of your project and the available hardware resources.
An open-source NLP research library, built on PyTorch.
Pros of AllenNLP
- More extensive documentation and tutorials
- Larger community and ecosystem of pre-built models
- Better integration with PyTorch and other popular NLP libraries
Cons of AllenNLP
- Steeper learning curve for beginners
- Less focus on performance optimization compared to PaddleFormers
- More complex setup and configuration process
Code Comparison
AllenNLP:
from typing import Iterable

from allennlp.data import DatasetReader, Instance
from allennlp.data.fields import TextField
from allennlp.data.token_indexers import SingleIdTokenIndexer

class MyDatasetReader(DatasetReader):
    def _read(self, file_path: str) -> Iterable[Instance]:
        with open(file_path, "r") as f:
            for line in f:
                yield self.text_to_instance(line.strip())
PaddleFormers:
from paddlenlp.datasets import MapDataset

class MyDataset(MapDataset):
    def __init__(self, data_path):
        with open(data_path, 'r', encoding='utf-8') as f:
            lines = f.readlines()
        super().__init__(lines)

    def __getitem__(self, idx):
        return {"text": self.data[idx].strip()}
TensorFlow code and pre-trained models for BERT
Pros of BERT
- Widely adopted and well-documented, with extensive research and community support
- Provides pre-trained models for various languages and tasks
- Offers a straightforward implementation of the BERT architecture
Cons of BERT
- Limited to BERT-specific models and tasks
- Less flexibility for customization and experimentation with different architectures
- Older codebase with fewer recent updates
Code Comparison
BERT:
import tensorflow as tf
from bert import modeling
bert_config = modeling.BertConfig.from_json_file("bert_config.json")
model = modeling.BertModel(config=bert_config, is_training=True, input_ids=input_ids)
PaddleFormers:
import paddle
from paddlenlp.transformers import BertModel
model = BertModel.from_pretrained('bert-base-uncased')
input_ids = paddle.to_tensor([[1, 2, 3, 4, 5, 6]])
output = model(input_ids)
PaddleFormers offers a more modern and flexible approach, supporting various transformer architectures beyond BERT. It provides easier integration with the PaddlePaddle framework and includes more recent advancements in NLP. However, BERT remains a solid choice for those specifically focused on BERT-based models and looking for a well-established implementation.
README
News | Highlights | Installation | Quickstart | Community
PaddleFormers is a Transformer model library built on the PaddlePaddle deep learning framework, delivering both ease of use and high-performance capabilities. It provides a unified model definition interface, modular training components, and comprehensive distributed training strategies specifically designed for large language model development pipelines. This enables developers to train large models efficiently with minimal complexity, making it suitable for diverse scenarios ranging from academic research to industrial applications.
News
[2025/06/28] PaddleFormers 0.1 is officially released! This initial version supports SFT/DPO training paradigms, configurable distributed training via unified Trainer API, and integrates PEFT, MergeKit, and Quantization APIs for diverse LLM applications.
Highlights
Simplified Distributed Training
Implements 4D parallel strategies through unified Trainer API, lowering the barrier to distributed LLM training.
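As a rough illustration of how this might be configured (a sketch only: the parallelism fields tensor_parallel_degree, pipeline_parallel_degree, and sharding follow PaddleNLP-style TrainingArguments naming and are assumptions, not confirmed PaddleFormers API), distributed settings are passed through the same config object used for single-device training:
from paddleformers.trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("ZHUI/alpaca_demo", split="train")

# Assumed field names; consult the SFTConfig/TrainingArguments reference for the exact flags.
training_args = SFTConfig(
    output_dir="./checkpoints/sft-parallel",
    tensor_parallel_degree=2,    # split each layer's weights across 2 GPUs (assumed flag)
    pipeline_parallel_degree=2,  # split the layer stack into 2 pipeline stages (assumed flag)
    sharding="stage2",           # ZeRO-style optimizer-state sharding (assumed flag)
)

trainer = SFTTrainer(args=training_args, model="Qwen/Qwen2.5-0.5B-Instruct", train_dataset=dataset)
trainer.train()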
Efficient Post-Training
Integrates Packing dataflow and FlashMask operators for SFT/DPO training, eliminating padding waste and boosting throughput.
Industrial Storage Solution
Features Unified Checkpoint storage tools for LLMs, enabling training resumption and dynamic resource scaling. Additionally implements asynchronous storage (up to 95% faster) and Optimizer State Quantization (78% storage reduction), ensuring industrial training meets both efficiency and stability requirements.
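Continuing from the trainer sketched above, resuming an interrupted run would then be a single call; resume_from_checkpoint is an assumption carried over from the PaddleNLP/Hugging Face Trainer convention rather than a documented PaddleFormers guarantee:
# Pick up from the latest checkpoint written under output_dir (assumed argument name).
trainer.train(resume_from_checkpoint=True)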
Installation
Requires Python 3.8+ and PaddlePaddle 3.1+.
# Install via pip
pip install paddleformers
# Install development version
git clone https://github.com/PaddlePaddle/PaddleFormers.git
cd PaddleFormers
pip install -e .
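A quick post-install sanity check (assuming the package exposes __version__, as most Python packages do):
import paddleformers

# Assumed attribute: verify the installed version is importable.
print(paddleformers.__version__)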
Quickstart
Text Generation
This example shows how to load a Qwen model for text generation with the PaddleFormers Auto API:
from paddleformers.transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="bfloat16")
input_features = tokenizer("Give me a short introduction to large language model.", return_tensors="pd")
outputs = model.generate(**input_features, max_new_tokens=128)
print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
SFT Training
Getting started with supervised fine-tuning (SFT) using PaddleFormers:
from paddleformers.trl import SFTConfig, SFTTrainer
from datasets import load_dataset
dataset = load_dataset("ZHUI/alpaca_demo", split="train")
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", device="gpu")
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B-Instruct",
    train_dataset=dataset,
)
trainer.train()
Community
We welcome all contributions! See CONTRIBUTING.md for guidelines.
License
This repository's source code is available under the Apache 2.0 License.