text-to-text-transfer-transformer
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
Top Related Projects
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
An open-source NLP research library, built on PyTorch.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
Quick Overview
The Text-to-Text Transfer Transformer (T5) is a unified framework for natural language processing tasks developed by Google Research. It treats every text-based language problem as a "text-to-text" task, allowing for a single model architecture and training procedure to be used across a wide range of NLP applications.
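To make this concrete, here are a few illustrative input/target pairs showing how different tasks are cast into the same text-to-text format. The task prefixes follow the conventions used in the paper; the specific sentences are loosely based on the paper's introductory examples and are not drawn from real data:
# Illustrative (input, target) pairs in T5's text-to-text format.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("cola sentence: The course is jumping well.", "not acceptable"),
    ("stsb sentence1: The rhino grazed. sentence2: A rhino is grazing in a field.", "3.8"),
    ("summarize: state authorities dispatched emergency crews tuesday to survey the damage ...",
     "six people hospitalized after a storm in attala county"),
]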
Pros
- Versatile: Can be applied to various NLP tasks without task-specific architectures
- State-of-the-art performance: Achieved state-of-the-art results on many NLP benchmarks, including GLUE, SuperGLUE, SQuAD, and CNN/Daily Mail summarization
- Pre-trained models available: Offers pre-trained models of different sizes for various use cases
- Open-source: Allows for community contributions and improvements
Cons
- Resource-intensive: Requires significant computational resources for training and fine-tuning
- Complex setup: Initial setup and configuration can be challenging for beginners
- Limited documentation: Some aspects of the project may lack detailed explanations
- Large model sizes: Pre-trained models can be very large, making deployment challenging in resource-constrained environments
Code Examples
- Loading a pre-trained T5 model:
import tensorflow as tf
import t5
model = t5.models.MtfModel(
    model_dir="gs://t5-data/pretrained_models/base",
    tpu=None
)
- Performing text generation:
inputs = ["translate English to German: The house is wonderful."]
outputs = model.predict(inputs)
print(outputs[0]) # Output: "Das Haus ist wunderbar."
- Fine-tuning T5 on a custom dataset:
import functools
def dataset_fn(split, shuffle_files=False):
    return tf.data.TextLineDataset(
        ["path/to/your/dataset.txt"]
    ).map(functools.partial(
        t5.data.preprocessors.parse_tsv,
        field_names=["inputs", "targets"]
    ))
model.finetune(
    # "custom_task" must first be registered (e.g., via t5.data.TaskRegistry)
    # as a Task that uses dataset_fn as its data source.
    mixture_or_task_name="custom_task",
    pretrained_model_dir="gs://t5-data/pretrained_models/base",
    finetune_steps=1000
)
Getting Started
To get started with T5:
- Install the required packages:
pip install t5[gcp]
- Download a pre-trained model:
import t5
import tensorflow as tf
model = t5.models.MtfModel(
    model_dir="gs://t5-data/pretrained_models/small",
    tpu=None
)
- Use the model for inference:
inputs = ["summarize: article goes here"]
outputs = model.predict(inputs)
print(outputs[0])
For more detailed instructions and advanced usage, refer to the project's GitHub repository and documentation.
Competitor Comparisons
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of transformers
- Broader coverage of transformer models and tasks
- More active community and frequent updates
- Easier integration with popular deep learning frameworks
Cons of transformers
- Potentially more complex for beginners due to extensive options
- May have higher computational requirements for some models
Code Comparison
text-to-text-transfer-transformer:
import t5
model = t5.models.MtfModel(model_dir="t5-base", batch_size=4)
inputs = ["translate English to German: Hello, how are you?"]
outputs = model.predict(inputs)
transformers:
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
input_ids = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
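# Decode the generated token ids back to text (output shown is illustrative):
print(tokenizer.decode(outputs[0], skip_special_tokens=True))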
Both repositories provide powerful tools for working with transformer models, but transformers offers a wider range of models and tasks. text-to-text-transfer-transformer focuses specifically on T5 models and may be more straightforward for users primarily interested in T5-based tasks. The code comparison shows that both libraries allow for easy model loading and inference, with transformers requiring separate tokenizer initialization.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- Offers more advanced optimization techniques like ZeRO (Zero Redundancy Optimizer) for efficient large-scale model training
- Provides a more comprehensive suite of tools for distributed training and inference
- Supports a wider range of hardware configurations and cloud platforms
Cons of DeepSpeed
- Has a steeper learning curve due to its more complex architecture and features
- May require more setup and configuration for optimal performance
- Less focused on specific NLP tasks compared to T5's pre-trained models
Code Comparison
T5:
import t5
model = t5.models.MtfModel(
    model_dir="gs://t5-data/pretrained_models/base",
    tpu=None
)
DeepSpeed:
import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params
)
Both repositories aim to improve the efficiency of large-scale model training, but they approach the problem differently. T5 focuses on transfer learning for NLP tasks, while DeepSpeed provides a more general-purpose optimization toolkit for distributed deep learning. The choice between them depends on the specific requirements of your project and the level of control you need over the training process.
An open-source NLP research library, built on PyTorch.
Pros of AllenNLP
- More comprehensive NLP toolkit with a wider range of pre-built models and tasks
- Easier to use for researchers and practitioners with less deep learning experience
- Better documentation and tutorials for getting started quickly
Cons of AllenNLP
- Less flexible for advanced users who want to customize the underlying architecture
- May have slower performance for some tasks compared to T5's efficient implementation
- Smaller community and fewer pre-trained models available
Code Comparison
AllenNLP:
from allennlp.predictors import Predictor
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.03.24.tar.gz")
result = predictor.predict(sentence="Did Uriah honestly think he could beat the game in under three hours?")
Text-to-Text Transfer Transformer:
import t5
import tensorflow.compat.v1 as tf
model = t5.models.MtfModel(model_dir="t5-base", batch_size=1, tpu=None)
inputs = ["translate English to German: That is good."]
outputs = model.predict(inputs, output_length=50)
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Pros of fairseq
- Broader scope: Supports a wide range of sequence-to-sequence tasks beyond just text-to-text
- More flexible architecture: Allows for easier customization and extension of models
- Extensive documentation and examples: Provides comprehensive guides and tutorials
Cons of fairseq
- Steeper learning curve: Requires more in-depth understanding of NLP concepts
- Less focus on transfer learning: Not specifically designed for zero-shot task adaptation
- Potentially more complex setup: May require additional dependencies and configuration
Code Comparison
fairseq:
from fairseq.models.transformer import TransformerModel
model = TransformerModel.from_pretrained('/path/to/model', checkpoint_file='model.pt')
translations = model.translate(['Hello world!'])
text-to-text-transfer-transformer:
import t5
model = t5.models.MtfModel(model_dir="t5-base", batch_size=1, tpu=None)
outputs = model.predict(["translate English to German: Hello world!"])
Both repositories provide powerful tools for natural language processing tasks, with fairseq offering more flexibility and a broader scope, while text-to-text-transfer-transformer focuses on simplicity and transfer learning capabilities.
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Pros of tensor2tensor
- Broader scope, covering a wide range of machine learning tasks beyond text-to-text
- More established project with a larger community and extensive documentation
- Includes pre-trained models and datasets for various tasks
Cons of tensor2tensor
- Less focused on text-to-text tasks specifically
- May be more complex to use for newcomers due to its broader scope
- Slower development cycle compared to text-to-text-transfer-transformer
Code Comparison
text-to-text-transfer-transformer:
import t5
model = t5.models.MtfModel(
    model_dir="model_dir",
    tpu="tpu_name"
)
tensor2tensor:
from tensor2tensor.utils import trainer_lib
from tensor2tensor.utils import registry
problem = registry.problem("translate_ende_wmt32k")
hparams = trainer_lib.create_hparams("transformer_base")
The code snippets show that text-to-text-transfer-transformer focuses on a simpler API for text-to-text tasks, while tensor2tensor offers a more general-purpose approach with problem definitions and hyperparameters.
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
Pros of gpt-neox
- Designed specifically for training large language models, optimized for scale
- Implements advanced techniques like ZeRO-3 and 3D parallelism for efficient training
- Provides tools for dataset preparation and tokenization
Cons of gpt-neox
- More complex setup and configuration compared to T5
- Less versatile, primarily focused on autoregressive language modeling
- May require more computational resources for training
Code Comparison
T5 (text-to-text-transfer-transformer):
import t5
model = t5.models.MtfModel(
model_dir="path/to/model",
tpu="tpu_name"
)
gpt-neox:
from megatron.neox_arguments import NeoXArgs
from megatron.global_vars import set_global_variables
args = NeoXArgs.from_ymls("path/to/config.yml")
set_global_variables(args)
Both repositories offer powerful tools for working with large language models, but they have different focuses. T5 is more versatile and easier to use for various text-to-text tasks, while gpt-neox is optimized for training very large autoregressive models efficiently. The choice between them depends on the specific requirements of your project and the available computational resources.
README
T5: Text-To-Text Transfer Transformer
As of July 2022, we recommend using T5X:
T5X is the new and improved implementation of T5 (and more) in JAX and Flax. T5 on TensorFlow with MeshTF is no longer actively developed. If you are new to T5, we recommend starting with T5X.
The t5 library serves primarily as code for reproducing the experiments in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. In the paper, we demonstrate how to achieve state-of-the-art results on multiple NLP tasks using a text-to-text transformer pre-trained on a large text corpus.
The bulk of the code in this repository is used for loading, preprocessing, mixing, and evaluating datasets. It also provides a way to fine-tune the pre-trained models released alongside the publication.
The t5 library can be used for future model development by providing useful modules for training and fine-tuning (potentially huge) models on mixtures of text-to-text tasks.
Library
t5.data
t5.data is a package for defining Task objects that provide tf.data.Datasets.
Each Task is made up of:
- a data source
- text preprocessor function(s)
- a SentencePiece model
- metric function(s)
Additionally, you may optionally provide:
- token preprocessor function(s)
- postprocess function(s)
The data source can be an arbitrary function that provides a tf.data.Dataset, but we also provide simpler wrappers for datasets available in TensorFlow Datasets (TFDS) (a TfdsTask) or stored as text files with one example per line (a TextLineTask).
The text preprocessor converts the examples in the source dataset into the appropriate format for a text-to-text model with fields for inputs and targets. For example, the predefined t5.data.preprocessors.translate preprocessor converts inputs in the form
{'de': 'Das ist gut.', 'en': 'That is good.'}
to the form
{'inputs': 'translate German to English: Das ist gut.', 'targets': 'That is good.'}
In addition to text preprocessing, you can also use one or more token preprocessors to modify the inputs post-tokenization. We implemented our unsupervised pre-training objectives using these token preprocessors.
We provide many predefined preprocessors in t5.data.preprocessors, but you may also define your own.
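As a rough sketch of what a custom text preprocessor can look like, the function below maps a tf.data.Dataset of raw examples to the inputs/targets format; the feature names ("article", "highlights") and the task prefix are hypothetical placeholders, not part of the library:
import tensorflow as tf

def summarize_preprocessor(ds):
    """Maps {'article': ..., 'highlights': ...} examples to text-to-text form."""
    def to_inputs_and_targets(ex):
        return {
            "inputs": tf.strings.join(["summarize: ", ex["article"]]),
            "targets": ex["highlights"],
        }
    return ds.map(to_inputs_and_targets,
                  num_parallel_calls=tf.data.experimental.AUTOTUNE)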
The SentencePiece model is used to tokenize the input strings and decode the output tokens. You can create your own model with the google/sentencepiece library, or use our default one at t5.data.DEFAULT_SPM_PATH. If you create your own, you must use the flags --pad_id=0 --eos_id=1 --unk_id=2 --bos_id=-1 with spm_train to be compatible with our model code.
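If you train your own vocabulary, the sketch below uses the sentencepiece Python package with the required special-token IDs; the corpus path, model prefix, and vocabulary size are placeholders, and the equivalent spm_train command-line flags work just as well:
import sentencepiece as spm

# Train a SentencePiece model that is compatible with the T5 model code.
spm.SentencePieceTrainer.train(
    input="my_corpus.txt",    # placeholder: plain text, one sentence per line
    model_prefix="my_spm",    # writes my_spm.model and my_spm.vocab
    vocab_size=32000,         # placeholder vocabulary size
    pad_id=0, eos_id=1, unk_id=2, bos_id=-1,  # required for compatibility
)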
The metric function returns a score given the target and prediction from the model. You may also define a postprocess function to convert the target and prediction text to another format before calling the metric. We provide some predefined metrics in t5.evaluation.metrics.
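Metric functions follow a simple (targets, predictions) -> dict-of-scores convention. A minimal sketch of a custom exact-match metric (the function name is hypothetical; the predefined metrics in t5.evaluation.metrics have the same shape):
def exact_match(targets, predictions):
    """Returns the percentage of predictions that exactly match their targets."""
    matches = sum(t == p for t, p in zip(targets, predictions))
    return {"exact_match": 100.0 * matches / max(len(targets), 1)}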
Finally, t5.data contains a Mixture class that can be instantiated to combine multiple Task datasets for multi-task training using various functions for specifying the mixture rates.
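A hedged sketch of registering a Mixture from existing tasks (the mixture and task names are placeholders, and the available ways of specifying rates may vary between library versions):
import t5

# Combine two previously registered Tasks, sampling from each at the same rate.
t5.data.MixtureRegistry.add(
    "my_mixture",          # hypothetical mixture name
    ["task_a", "task_b"],  # names of registered Tasks
    default_rate=1.0,
)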
t5.evaluation
t5.evaluation contains two core components:
- metrics to be used during evaluation
- utilities for applying these metrics at evaluation time
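As a small illustration, each predefined metric takes lists of targets and predictions and returns a dictionary of scores (the labels below are made up):
from t5.evaluation import metrics

targets = ["positive", "negative"]
predictions = ["positive", "positive"]
print(metrics.accuracy(targets, predictions))  # e.g. {'accuracy': 50.0}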
t5.models
t5.models contains shims for connecting T5 Tasks and Mixtures to a model implementation for training, evaluation, and inference.
Currently there are two shims available: One for the Mesh TensorFlow Transformer that we used in our paper and another for the Hugging Face Transformers library.
The Hugging Face API is currently experimental and subject to change, but provides a simple and easy way to load, fine-tune, and evaluate our pre-trained models using PyTorch on a single GPU.
If you want to use our largest models on TPUs and/or reproduce the results in our paper, you should use the MtfModel API and the t5_mesh_transformer binary.
If you are interested in fine-tuning our models on a GPU in PyTorch, you should try the HfPyTorchModel API.
Since the HfPyTorchModel is experimental, the remainder of this README assumes usage of the MtfModel and its associated binary.
A usage example of HfPyTorchModel is available here.
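For orientation, a minimal hedged sketch of the HfPyTorchModel API (the model directory, sequence lengths, and step counts are placeholders; because the API is experimental, argument names may differ from the current code):
import functools
import torch
import transformers
import t5

device = "cuda" if torch.cuda.is_available() else "cpu"
model = t5.models.HfPyTorchModel("t5-base", "/tmp/hft5/", device)

# Fine-tune on a registered mixture for a small number of steps.
model.train(
    mixture_or_task_name="glue_mrpc_v002",
    steps=1000,
    save_steps=500,
    sequence_length={"inputs": 128, "targets": 16},
    split="train",
    batch_size=32,
    optimizer=functools.partial(transformers.AdamW, lr=1e-4),
)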
Usage
The easiest way to try out T5 is with a free TPU in our Colab Tutorial.
Below we provide examples for how to pre-train, fine-tune, evaluate, and decode from a model from the command-line with our codebase. You can use these instructions to reproduce our results, fine-tune one of our released checkpoints with your own data and/or hyperparameters, or pre-train a model from scratch.
Dataset Preparation
You may either use a new or pre-existing Task, or you may load examples from a preprocessed TSV file.
Using a Task
Depending on your data source (see above), you will need to prepare your data appropriately.
Task
If using a vanilla task, just make sure any file(s) loaded by your dataset_fn are accessible to the TPU (i.e., are in a GCS bucket), and you should be good to go!
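A hedged sketch of registering such a Task with t5.data.TaskRegistry, using a toy in-memory dataset function (the task name, features, prefix, and metric are all placeholders; in practice any files read inside dataset_fn should live in GCS, and keyword arguments may differ between library versions):
import t5
import tensorflow as tf

def qa_dataset_fn(split, shuffle_files=False):
    # Hypothetical data source: any function returning a tf.data.Dataset works.
    examples = {
        "question": ["Where is the Eiffel Tower?"],
        "answer": ["Paris"],
    }
    return tf.data.Dataset.from_tensor_slices(examples)

def qa_preprocessor(ds):
    # Convert raw features into the text-to-text "inputs"/"targets" format.
    return ds.map(lambda ex: {
        "inputs": tf.strings.join(["trivia question: ", ex["question"]]),
        "targets": ex["answer"],
    })

t5.data.TaskRegistry.add(
    "my_trivia_task",              # hypothetical task name
    dataset_fn=qa_dataset_fn,
    splits=["train", "validation"],
    text_preprocessor=[qa_preprocessor],
    metric_fns=[t5.evaluation.metrics.accuracy],
)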
TfdsTask
Most of our predefined Tasks use TensorFlow Datasets (TFDS) as their data source. When you run our training binary (see instructions below) with a TfdsTask, the dataset will automatically be downloaded and prepared on its first use. After preparation is complete, the dataset is cached to your local storage to avoid this overhead in future runs. If working in the cloud, we recommend you set the --t5_tfds_data_dir flag to point to a persistent storage location, such as a GCS bucket. This is a requirement when training on TPU.
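A hedged sketch of defining a TfdsTask (the task name, TFDS dataset/version, preprocessor arguments, and metric are illustrative; keyword arguments may differ between library versions):
import functools
import t5

t5.data.TaskRegistry.add(
    "my_wmt_de_en",                              # hypothetical task name
    t5.data.TfdsTask,
    tfds_name="wmt_t2t_translate/de-en:1.0.0",   # any dataset available in TFDS
    text_preprocessor=[
        functools.partial(
            t5.data.preprocessors.translate,
            source_language="de", target_language="en"),
    ],
    metric_fns=[t5.evaluation.metrics.bleu],
)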
C4
The C4 dataset we created for unsupervised pre-training is available in TensorFlow Datasets, but it requires a significant amount of bandwidth for downloading the raw Common Crawl scrapes (~7 TB) and compute for its preparation (~335 CPU-days). We suggest you take advantage of the Apache Beam support in TFDS, which enables distributed preprocessing of the dataset and can be run on Google Cloud Dataflow. With 500 workers, the job should complete in ~16 hours.
After defining MY_PROJECT, MY_BUCKET, and MY_REGION appropriately, you can build the dataset in Dataflow from GCP using the following commands:
pip install tfds-nightly[c4]
echo 'tfds-nightly[c4]' > /tmp/beam_requirements.txt
python -m tensorflow_datasets.scripts.download_and_prepare \
--datasets=c4/en \
--data_dir=gs://$MY_BUCKET/tensorflow_datasets \
--beam_pipeline_options="project=$MY_PROJECT,job_name=c4,staging_location=gs://$MY_BUCKET/binaries,temp_location=gs://$MY_BUCKET/temp,runner=DataflowRunner,requirements_file=/tmp/beam_requirements.txt,experiments=shuffle_mode=service,region=$MY_REGION"
Read more in the TFDS Beam instructions.
TextLineTask
A TextLineTask is useful when your data source is a text file (or files) with one example per line. You can then use a text preprocessor to convert each line into a dictionary of inputs and targets.
Make sure your files are accessible to the TPU (i.e., are in a GCS bucket), and you should be good to go!
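A hedged sketch of registering a TextLineTask over TSV-formatted text files (the task name and file patterns are placeholders; the exact constructor arguments may differ between library versions):
import functools
import t5

t5.data.TaskRegistry.add(
    "my_text_line_task",          # hypothetical task name
    t5.data.TextLineTask,
    split_to_filepattern={
        "train": "gs://my-bucket/train.tsv",            # placeholder paths
        "validation": "gs://my-bucket/validation.tsv",
    },
    text_preprocessor=[
        functools.partial(
            t5.data.preprocessors.parse_tsv,
            field_names=["inputs", "targets"]),
    ],
    metric_fns=[],
)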
Using a TSV File Directly
Instead of defining a new Task, you may use a TSV file (or files) directly as your dataset where each line is formatted as <input>\t<target>.
However, there are a couple of caveats:
- There is no way to define a text preprocessor, so the TSV will need to contain your data in a preprocessed format.
- There is also currently no way to set a token preprocessor, postprocess function, or metric function for evaluation when using a TSV file directly.
If you need any of these features, you must define a new Task, TfdsTask, or TextLineTask.
Similar to the above cases, your TSV file(s) must be accessible to the TPU (i.e., stored in a GCS bucket).
Installation
To install the T5 package, simply run:
pip install t5[gcp]
Setting up TPUs on GCP
You will first need to launch a Virtual Machine (VM) on Google Cloud. Details about launching the VM can be found at the Google Cloud Documentation.
In order to run training or eval on Cloud TPUs, you must set up the following variables based on your project, zone and GCS bucket appropriately. Please refer to the Cloud TPU Quickstart guide for more details.
export PROJECT=your_project_name
export ZONE=your_project_zone
export BUCKET=gs://yourbucket/
export TPU_NAME=t5-tpu
export TPU_SIZE=v3-8
export DATA_DIR="${BUCKET}/your_data_dir"
export MODEL_DIR="${BUCKET}/your_model_dir"
Please use the following command to create a TPU device in the Cloud VM.
ctpu up --name=$TPU_NAME --project=$PROJECT --zone=$ZONE --tpu-size=$TPU_SIZE \
--tpu-only --noconf
Training
In the command below, we train a model on the GLUE Benchmark MRPC task from scratch. You can change the MIXTURE_NAME gin parameter to use any of the tasks or mixtures provided in our package.
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${MODEL_DIR}" \
--t5_tfds_data_dir="${DATA_DIR}" \
--gin_file="dataset.gin" \
--gin_file="models/bi_v1.gin" \
--gin_param="utils.tpu_mesh_shape.model_parallelism = 1" \
--gin_param="utils.tpu_mesh_shape.tpu_topology = '${TPU_SIZE}'" \
--gin_param="MIXTURE_NAME = 'glue_mrpc_v002'"
The full list of tasks and mixtures can be obtained by running:
python -c "import t5; print(t5.data.MixtureRegistry.names())"
You may also define additional tasks and mixtures in a new file and import it using the --module_import flag.
Alternatively, you could train with a TSV file where each line is formatted as <input>\t<target> (see above).
Fine-tuning
In order to fine-tune one of our pre-trained models, you need to pass the operative config of the pre-trained model to the training script. The operative config should be passed in as a gin_file flag. It specifies the model architecture and other hyperparameters. In addition, you need to specify the mixture to fine-tune on. For example, to fine-tune the T5-small model on the glue_mrpc_v002 mixture, please run:
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${MODEL_DIR}" \
--t5_tfds_data_dir="${DATA_DIR}" \
--gin_file="dataset.gin" \
--gin_param="utils.tpu_mesh_shape.model_parallelism = 1" \
--gin_param="utils.tpu_mesh_shape.tpu_topology = '${TPU_SIZE}'" \
--gin_param="MIXTURE_NAME = 'glue_mrpc_v002'" \
--gin_file="gs://t5-data/pretrained_models/small/operative_config.gin"
The correct pre-trained checkpoint path is included in the operative config.
You may also define additional tasks and mixtures in a new file and import it using the --module_import flag.
Alternatively, you could fine-tune with a TSV file where each line is formatted as <input>\t<target> (see above). For example, you could try one of the paired translation datasets from the WMT '19 News Commentary 14 training set (e.g., English-French). When using a TSV file, you would replace the MIXTURE_NAME flag with:
--gin_param="utils.run.train_dataset_fn = @t5.models.mesh_transformer.tsv_dataset_fn"
--gin_param="tsv_dataset_fn.filename = 'gs:/path/to/tsv'"
To fine-tune with the same hyperparameters we used in the paper (using a constant learning rate of 0.001), you can pass in this gin file which is included in the T5 package:
--gin_file="learning_rate_schedules/constant_0_001.gin"
The operative config for the pre-trained models are set so that there is effectively no limit on the number of train steps. If you'd like to train for a specific number of steps, you'll need to pass that in. Since the pre-trained model has already been trained for 1,000,000 steps, you should specify the total number of steps after pre-training and fine-tuning. For example, if you want to fine-tune for an additional 10,000 steps, you should pass
--gin_param="run.train_steps = 1010000"
You can also use a different batch size for fine-tuning. We set the batch size according to the total number of tokens in a batch. By default, a batch uses a sequence length of 512, so, for example, 1,048,576 tokens per batch corresponds to 2,048 sequences of length 512. To set the number of tokens in a batch, you should set
--gin_param="tokens_per_batch=1048576"
Eval
In order to evaluate a model in the T5 framework, you need to use the eval.gin file, specify the model directory, decoding method, and which checkpoint step(s) to evaluate. So, to evaluate on the GLUE MRPC task using beam search on all checkpoints, use the following command:
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${MODEL_DIR}" \
--gin_file="${MODEL_DIR}/operative_config.gin" \
--t5_tfds_data_dir=${DATA_DIR} \
--gin_file="eval.gin" \
--gin_file="beam_search.gin" \
--gin_param="run.dataset_split = 'validation'" \
--gin_param="utils.tpu_mesh_shape.tpu_topology = '${TPU_SIZE}'" \
--gin_param="MIXTURE_NAME = 'glue_mrpc_v002'" \
--gin_param="eval_checkpoint_step = 'all'"
To evaluate a specific checkpoint, simply set the eval_checkpoint_step parameter to the appropriate checkpoint.
--gin_param="eval_checkpoint_step = 100000"
You can also use greedy_decode.gin or sample_decode.gin instead of beam_search.gin in the command above.
Decode
In order to produce predictions from a model in the T5 framework, you need to specify the model directory, decoding method, and which checkpoint step(s) to use for decoding. Assuming you have a text file of input sequences stored at /path/to/inputs.txt, an example command would be:
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${MODEL_DIR}" \
--gin_file="${MODEL_DIR}/operative_config.gin" \
--gin_file="infer.gin" \
--gin_file="sample_decode.gin" \
--gin_param="input_filename = '/path/to/inputs.txt'"\
--gin_param="output_filename = '/tmp/outputs.txt'"\
--gin_param="utils.tpu_mesh_shape.tpu_topology = '${TPU_SIZE}'"\
--gin_param="infer_checkpoint_step = 'all'"
To predict with a specific checkpoint, simply set the infer_checkpoint_step parameter to the appropriate checkpoint.
--gin_param="infer_checkpoint_step = 100000"
You can also use beam_search.gin or greedy_decode.gin instead of sample_decode.gin in the command above.
Export
You may also want to export a SavedModel, which is useful for serving your trained model (e.g., when deploying with ML Engine or in a Docker image).
t5_mesh_transformer \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${MODEL_DIR}" \
--use_model_api \
--mode="export_predict" \
--export_dir="/path/to/export/dir"
The command above exports the latest checkpoint in the model directory. To export a particular checkpoint, add the following flags:
--checkpoint_mode="specific" \
--checkpoint_steps=1000000
The t5-deploy notebook demonstrates exporting a SavedModel and packaging it in a Docker image for serving.
GPU Usage
If you would like to use GPU instead of TPUs, you can modify the above commands by removing the TPU-specific flags (--tpu, --tpu_zone, --gcp_project) and setting the gin params for mesh_shape and mesh_devices based on your desired setup.
For example, if your machine has access to 6 GPUs and you'd like to do 3-way model parallelism and 2-way data parallelism, the fine-tuning command above would become:
t5_mesh_transformer \
--model_dir="${MODEL_DIR}" \
--t5_tfds_data_dir="${DATA_DIR}" \
--gin_file="dataset.gin" \
--gin_param="utils.run.mesh_shape = 'model:3,batch:2'" \
--gin_param="utils.run.mesh_devices = ['gpu:0','gpu:1','gpu:2','gpu:3','gpu:4','gpu:5']" \
--gin_param="MIXTURE_NAME = 'glue_mrpc_v002'" \
--gin_file="gs://t5-data/pretrained_models/small/operative_config.gin"
With a single GPU, the command is:
t5_mesh_transformer \
--model_dir="${MODEL_DIR}" \
--t5_tfds_data_dir="${DATA_DIR}" \
--gin_file="dataset.gin" \
--gin_param="utils.run.mesh_shape = 'model:1,batch:1'" \
--gin_param="utils.run.mesh_devices = ['gpu:0']" \
--gin_param="MIXTURE_NAME = 'glue_mrpc_v002'" \
--gin_file="gs://t5-data/pretrained_models/small/operative_config.gin"
Reproducing our experiments
We provide operative configs for all of the experiments in the paper in gs://t5-data/experiments.
The experiments folder has different subdirectories corresponding to the different sections in our paper.
For example, gs://t5-data/experiments/objectives contains the experiments from Section 3.3 ("Unsupervised objectives").
Each subdirectory of the objectives folder contains operative configs for some particular experiment (where, loosely speaking, an "experiment" is one of the rows in one of the tables in our paper).
Let's say you want to reproduce the results for the "Prefix language modeling" objective (the first row in Table 4). The operative configs for that experiment live in gs://t5-data/experiments/objectives/obj-prefix_lm. In the base directory, there is an operative config for pre-training the model (gs://t5-data/experiments/objectives/obj-prefix_lm/operative_config.gin). Then, there are subdirectories for each of the downstream fine-tuning mixtures we consider, each of which has its own operative config (for example, gs://t5-data/experiments/objectives/obj-prefix_lm/cnn_dailymail_v002/operative_config.gin). To run this experiment, first pre-train a model with the pre-training operative config:
export PRETRAIN_MODEL_DIR="${BUCKET}/obj-prefix_lm"
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${PRETRAIN_MODEL_DIR}" \
--gin_file="gs://t5-data/experiments/objectives/obj-prefix_lm/operative_config.gin" \
--gin_param="utils.tpu_mesh_shape.model_parallelism = 1" \
--gin_param="utils.tpu_mesh_shape.tpu_topology = '${TPU_SIZE}'"
Then, you can fine-tune the pre-trained model on CNN/Daily Mail like so:
export FINETUNE_MODEL_DIR="${BUCKET}/obj-prefix_lm/cnn_dailymail_v002"
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${FINETUNE_MODEL_DIR}" \
--gin_file="gs://t5-data/experiments/objectives/obj-prefix_lm/cnn_dailymail_v002/operative_config.gin" \
--gin_param="init_checkpoint = '${PRETRAIN_MODEL_DIR}/model.ckpt-524288'" \
--gin_param="utils.tpu_mesh_shape.model_parallelism = 1" \
--gin_param="utils.tpu_mesh_shape.tpu_topology = '${TPU_SIZE}'"
Useful Options
Some training variants need multiple flags to be set at the same time. For each of the variants below, add the group of flags to ./third_party/py/t5/google/scripts/run_finetune.sh.
Deterministic training
--train_gin_param="mesh_train_dataset_fn.seed=${SEED}" \
--train_gin_param="utils.run.skip_seen_data = True" \
Language model
--objective="lm" \
--train_gin_param="utils.run.model_type = \"lm\"" \
Released Model Checkpoints
We have released the following checkpoints for pre-trained models described in our paper:
- T5-Small (60 million parameters): gs://t5-data/pretrained_models/small
- T5-Base (220 million parameters): gs://t5-data/pretrained_models/base
- T5-Large (770 million parameters): gs://t5-data/pretrained_models/large
- T5-3B (3 billion parameters): gs://t5-data/pretrained_models/3B
- T5-11B (11 billion parameters): gs://t5-data/pretrained_models/11B
See here for a list of additional experimental pre-trained model checkpoints.
How to Cite
If you extend or use this work, please cite the paper where it was introduced:
@article{2020t5,
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {140},
  pages   = {1-67},
  url     = {http://jmlr.org/papers/v21/20-074.html}
}