
microsoft/DialoGPT

Large-scale pretraining for dialogue


Top Related Projects

  • 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
  • ParlAI: A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
  • fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
  • text-to-text-transfer-transformer: Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
  • CTRL: Conditional Transformer Language Model for Controllable Generation
  • GPT-Neo: An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

Quick Overview

DialoGPT is a large-scale pre-trained dialogue response generation model developed by Microsoft. It is designed to engage in open-domain conversations and can be fine-tuned for various dialogue-related tasks, such as chatbots, virtual assistants, and interactive storytelling.

Pros

  • Versatile: DialoGPT can be fine-tuned for a wide range of dialogue-related tasks, making it a versatile tool for developers.
  • Pre-trained at scale: The model is pre-trained on a large corpus of conversational data, allowing it to generate coherent and contextually relevant responses.
  • Open-source: The project is open-source, allowing developers to access the model, fine-tune it, and contribute to its development.
  • Multiple sizes: Pretrained checkpoints are released in small (117M), medium (345M), and large (762M) parameter sizes. DialoGPT itself is trained on English Reddit data, though community counterparts exist for other languages (e.g., the Chinese GPT2-chitchat).

Cons

  • Potential Biases: Like any large language model, DialoGPT may inherit biases present in its training data, which could lead to undesirable or unethical outputs.
  • Computational Complexity: Fine-tuning and deploying DialoGPT may require significant computational resources, which could be a barrier for some developers.
  • Limited Contextual Understanding: While DialoGPT is designed for open-domain conversations, its understanding of complex contextual information may be limited compared to human-level dialogue.
  • Lack of Emotional Intelligence: DialoGPT may struggle to understand and respond to the emotional nuances of human conversation, which could limit its effectiveness in certain applications.

Code Examples

Since DialoGPT is a pre-trained model, the primary use case is to fine-tune and integrate it into your own applications. Here are a few examples of how you can use DialoGPT:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the pre-trained DialoGPT model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-medium')
model = AutoModelForCausalLM.from_pretrained('microsoft/DialoGPT-medium')

# Encode the input text, ending the turn with the end-of-sequence token
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors='pt')

# Generate a response by sampling
output = model.generate(input_ids, max_length=50, do_sample=True, top_k=50, top_p=0.95,
                        pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)

This example demonstrates how to load the pre-trained DialoGPT model and tokenizer, encode an input text, and generate a response using the model.
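DialoGPT handles multi-turn conversation by concatenating every turn, separated by the end-of-sequence token, and feeding the whole history back to the model. A minimal sketch of that loop (the `append_turn` helper and the `RUN_DEMO` flag are illustrative; the checkpoint name is the published Hugging Face model):

```python
def append_turn(history_ids, turn_ids, eos_id):
    """Join dialogue turns into one token sequence, separating each
    turn with the end-of-sequence token, as DialoGPT expects."""
    return history_ids + turn_ids + [eos_id]

RUN_DEMO = False  # set to True to download the checkpoint and chat

if RUN_DEMO:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

    history = []
    for user_text in ["Hello, how are you?", "What are you doing today?"]:
        history = append_turn(history, tokenizer.encode(user_text),
                              tokenizer.eos_token_id)
        input_ids = torch.tensor([history])
        output = model.generate(input_ids, max_length=200,
                                pad_token_id=tokenizer.eos_token_id)
        # Keep only the newly generated reply (it already ends with EOS)
        reply_ids = output[0][input_ids.shape[-1]:].tolist()
        history += reply_ids
        print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```

Because the generated reply ends with the end-of-sequence token, appending it to the history keeps the turn separators consistent for the next round.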

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('path/to/fine-tuned-model')
model = AutoModelForSequenceClassification.from_pretrained('path/to/fine-tuned-model')

# Encode the input text
input_ids = tokenizer.encode("What is the weather like today?", return_tensors='pt')

# Classify the input text
logits = model(input_ids).logits
predicted_class = logits.argmax().item()
print(f"Predicted class: {predicted_class}")

This example demonstrates how to load a fine-tuned DialoGPT model for a sequence classification task, encode an input text, and classify the input using the model.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned DialoGPT model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('path/to/fine-tuned-model')
model = AutoModelForCausalLM.from_pretrained('path/to/fine-tuned-model')

# Encode the input text
input_ids = tokenizer.encode("Tell me a story about a brave knight.", return_tensors='pt')

# Generate a continuation of the prompt
output = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.95,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))

This example demonstrates how to use a fine-tuned DialoGPT model to generate open-ended text from a prompt.

Competitor Comparisons

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Broader scope: Supports a wide range of NLP tasks and models beyond dialogue generation
  • More active development: Frequent updates and contributions from the community
  • Extensive documentation and examples for various use cases

Cons of transformers

  • Steeper learning curve due to its comprehensive nature
  • May require more setup and configuration for specific dialogue tasks
  • Potentially higher resource requirements for running multiple models

Code Comparison

DialoGPT:

from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

transformers:

from transformers import pipeline
generator = pipeline("text-generation", model="gpt2")
response = generator("Hello, how are you?", max_length=50)

DialoGPT focuses specifically on dialogue generation, while transformers provides a more versatile toolkit for various NLP tasks. DialoGPT may be easier to use for dialogue-specific applications, but transformers offers greater flexibility and a wider range of pre-trained models. The code examples demonstrate the simplicity of DialoGPT for dialogue tasks, while transformers showcases its versatility with the pipeline API for different NLP tasks.


A framework for training and evaluating AI models on a variety of openly available dialogue datasets.

Pros of ParlAI

  • Broader scope: Supports a wide range of dialogue tasks and datasets
  • More flexible: Allows for easy integration of custom models and tasks
  • Active community: Regular updates and contributions from researchers

Cons of ParlAI

  • Steeper learning curve: More complex architecture due to its versatility
  • Potentially slower: May have higher computational overhead for simple tasks

Code Comparison

ParlAI example:

from parlai.core.agents import Agent
from parlai.core.worlds import DialogPartnerWorld

class MyAgent(Agent):
    def act(self):
        return {'text': 'Hello, how are you?'}

opt = {}  # options dict; normally built with ParlaiParser
world = DialogPartnerWorld(opt, [MyAgent(opt), MyAgent(opt)])
world.parley()

DialoGPT example:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
response = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More versatile, supporting a wide range of NLP tasks beyond dialogue generation
  • Actively maintained with frequent updates and contributions
  • Extensive documentation and examples for various use cases

Cons of fairseq

  • Steeper learning curve due to its broader scope and complexity
  • May require more computational resources for training and inference
  • Less specialized for dialogue-specific tasks compared to DialoGPT

Code Comparison

DialoGPT:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

fairseq:

from fairseq.models.transformer_lm import TransformerLanguageModel

model = TransformerLanguageModel.from_pretrained('/path/to/model', 'checkpoint_best.pt')
model.eval()

Both repositories provide powerful tools for natural language processing tasks. DialoGPT focuses specifically on dialogue generation, offering a more streamlined approach for conversational AI. fairseq, on the other hand, offers a broader range of NLP capabilities, making it suitable for various tasks beyond dialogue generation. While fairseq provides more flexibility and options, it may require more setup and configuration compared to the more specialized DialoGPT.

Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

Pros of text-to-text-transfer-transformer

  • More versatile, capable of handling various NLP tasks beyond dialogue generation
  • Utilizes a unified text-to-text framework, simplifying the approach to multiple tasks
  • Offers pre-trained models on a diverse range of datasets, enhancing transfer learning capabilities

Cons of text-to-text-transfer-transformer

  • May require more computational resources due to its larger size and broader scope
  • Less specialized for dialogue tasks compared to DialoGPT
  • Potentially more complex to fine-tune for specific dialogue applications

Code Comparison

text-to-text-transfer-transformer:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
input_ids = tokenizer.encode("translate English to German: Hello, how are you?", return_tensors="pt")
outputs = model.generate(input_ids)

DialoGPT:

from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
outputs = model.generate(input_ids)

Conditional Transformer Language Model for Controllable Generation

Pros of CTRL

  • More versatile with control codes for different tasks and styles
  • Larger model size (1.63B parameters) potentially enabling better performance
  • Supports a wider range of applications beyond dialogue

Cons of CTRL

  • Less specialized for dialogue tasks compared to DialoGPT
  • May require more fine-tuning for specific use cases
  • Potentially higher computational requirements due to larger model size

Code Comparison

DialoGPT:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
output = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

CTRL:

from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

input_text = "Links\n\nHere are some useful links:"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=100, do_sample=True, temperature=0.7)

Both repositories offer pre-trained language models, but they differ in their focus and capabilities. DialoGPT is specifically designed for dialogue tasks, while CTRL provides more flexibility with its control codes for various applications. The code examples demonstrate the different approaches to generating text using these models.


An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

Pros of GPT-Neo

  • Larger model sizes available, potentially offering better performance
  • Open-source and more flexible for customization and fine-tuning
  • Supports a wider range of natural language processing tasks

Cons of GPT-Neo

  • Requires more computational resources due to larger model sizes
  • Less optimized for dialogue-specific tasks compared to DialoGPT
  • May require more extensive training data for optimal performance

Code Comparison

DialoGPT:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
output = model.generate(input_ids, max_length=50, num_return_sequences=1)

GPT-Neo:

from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

input_ids = tokenizer.encode("The quick brown fox", return_tensors="pt")
output = model.generate(input_ids, max_length=50, num_return_sequences=1)

Both repositories use the Hugging Face Transformers library, but GPT-Neo offers more flexibility in model size and task adaptation, while DialoGPT is more focused on dialogue-specific applications.


README

A State-of-the-Art Large-scale Pretrained Response Generation Model (DialoGPT)

This project page is no longer maintained as DialoGPT is superseded by GODEL, which outperforms DialoGPT according to the results of this paper. Unless you use DialoGPT for reproducibility reasons, we highly recommend you switch to GODEL.

This repository contains the source code and trained model for a large-scale pretrained dialogue response generation model. Human evaluation results indicate that responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversational Turing test.

The repository is based on huggingface pytorch-transformer and OpenAI GPT-2, and contains a data extraction script, model training code, and pretrained small (117M), medium (345M), and large (762M) model checkpoints.

The model is trained on 147M multi-turn dialogues from Reddit discussion threads. The largest model can be trained in several hours on an 8x V100 machine (though this is not required), with distributed training and the FP16 option.

The included scripts can be used to reproduce the results of the DSTC-7 grounded dialogue generation challenge and on a 6k multi-reference dataset created from Reddit data.

Project webpage: https://www.microsoft.com/en-us/research/project/large-scale-pretraining-for-response-generation/

ArXiv paper: https://arxiv.org/abs/1911.00536

News

(Update 07/09/2022) Changes on the files.pushshift.io/reddit server caused our data generation pipeline to break. These problems have now been fixed, and the steps explained in the Data Preparation subsection below should work again. Data is generated in about 10 hours with 8 processes (-j 8), and 800GB of temporary disk space is needed.

(Update 06/23/2021) We have released a retrieval-augmented/grounded version of DialoGPT (RetGen), please check out the RetGen repo and RetGen paper

(Update 05/20/2021) An awesome video walkthrough on YouTube for DialoGPT by Prakhar Mishra

(Update 03/31/2021) A 3rd-party Gradio web demo by AK391: try it out

(Update 09/15/2020) A set of large-scale dialog ranking models has been released!

DialoGPT generation is improved by integrating with our latest dialog ranking models, DialogRPT

(Update 07/08/2020) The 6K multi-ref test set has been released!

To generate the data, please run demo.py and set the data option to 'full'; the generated 6k multi-ref test set will be located at

./data/test.refs.txt

(Update 03/10/2020) Model cards available in Huggingface Transformers!

Please check out our model cards in the huggingface Transformers repository. With a few lines of code, it should be pretty straightforward to play with DialoGPT interactively.

small model: https://huggingface.co/microsoft/DialoGPT-small

medium model: https://huggingface.co/microsoft/DialoGPT-medium

large model: https://huggingface.co/microsoft/DialoGPT-large

(New) Ranking model: https://huggingface.co/microsoft/DialogRPT-updown

(Update 01/06/2020) Some third-party decoding script implementations:

Recommended Configuration

  • Linux Ubuntu 16.04
  • GPU with at least 12G memory

DialoGPT was developed entirely on Ubuntu 16.04, and -- depending on our availability -- we try to provide support if you experience difficulties running the code on the same configuration. However, we are unable to provide support for other distributions or operating systems. Portions of the code may run on other UNIX flavors (macOS, Windows Subsystem for Linux, Cygwin, etc.), but we recommend using Ubuntu for the main training code.

The training code can be run on CPU, but this is slow; we recommend using GPUs to train and fine-tune all models. There is no minimum number of GPUs, but note that with distributed training across multiple GPUs, the speed-up is roughly sub-linear in the number of GPUs. To simulate the same effective batch size with fewer GPUs, use a larger gradient_accumulation_steps during training.
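Concretely, the effective batch size per optimizer update is the per-GPU batch size times the number of GPUs times gradient_accumulation_steps, so a smaller GPU count can be offset by proportionally more accumulation steps. A quick sketch (the helper name is illustrative, though the argument names match the training script):

```python
def effective_batch_size(train_batch_size, n_gpu, gradient_accumulation_steps):
    """Number of training instances contributing to each optimizer update."""
    return train_batch_size * n_gpu * gradient_accumulation_steps

# 8 GPUs with batch size 4 and 2 accumulation steps: 64 instances per update.
# With only 2 GPUs, quadrupling the accumulation steps keeps the same 64.
assert effective_batch_size(4, 8, 2) == effective_batch_size(4, 2, 8) == 64
```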

The 117M and 345M models can be loaded on a single GPU with 12GB of memory. The 762M model requires a single GPU with more than 16GB of memory for efficient training. Training speed on a benchmark dataset with 50M training instances and V100 GPUs:

| n_gpu | epoch time (h) | tokens/sec |
|-------|----------------|------------|
| 1     | 118            | 10,847     |
| 2     | 62             | 20,645     |
| 4     | 34             | 37,647     |
| 8     | 18             | 71,356     |
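The sub-linear scaling can be read directly off the benchmark throughput numbers above:

```python
# tokens/sec from the V100 benchmark, keyed by GPU count
throughput = {1: 10847, 2: 20645, 4: 37647, 8: 71356}

for n_gpu, toks in throughput.items():
    speedup = toks / throughput[1]
    print(f"{n_gpu} GPU(s): {speedup:.2f}x speed-up, "
          f"{speedup / n_gpu:.0%} of linear scaling")
# 8 GPUs give roughly a 6.6x speed-up, i.e. about 82% of linear scaling
```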

Fine-tuning from our pretrained model on a new dataset typically requires 1-2 epochs.

Setup & Installation (TL;DR)

We created a demo script, demo.py, to simplify deployment of this system. demo.py runs a pipeline of model downloading, data extraction, data preprocessing, and model training over a dummy dataset with a single command line.

Train model with Conda Environment

Please use the commands below to clone the repository, install the requirements, and load the Conda environment (note that the Nvidia CUDA 10.0 developer toolkit is required):

sudo apt-get install -y make wget gzip bzip2 xz-utils zstd sed
git clone https://github.com/microsoft/DialoGPT.git
cd DialoGPT
conda env create -f LSP-linux.yml -n LSP
conda activate LSP

If you run this on an architecture other than Linux, please use LSP-generic.yml instead of LSP-linux.yml, but note that the generic environment is not tested on all platforms, so stability cannot be guaranteed. To use FP16 training, please install apex using the commands below:

conda activate LSP
git clone https://github.com/NVIDIA/apex
cd apex
git reset --hard 3d01e4a0a188cc8df54bc6e44cf5eb40ff6b4cc5
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .
python3.6 demo.py

Train model with Docker environment

To start, first install Docker and nvidia-docker from their official repos. The image environment for running the code can be loaded as below:

Nvidia-docker v2.*

$ docker run --gpus all --ipc=host --rm -it -v $PWD:/workspace --network=host icaruszyz/large-scale-training:dialogpt bash

Nvidia-docker v1.*

$ nvidia-docker --rm -it -v $PWD:/workspace --network=host icaruszyz/large-scale-training:dialogpt bash

Inside the docker container, run

python demo.py

Pipeline details

This section explains all components in demo.py.

Data loading

Before running demo.py, you can set DATA_FOLDER (default value ./models) in demo.py to the location where you want to download all the data and pretrained/fine-tuned models. Then simply run

python demo.py

to

  • automatically download models and data,
  • prepare raw data into db that is ready to use for the program,
  • generate a training script.

Note that by default demo.py uses dummy data; please specify the Reddit training data with the --data option. Three options are available: dummy, small, and full.

python demo.py --data small
python demo.py --data full

The small Reddit data is around 140MB and the full Reddit data is more than 27GB. You may want to prepare a cup of coffee when processing the full Reddit data, because it takes a long time!

To generate the 6k multi-ref test set data, please run demo.py and set the data option to 'full'; the generated test set will be located at

./data/test.refs.txt

Pretrained model

The pretrained and fine-tuned models are available on Azure blob storage. Please run/see demo.py for more details about how to download and use those models, or download them directly using the links in demo_utils.py.

Preparing data

First, use prepare4db.sh to convert a tsv data file into the format that the following script can recognize. The training data then needs to be processed into a database file with the command below:

python prepro.py --corpus $DATA_PATH

Using the training script

The training script can be used in single GPU or multiple GPU settings (distributed training across multiple GPUs within a single node):

python ./LSP_train.py  # Single GPU training
python -m torch.distributed.launch --nproc_per_node=8 ./LSP_train.py  # Training on 8 GPUs

The training script accepts several arguments to tweak the training:

| Argument | Type | Default value | Description |
|----------|------|---------------|-------------|
| max_seq_length | int | 128 | Maximum number of tokens for each training instance. |
| train_input_file | str | "" | Path of the training dataset in .db format |
| eval_input_file | str | "" | Path of the validation set in tsv format |
| continue_from | int | 0 | Resuming the training after a specified number of steps |
| fp16 | boolean | True | Whether to use 16-bit floating point for model training. |
| train_batch_size | int | 4 | Batch size for training |
| valid_batch_size | int | 4 | Batch size for validation |
| gradient_accumulation_steps | int | 2 | Accumulate gradients over several steps |
| learning_rate | float | 1e-5 | Learning rate |
| lr_schedule | str | noam | Learning rate schedule, chosen from [noam, noamwd, BERT, None] |
| num_optim_steps | int | 1000000 | Number of training optimization steps |
| no_token_id | boolean | True | If set True, use an all-zeros token-type embedding. |
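The default noam schedule is the inverse-square-root learning-rate schedule with linear warmup from the original Transformer paper ("Attention Is All You Need"). A sketch of its shape, where the `d_model` and `warmup` values are illustrative rather than the script's internals:

```python
def noam_lr(step, d_model=1024, warmup=4000):
    """Inverse-square-root LR with linear warmup (Vaswani et al., 2017)."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# LR rises linearly during warmup, peaks at the warmup step,
# then decays proportionally to 1/sqrt(step).
assert noam_lr(2000) < noam_lr(4000)   # still warming up
assert noam_lr(16000) < noam_lr(4000)  # decaying afterwards
```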

During training, two log files are updated: train_log.txt and eval_log.txt contain the model loss, perplexity, and training speed (tokens/sec) statistics for the training and dev sets.

The log files and saved model checkpoints can be found in ./models/output_model.

Model decoding

We note that even with a properly filtered Reddit dataset, our model can sometimes still generate moderately toxic or inappropriate responses. For this reason, we are unable to provide the decoding script at this time (access to the live demo and decoding script is currently by invitation only). We are still working on a controlled decoding method to prevent the system from toxic generation. Please stay tuned.

See issue #3 and Reddit discussions for discussion of third-party decoding methods.

See below for some third-party decoding methods:

Models

We release 6 fine-tuned models which can be further fine-tuned on low-resource, user-customized datasets. The total number of parameters in these models ranges from 117M to 762M, in accord with OpenAI GPT-2 model sizes.

| Model | Fine-tuned from GPT-2 | Trained from scratch |
|-------|-----------------------|----------------------|
| DialoGPT 762M model | [link] [huggingface model card] | [link] |
| DialoGPT 345M model | [link] [huggingface model card] | [link] |
| DialoGPT 117M model | [link] [huggingface model card] | [link] |
| DialoGPT 345M model (reverse, for MMI) | link | - |
| DialogRPT (new ranking models) | link | - |

The model files can be loaded exactly as the GPT-2 model checkpoints from Huggingface's Transformers. You can find the corresponding configuration files (merges.txt, config.json, vocab.json) in DialoGPT's repo in ./configs/*.

The reverse model predicts the source from the target. This model is used for MMI reranking.

The DialogRPT models are our recently proposed ranking models, used to predict the human feedback (upvotes, replies) on responses. These models can be used to improve DialoGPT generation quality (see our EMNLP paper for details).
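In practice, MMI reranking samples several candidate responses and keeps the one to which the reverse model assigns the highest probability of reconstructing the source. A minimal sketch of the selection step (the `reverse_logprob` callable is an assumption standing in for a scoring pass through the reverse checkpoint):

```python
def mmi_rerank(candidates, reverse_logprob):
    """Keep the candidate whose reverse model best reconstructs the source.

    candidates: generated response strings
    reverse_logprob: maps a candidate to log P(source | candidate); in the
    real pipeline this is computed with the reverse (MMI) checkpoint.
    """
    return max(candidates, key=reverse_logprob)

# Toy scores standing in for reverse-model log-probabilities: bland,
# generic responses reconstruct the source poorly and are ranked down.
scores = {"I don't know.": -12.0, "It was signed in 1776.": -3.5}
print(mmi_rerank(list(scores), scores.get))  # -> It was signed in 1776.
```

This is the standard maximum mutual information trick: penalizing responses that could follow almost any source promotes more specific, contentful replies.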

Retraining full models

Data Preparation

The first step to retrain the full models is to generate the aforementioned 27GB Reddit dataset. This involves downloading full Reddit submission and comment dumps from https://files.pushshift.io/reddit and creating intermediate files, which overall requires 700GB of local disk space. Downloading and processing the full data takes about 1-2 days, depending on your (CPU) compute capabilities (e.g., ~24 hours with 8 cores on a recent computer). Assuming you ran the above setup and installation steps (conda activate LSP, etc.), you can create the full dataset by running either:

python demo.py --data full

or

cd reddit_extractor; SIZE=full make -j 8; cd ..

The former command calls the latter, so the two methods are equivalent. We recommend the former, as the latter is mostly useful if you run into any problem or want to customize any arguments (e.g., the make command lets you build only a subset of the data). Note that the downloading phase can be error prone, for example based on your geolocation (firewall, etc.). If the above commands fail to generate data/train.tsv, or if that file is not anywhere close to 27GB, it means something went wrong. In that case, you may want to inspect reddit_extractor/wget-log and reddit_extractor/logs/*.log for any obvious error (e.g., wget unable to download from pushshift.io). If error messages don't make sense to you, feel free to contact us. If so, please be sure to include any error messages gathered from these log files.

Training data statistics: the generated training tsv file should be roughly 26.8 GB uncompressed, with 146.8M training instances, 3.87B source tokens, and 2.14B target tokens (including utterance-level 0/1 weights). The resulting train.tsv file should contain 146,846,215 lines.
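As a quick sanity check, the per-instance averages implied by these totals are:

```python
instances = 146_846_215
source_tokens = 3.87e9
target_tokens = 2.14e9

print(f"avg source tokens/instance: {source_tokens / instances:.1f}")  # ~26.4
print(f"avg target tokens/instance: {target_tokens / instances:.1f}")  # ~14.6
```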

Training

We recommend generating the above data using demo.py --data full, as it (1) generates the data, (2) converts it into DB format, and (3) trains a model using python LSP_train.py. Please edit demo.py directly if you want to customize any of the hyperparameters.

Evaluations

DSTC-7 challenge

Our model achieved the state-of-the-art results in DSTC-7 Challenge response generation task.

| Experiment | NIST2 | NIST4 | BLEU2 | BLEU4 | METEOR | ENT-4 | DIST-1 | DIST-2 | Avg. Len |
|------------|-------|-------|-------|-------|--------|-------|--------|--------|----------|
| Human response | 2.62 | 2.65 | 12.35% | 3.13% | 8.31% | 10.45 | 16.66% | 67.01% | 18.8 |
| DSTC-7 Winner | 2.51 | 2.52 | 14.35% | 1.83% | 8.07% | 9.03 | 10.89% | 32.49% | 15.1 |
| DialoGPT 345M | 2.80 | 2.82 | 14.16% | 2.31% | 8.51% | 10.08 | 9.13% | 39.73% | 16.9 |
| DialoGPT 345M (BS) | 2.92 | 2.97 | 19.18% | 6.05% | 9.29% | 9.57 | 15.73% | 51.03% | 14.2 |

where ENT represents the Entropy score and DIST represents the Distinct score. For all metrics except the average length, larger is better.

Note that superior automatic evaluation compared to human responses does not necessarily imply that our model achieves human parity. Please check out our paper for more detailed analysis.

To fine-tune the 345M DialoGPT model on the DSTC-7 challenge data on a server with 8 V100 GPUs, please run the following command line (the DSTC data can be found in the DSTC-7 repo):

python3 -m torch.distributed.launch --nproc_per_node=8 train_LSP.py --init_checkpoint ./models/medium/medium_ft.pkl --train_input_file ./data/DSTC_train.db --eval_input_file ./data/DSTC_valid.tsv --model_name_or_path ./model/medium/ --learning_rate 1e-4  --train_batch_size 64 --eval_batch_size 64 --no_token_id

The trained model can be found at DSTC medium model

Evaluation

  1. Please download the following 3rd-party packages and save them into the empty folder 3rdparty:

  2. Please follow the DSTC-7 official repo to extract the data, and put data-official-test/test.refs.txt into ./dstc/data/ folder.

  3. Run the extraction script below to produce the human response hypothesis file human.resp.txt:

    python extract_human.py
    
  4. Finally, to reproduce the results of the human hypothesis on the DSTC dataset, please run the following commands under the repo folder:

    python batch_eval.py
    

The evaluation results will be generated in the folder ./dstc/eval/

6K multi-ref dataset result

Automatic evaluation

We test on the 6K multi-ref dataset from Reddit. The results are summarized below.

| Experiment | NIST2 | NIST4 | BLEU2 | BLEU4 | METEOR | ENT-4 | DIST-1 | DIST-2 | Avg. Len |
|------------|-------|-------|-------|-------|--------|-------|--------|--------|----------|
| Human response | 3.41 | 4.25 | 17.90% | 7.48% | 10.64% | 11 | 14.50% | 63.00% | 13.1 |
| DialoGPT 117M | 2.39 | 2.41 | 10.54% | 1.55% | 7.53% | 10.78 | 8.60% | 39.90% | 12.8 |
| DialoGPT 345M | 3 | 3.06 | 16.96% | 4.56% | 9.81% | 9.13 | 6.80% | 26.30% | 12.2 |
| DialoGPT 762M | 2.84 | 2.90 | 18.66% | 5.25% | 9.66% | 9.72 | 7.76% | 29.93% | 11.2 |
| DialoGPT 345M (BS) | 3.4 | 3.5 | 21.76% | 7.92% | 10.74% | 10.48 | 12.38% | 48.74% | 11.3 |
| DialoGPT 345M (w/MMI) | 3.28 | 3.33 | 15.68% | 3.94% | 11.23% | 11.25 | 9.39% | 45.55% | 17.2 |

Human evaluation

We further conducted human evaluations (6K examples for each method, with each example evaluated by 3 human judges). The results provide strong evidence that our generation quality approaches the quality of real human responses under this non-interactive Turing test:

Relevance: A and B, which one is more relevant to the source prompt.

| System A | A Wins (%) | Ties (%) | B Wins (%) | System B |
|----------|------------|----------|------------|----------|
| DialoGPT 345M | 2671 (45%) | 513 (9%) | 2816 (47%) | Human responses |
| DialoGPT 345M | 3281 (72%) | 394 (9%) | 882 (19%) | PersonalityChat |
| DialoGPT 345M w/ MMI | 2871 (48%) | 522 (9%) | 2607 (43%) | Human responses |

Informativeness: A and B, which one is more contentful and informative.

| System A | A Wins (%) | Ties (%) | B Wins (%) | System B |
|----------|------------|----------|------------|----------|
| DialoGPT 345M | 2722 (45%) | 234 (4%) | 3044 (51%) | Human responses |
| DialoGPT 345M | 3490 (77%) | 206 (5%) | 861 (19%) | PersonalityChat |
| DialoGPT 345M w/ MMI | 3011 (50%) | 234 (4%) | 2755 (46%) | Human responses |

Human-Like: A and B, which one do you think is more likely to be generated by a human.

| System A | A Wins (%) | Ties (%) | B Wins (%) | System B |
|----------|------------|----------|------------|----------|
| DialoGPT 345M | 2716 (45%) | 263 (4%) | 3021 (50%) | Human responses |
| DialoGPT 345M | 3462 (76%) | 196 (4%) | 899 (20%) | PersonalityChat |
| DialoGPT 345M w/ MMI | 2978 (50%) | 241 (4%) | 2781 (46%) | Human responses |

Please see full details in our arxiv paper.

Generated Examples

  • Multi-turn generation examples from an interactive environment:

| Role | Response |
|------|----------|
| User | Does money buy happiness? |
| Bot | Depends how much money you spend on it . |
| User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
| User | This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |

  • Multi-turn self-playing dialogue with user prompt:

| Role | Response |
|------|----------|
| User | what is the meaning of life ? |
| Bot | The meaning is to be with your family and friends . |
| Bot | I’m going to guess : It means that your parents and friends have loved you . That’s the meaning , I believe . |
| Bot | I think the meaning is to make sure you’re alive when you die . So you’ll be with your family , friends , and yourself . |
| Bot | It’s not , but thanks for trying to help . |

  • Commonsense question answering examples:

| User | Bot |
|------|-----|
| who is the first president of the United States | George Washington |
| what is the boiling point of water? | I think it's about 212 F. |
| which one is bigger, sun or moon? | The sun . |
| which animal has black and white stripes? | A black and white striped cat . |

  • Handling context that does not appear in training examples (note that training examples are sampled from 2005 to 2017):

| User | Bot |
|------|-----|
| The trading war between China and US is still happening . | I'm sure the Chinese government and the U . S . will be fine with that as long as they keep their arms on the table and the money flowing . |
| who won the world cup in 2018 ? | Germany , I believe . |
| Nvidia's Titan RTX is really good . | I have this card and i love it . No complaints . |
| Can Porsche beat Tesla with its new Taycan EV ? | I don't think so . The Tesla is more powerful than the porsche . |

  • Our bot's response to philosophy questions:

| User | Bot |
|------|-----|
| What is the meaning of a good life ? | I think it's that we're all connected to our past lives and the meaning of life is to live the way you want to and how you want to . |
| Which is more important -- being successful or being happy ? | Being successful is just endless . Being happy is the only thing that matters . |
| Is peace the only way to stop war ? | Not if the world is not willing to work with you . Peace is the ultimate goal , but peace requires more than that . |
| How much freedom should people have ? | The right amount of freedom is the freedom to do as you please , as long as you don't hurt people or property . |

  • Finally:

| User | Bot |
|------|-----|
| What do you think of the DialoGPT repo ? | It's pretty good , but it's very easy to find a bug . |

Please start an issue if you spot any :)

Try our system

The live demo and decoding script are currently accessible by invitation only. Please stay tuned for the full release.

Related Project

  • RetGen: https://github.com/dreasysnail/RetGen. Retrieval-augmented/grounded DialoGPT and beyond. RetGen is a joint training framework that simultaneously optimizes a dense passage retriever and a knowledge-grounded text generator in an end-to-end fashion.

  • Microsoft ICECAPS: https://github.com/microsoft/icecaps.

    As an orthogonal repository of this project, Microsoft Icecaps is an open-source toolkit (in tensorflow) for building neural conversational systems. Icecaps provides an array of tools from recent conversation modeling and general NLP literature within a flexible paradigm that enables complex multi-task learning setups.

  • Pretrained UniLM: https://github.com/microsoft/unilm

  • MT-DNN: https://github.com/namisan/mt-dnn

  • A Chinese counterpart of DialoGPT by yangjianxin1: https://github.com/yangjianxin1/GPT2-chitchat. We are glad to see that the MMI strategy we used in DialoGPT has also improved the performance of this project!

Contact

Please contact DialoGPT@microsoft.com if you have any questions or suggestions. However, responses may be sporadic; please expect delays.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Disclaimer

This repository aims to facilitate research in large-scale pretraining for conversational data. This toolkit contains only part of the modeling machinery needed to actually produce a model weight file in a running dialog. On its own, this model provides only information about the weights of various text spans; in order for a researcher to actually use it, they will need to bring conversational data of their own and decode the response generation from the pretrained system. Microsoft is not responsible for any generation from the 3rd party utilization of the pretrained system.

Citation

If you use this code in your research, you can cite our arxiv paper:

@inproceedings{zhang2019dialogpt,
    title={DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation},
    author={Yizhe Zhang and Siqi Sun and Michel Galley and Yen-Chun Chen and Chris Brockett and Xiang Gao and Jianfeng Gao and Jingjing Liu and Bill Dolan},
    year={2020},
    booktitle={ACL, system demonstration}
}