Top Related Projects
- gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
- DeepSpeed: A deep learning optimization library that makes distributed training and inference easy, efficient, and effective
- ChatGLM-6B: An Open Bilingual Dialogue Language Model (开源双语对话语言模型)
- Llama: Inference code for Llama models
Quick Overview
GLM-130B is an open bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It is designed for natural language processing tasks and aims to provide high-quality performance in both languages. The model was developed by Tsinghua University's Knowledge Engineering Group (THUDM) together with Zhipu AI and is available for research purposes.
Pros
- Large-scale bilingual model with 130 billion parameters
- Open-source and available for research purposes
- Supports both English and Chinese languages
- Demonstrates strong performance on various NLP tasks
Cons
- Requires significant computational resources for training and inference
- Limited documentation and examples available
- May have biases inherent in large language models
- Potential ethical concerns related to the use of large language models
Code Examples
# Load the GLM-130B tokenizer and model
# Note: THUDM checkpoints on the Hugging Face Hub generally need trust_remote_code=True,
# and a 130B-parameter model requires several high-memory GPUs just to load.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-130b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("THUDM/glm-130b", trust_remote_code=True)

# Generate text using GLM-130B
prompt = "Translate the following English text to Chinese: 'Hello, how are you?'"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=100)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
# Fine-tune GLM-130B on a custom dataset
# (train_dataset and data_collator are assumed to be prepared beforehand)
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=10_000,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)

trainer.train()
Getting Started
To get started with GLM-130B, follow these steps:
1. Install the required dependencies:

   pip install transformers torch

2. Load the model and tokenizer:

   from transformers import AutoTokenizer, AutoModelForCausalLM

   tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-130b", trust_remote_code=True)
   model = AutoModelForCausalLM.from_pretrained("THUDM/glm-130b", trust_remote_code=True)

3. Use the model for text generation or other NLP tasks:

   prompt = "Translate: 'Hello, world!' to Chinese"
   input_ids = tokenizer.encode(prompt, return_tensors="pt")
   output = model.generate(input_ids, max_length=50)
   generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
   print(generated_text)
Note: Due to the large size of the model, ensure you have sufficient computational resources and storage available.
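To keep memory usage manageable when loading, a sketch along the following lines may help. The Hub ID is carried over from the examples above, the accelerate package is assumed to be installed, and the officially supported inference path remains the SAT-based scripts from the GLM-130B repository:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Memory-conscious loading sketch; the kwargs below are assumptions, not official guidance.
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-130b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-130b",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # half precision roughly halves the weight memory
    device_map="auto",          # let accelerate shard the weights across visible GPUs
)
model.eval()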
Competitor Comparisons
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
Pros of gpt-neox
- More extensive documentation and usage guides
- Larger community and more active development
- Better support for distributed training across multiple GPUs/nodes
Cons of gpt-neox
- Higher computational requirements for training
- Less focus on multilingual capabilities
- More complex setup process for beginners
Code Comparison
GLM-130B:
from glm import GLM  # illustrative import; the official GLM-130B repo serves the model via SAT scripts

model = GLM.from_pretrained("THUDM/glm-130b")
# tokenizer is assumed to have been loaded beforehand
response, history = model.chat(tokenizer, "Hello, how are you?", history=[])
print(response)
gpt-neox:
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output[0]))
The code snippets show that GLM-130B has a more straightforward chat interface, while gpt-neox requires more setup but offers more fine-grained control over the generation process.
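For instance, the finer-grained control comes through the standard transformers generation arguments; a sketch reusing the model, tokenizer, and input_ids from the gpt-neox snippet above (the sampling values are illustrative):

# Sampling-based generation with explicit decoding controls (illustrative values).
output = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,    # sample instead of greedy decoding
    top_k=50,          # consider only the 50 most likely next tokens
    top_p=0.95,        # nucleus sampling threshold
    temperature=0.8,   # soften the output distribution
)
print(tokenizer.decode(output[0]))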
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- Broader focus on optimizing deep learning training and inference across various models
- More extensive documentation and tutorials for implementation
- Active development with frequent updates and new features
Cons of DeepSpeed
- Requires more setup and configuration compared to GLM-130B's streamlined approach
- May have a steeper learning curve for users new to deep learning optimization techniques
Code Comparison
GLM-130B:
from glm_130b import GLM130B
model = GLM130B.from_pretrained("path/to/model")
output = model.generate("Input text", max_length=100)
DeepSpeed:
import deepspeed
import torch
model, optimizer, _, _ = deepspeed.initialize(args=args, model=model, model_parameters=params)
output = model(input_ids, attention_mask=attention_mask)
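The args object in the DeepSpeed snippet usually carries a JSON configuration; a minimal sketch that passes the config directly instead (the keys are standard DeepSpeed options, the values are illustrative, and model is assumed to be defined as above):

import deepspeed

# Minimal illustrative DeepSpeed config; values are examples, not tuned settings.
ds_config = {
    "train_batch_size": 16,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)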
Summary
DeepSpeed offers a more versatile and actively developed solution for optimizing deep learning models, with extensive documentation and frequent updates. However, it may require more setup and have a steeper learning curve compared to GLM-130B. GLM-130B provides a more streamlined approach specifically for large language models, while DeepSpeed caters to a broader range of deep learning applications.
ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Pros of ChatGLM-6B
- Smaller model size (6B parameters) makes it more accessible for deployment on consumer hardware
- Faster inference time due to reduced complexity
- Lower computational and memory requirements for training and fine-tuning
Cons of ChatGLM-6B
- Potentially lower performance on complex tasks compared to the larger GLM-130B model
- May have a more limited knowledge base and understanding of context
- Possibly less capable of handling nuanced or specialized queries
Code Comparison
While both repositories are based on the GLM architecture, their implementations differ due to model size and specific optimizations. Here's a brief comparison of model initialization:
ChatGLM-6B:
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
GLM-130B:
model = GLM130B.from_pretrained(model_path, dtype=torch.float16, device_map="auto")
The ChatGLM-6B code is simpler and uses the AutoModel class, while GLM-130B requires a custom GLM130B class and specifies device mapping for distributed inference.
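For reference, a typical end-to-end ChatGLM-6B interaction (following its model card; the Chinese prompt "你好" simply means "Hello") looks roughly like this:

from transformers import AutoTokenizer, AutoModel

# Load ChatGLM-6B in half precision on a single GPU, as its model card suggests.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# chat() returns the reply plus the updated conversation history.
response, history = model.chat(tokenizer, "你好", history=[])
print(response)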
Inference code for Llama models
Pros of Llama
- Developed by Meta, benefiting from extensive resources and research
- Widely adopted and supported by a large community
- Offers a range of model sizes for different use cases
Cons of Llama
- More restrictive licensing terms
- Limited multilingual capabilities compared to GLM-130B
- Less focus on Chinese language support
Code Comparison
GLM-130B:
from glm import GLMModel
model = GLMModel.from_pretrained("GLM-130B")
output = model.generate("Hello, how are you?")
print(output)
Llama:
from llama import LlamaModel
model = LlamaModel.from_pretrained("llama-7b")
output = model.generate("Hello, how are you?")
print(output)
Both repositories provide large language models, but they have different focuses and strengths. GLM-130B emphasizes multilingual capabilities, especially for Chinese, while Llama offers a range of model sizes and benefits from Meta's extensive research. The code usage is similar, with minor differences in import statements and model names. Choose the model that best fits your specific requirements and use case.

Blog • Download Model • Demo • Email • Paper [ICLR 2023]
Google Group (Updates) or Wechat Group or Slack channel (Discussions)
GLM-130B: An Open Bilingual Pre-Trained Model
GLM-130B is an open bilingual (English & Chinese) bidirectional dense model with 130 billion parameters, pre-trained using the algorithm of General Language Model (GLM). It is designed to support inference tasks with the 130B parameters on a single A100 (40G * 8) or V100 (32G * 8) server. With INT4 quantization, the hardware requirements can further be reduced to a single server with 4 * RTX 3090 (24G) with almost no performance degradation. As of July 3rd, 2022, GLM-130B has been trained on over 400 billion text tokens (200B each for Chinese and English) and it has the following unique features:
- Bilingual: supports both English and Chinese.
- Performance (EN): better than GPT-3 175B (+4.0%), OPT-175B (+5.5%), and BLOOM-176B (+13.0%) on LAMBADA and slightly better than GPT-3 175B (+0.9%) on MMLU.
- Performance (CN): significantly better than ERNIE TITAN 3.0 260B on 7 zero-shot CLUE datasets (+24.26%) and 5 zero-shot FewCLUE datasets (+12.75%).
- Fast Inference: supports fast inference on both SAT and FasterTransformer (up to 2.5X faster) with a single A100 server.
- Reproducibility: all results (30+ tasks) can be easily reproduced with open-sourced code and model checkpoints.
- Cross-Platform: supports training and inference on NVIDIA, Hygon DCU, Ascend 910, and Sunway (Will be released soon).
This repository mainly focuses on the evaluation of GLM-130B. If you find our work and our open-sourced efforts useful, please give it a ⭐️ to encourage our following development! :)
News
- [2023.06.25] Release ChatGLM2-6B, an updated version of ChatGLM-6B which introduces Stronger Performance (MMLU (+23%), CEval (+33%), GSM8K (+571%), BBH (+60%)), Longer Context (from 2K in ChatGLM-6B to 32K, trained with a context length of 8K during dialogue alignment), and More Efficient Inference (42% faster under the official implementation; the dialogue length supported by 6G GPU memory has increased from 1K to 8K). For more details, please refer to ChatGLM2-6B.
- [2023.06.14] We release the research WebGLM, which enables efficient and accurate web-enhanced question answering. All code and data are released!
- [2023.03.14] We are happy to introduce ChatGLM, a bilingual dialogue language model based on GLM-130B, and its open-sourced version ChatGLM-6B which can be run under only 6GB GPU memory!
- [2023.01.21] GLM-130B has been accepted to ICLR 2023!
- [2022.10.06] Our paper for GLM-130B is out!
- [2022.08.24] We are proud to publish the quantized version for GLM-130B. While preserving the activation precision as FP16, the model weights can be quantized to as low as INT4 with almost no degradation of performance, further reducing the hardware requirements of the GLM-130B to a single server with 4 * RTX 3090 (24G)! See Quantization of GLM-130B for details.
For smaller models, please find monolingual GLMs (English: 10B/2B/515M/410M/335M/110M, Chinese: 10B/335M) and a 1B multilingual GLM (104 languages).
Getting Started
Environment Setup
Hardware
Hardware | GPU Memory | Quantization | Weight Offload |
---|---|---|---|
8 * A100 | 40 GB | No | No |
8 * V100 | 32 GB | No | Yes (BMInf) |
8 * V100 | 32 GB | INT8 | No |
8 * RTX 3090 | 24 GB | INT8 | No |
4 * RTX 3090 | 24 GB | INT4 | No |
8 * RTX 2080 Ti | 11 GB | INT4 | No |
It is recommended to use an A100 (40G * 8) server, as all reported GLM-130B evaluation results (~30 tasks) can be easily reproduced with a single A100 server in about half a day. With INT8/INT4 quantization, efficient inference on a single server with 4 * RTX 3090 (24G) is possible; see Quantization of GLM-130B for details. By combining quantization and weight-offloading techniques, GLM-130B can also run inference on servers with even less GPU memory; see Low-Resource Inference for details.
Software
The GLM-130B code is built on top of SAT. We recommend using Miniconda to manage your environment and installing additional dependencies via pip install -r requirements.txt. Here are the recommended environment configurations (a concrete setup sketch follows the list):
- Python 3.9+ / CUDA 11+ / PyTorch 1.10+ / DeepSpeed 0.6+ / Apex (installation with CUDA and C++ extensions is required, see here)
- SwissArmyTransformer>=0.2.11 is required for quantization
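As a starting point, an environment along the following lines should satisfy the requirements above (version choices beyond the stated minimums are illustrative, not prescriptive):

# Illustrative environment setup; adjust versions to match your CUDA installation.
conda create -n glm-130b python=3.9 -y
conda activate glm-130b
pip install torch                           # pick the wheel built for your CUDA 11.x toolkit
pip install -r requirements.txt             # dependencies listed by the repository
pip install "SwissArmyTransformer>=0.2.11"  # required for quantization
# Apex must be built from source with its CUDA and C++ extensions enabled;
# see the NVIDIA Apex repository for the exact pip command.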
Model weights
Download GLM-130B's model checkpoint from here, make sure all 60 chunks are downloaded completely, then use the following commands to merge them into a single archive file and extract it:
cat glm-130b-sat.tar.part_* > glm-130b-sat.tar
tar xvf glm-130b-sat.tar
Set CHECKPOINT_PATH in configs/model_glm_130b.sh to the path of the extracted folder. Since the checkpoint file is up to 260G, it is recommended to use an SSD or a RAM disk to reduce the checkpoint loading time. Since the checkpoint we distribute is split with 8-way tensor parallelism, a conversion script is also provided if you need to change the tensor-parallel dimension.
python tools/convert_tp.py \
--input-folder <SRC_CKPT_PATH> \
--output-folder <DST_CKPT_PATH> \
--target-tp <TARGET_TP>
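For example, to re-partition the released 8-way checkpoint for a 4-GPU setup (such as the 4 * RTX 3090 configuration above), an invocation might look like this (the paths are placeholders):

# Re-split the released 8-way tensor-parallel checkpoint into 4 parts.
python tools/convert_tp.py \
    --input-folder /data/glm-130b-sat \
    --output-folder /data/glm-130b-sat-tp4 \
    --target-tp 4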
Left-To-Right Generation / Blank Filling
bash scripts/generate.sh --input-source interactive
You can also specify an input file by --input-source input.txt.
GLM-130B uses two different mask tokens: [MASK] for short blank filling and [gMASK] for left-to-right long text generation. When the input does not contain any MASK token, [gMASK] will be automatically appended to the end of the text.
Left-To-Right Generation Examples
Example 1
Input: Who is the greatest artist? The greatest artist is [gMASK]
Output: Who is the greatest artist? The greatest artist is not the one who paints the most beautiful picture, or writes the best poem, or plays the best symphony. The greatest artist is the one whose work has the greatest impact on the lives of others. The Greatest Artist is the artist who inspires us to be our best selves. The Great Artist is not the artist whose work is flawless. The great artist is not one who is perfect. The true great artist has the ability to make us feel something. The artist who makes us feel is the great artist. The art of the great artists is to make you feel something that you have never felt before. The greatness of an artist is in the ability of their work to touch the lives and hearts of others, and to make them feel something they have never experienced before. This is the true greatness of the artist. 2. The Significance of the Great Artist The significance of the greatest artist cannot be overstated. The significance and impact of the art of a great artist cannot even be measured. The impact of a single work of art can change the course of history. A single work can change a person's life. A work of great art can make you laugh, make you cry, make your heart race, or make you think. A great work of artistic expression has the power to change the way you look at the world. The work of a truly great artist can change your life forever. 3. The Impact of the Greatest Artist The greatest artists have the ability and the power of their art to change people's lives. They have the power and the ability, through their art, to make people feel something new. The power of the work of the truly great artists cannot be measured, but it can be felt. The greater the impact of an artists work, the greater the power it has to change lives. 4. The Influence of the Art of the Artist The art and the work created by the greatest artists has the potential to change a life. The influence of the artists work cannot be quantified, but is felt by those who experience it. The truly great work has an ability to touch people's lives in a way that no other work can.
Example 2 (Chinese)
Input: 问题：冬天，中国哪座城市最适合避寒？问题描述：能推荐一些国内适合冬天避寒的城市吗？回答用户：旅游爱好者 回答： [gMASK]
Output: 问题:冬天,中国哪座城市最适合避寒?问题描述:能推荐一些国内适合冬天避寒的城市吗?回答用户:旅游爱好者 回答:避寒的话,建议去海南三亚。 三亚位于海南岛的最南端,是中国最南部的热带滨海旅游城市,中国海滨城市,是中国空气质量最好的城市。全国最长寿地区(平均寿命80岁)。三亚市别称鹿城,又被称为"东方夏威夷",位居中国四大一线旅游城市"三威杭厦"之首,拥有全岛最美丽的海滨风光。三亚拥有被无数城市嫉妒的清新空气,柔软的海滩。在沙滩散步、沐浴温暖阳光,在海边玩耍,在雨林里面呼吸健康,欣赏自然奇观,一切都是那么令人享受。 气候 三亚地处热带,属热带季风气候区域,终年气温高,寒暑变化不大,年平均气温25.5度。三亚最炎热的月份是6月,平均气温为28.5°C,极端最高气温为35.7°C,三亚没有明显的冬季,而且冬季更是到海滨旅游的最好时候,所以三亚是全年全天候避寒、消夏、度假、旅游的好地方。不过,三亚最旺的旅游时间是从10月份到农历春节,这个时候的人特别多,房间也不好订,建议最好避开高峰。三亚的三个旅游高峰期是"五一"、"十一"、"春节",这三个节日期间的房价都会上浮几倍,如果你选择了这三个假期来三亚的话要提前一个月预订房间。
Blank Filling Examples
Example 1
Input: Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.
Output: Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.
Example 2 (Chinese)
Input: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建，门高25米，顶上矗立两武士青铜古兵车铸像。
Output: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念拿破仑胜利而建,门高25米,顶上矗立两武士青铜古兵车铸像。
Arguments Useful in Generation
- --input-source [path] or "interactive" The input file's path. It can also be "interactive", which will launch a CLI.
- --output-path [path] The folder containing the results.
- --out-seq-length [int] The maximum sequence length for generation (including context).
- --min-gen-length [int] The minimum generation length for each MASK.
- --sampling-strategy "BaseStrategy" or "BeamSearchStrategy" The sampling strategy used.
  - For BeamSearchStrategy:
    - --num-beams [int] The number of beams.
    - --length-penalty [float] The length penalty factor applied during beam search.
    - --no-repeat-ngram-size [int] Prohibit repeated n-gram generation.
    - --print-all-beams Print the generated results for all beams.
  - For BaseStrategy:
    - --top-k [int] Top-k sampling.
    - --top-p [float] Top-p sampling.
    - --temperature [float] The sampling temperature.
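Putting several of these together, a beam-search generation run over a prepared input file might look like the following (the flag values are illustrative, not recommended defaults):

bash scripts/generate.sh \
    --input-source input.txt \
    --output-path ./generation_results \
    --out-seq-length 512 \
    --min-gen-length 32 \
    --sampling-strategy BeamSearchStrategy \
    --num-beams 4 \
    --length-penalty 1.0 \
    --no-repeat-ngram-size 3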
Evaluation
We use YAML files to define tasks. Specifically, you can add multiple tasks or folders at a time for evaluation, and the evaluation script will automatically collect all YAML files under those folders recursively.
bash scripts/evaluate.sh task1.yaml task2.yaml dir1 dir2 ...
Download our evaluation dataset here, and set DATA_PATH in scripts/evaluate.sh to your local dataset directory. The task folder contains the YAML files for the 30+ tasks we evaluated for GLM-130B. Take the CoLA task for example: run bash scripts/evaluate.sh tasks/bloom/glue_cola.yaml, which outputs an accuracy of ~65% for the best prompt and ~57% for the median.
Expected Output
MultiChoiceTaskConfig(name='glue_cola', type=<TaskType.MULTICHOICE: 'mul'>, path='/thudm/LargeScale/data/zeroshot/bloom/glue_cola', module=None, metrics=['Accuracy'], use_task_mask=False, use_multitask_encoding=False, unidirectional=False, max_seq_length=2048, file_pattern={'validation': '**/validation.jsonl'}, micro_batch_size=8)
Evaluating task glue_cola:
Evaluating group validation:
Finish Following_sentence_acceptable/mul/validation.jsonl, Accuracy = 42.665
Finish Make_sense_yes_no/mul/validation.jsonl, Accuracy = 56.951
Finish Previous_sentence_acceptable/mul/validation.jsonl, Accuracy = 65.197
Finish editing/mul/validation.jsonl, Accuracy = 57.622
Finish is_this_correct/mul/validation.jsonl, Accuracy = 65.197
Evaluation results of task glue_cola:
Group validation Accuracy: max = 65.197, median = 57.622, average = 57.526
Finish task glue_cola in 101.2s.
Multi-node evaluation can be configured by setting HOST_FILE_PATH (required by the DeepSpeed launcher) in scripts/evaluate_multiple_node.sh. Set DATA_PATH in scripts/evaluate_multiple_node.sh and run the following command to evaluate all the tasks in the ./tasks directory.
bash scripts/evaluate_multiple_node.sh ./tasks
See Evaluate Your Own Tasks for details on how to add new tasks.
2.5X faster Inference using FasterTransformer
By adapting the GLM-130B model to FasterTransformer, a highly optimized transformer model library by NVIDIA, we can reach up to 2.5X speedup on generation; see Inference with FasterTransformer for details.
License
This repository is licensed under the Apache-2.0 license. The use of GLM-130B model weights is subject to the Model License.
Citation
If you find our work useful, please consider citing GLM-130B:
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
You may also consider GLM's original work in your reference:
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}