
zai-org / GLM-130B

GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)


Top Related Projects

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries


DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型


Inference code for Llama models

Quick Overview

GLM-130B is an open bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It is designed for natural language processing tasks and aims to deliver strong performance in both languages. The model was developed by Tsinghua University and Zhipu AI and is released for research purposes.

Pros

  • Large-scale bilingual model with 130 billion parameters
  • Open-source and available for research purposes
  • Supports both English and Chinese languages
  • Demonstrates strong performance on various NLP tasks

Cons

  • Requires significant computational resources for training and inference
  • Limited documentation and examples available
  • May have biases inherent in large language models
  • Potential ethical concerns related to the use of large language models

Code Examples

Note: the snippets below are illustrative. The official GLM-130B weights are distributed as SAT checkpoints (see the README below), so the Hugging Face model ID and loading path shown here assume a Transformers-compatible release.

# Load the GLM-130B model (assumes a Transformers-compatible checkpoint)
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-130b")
model = AutoModelForCausalLM.from_pretrained("THUDM/glm-130b")

# Generate text using GLM-130B
prompt = "Translate the following English text to Chinese: 'Hello, how are you?'"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=100)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

# Fine-tune GLM-130B on a custom dataset
# (train_dataset and data_collator are assumed to be prepared beforehand)
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=10_000,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)

trainer.train()

Getting Started

To get started with GLM-130B, follow these steps:

  1. Install the required dependencies:

    pip install transformers torch
    
  2. Load the model and tokenizer:

    from transformers import AutoTokenizer, AutoModelForCausalLM
    
    tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-130b")
    model = AutoModelForCausalLM.from_pretrained("THUDM/glm-130b")
    
  3. Use the model for text generation or other NLP tasks:

    prompt = "Translate: 'Hello, world!' to Chinese"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_length=50)
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    print(generated_text)
    

Note: Due to the large size of the model, ensure you have sufficient computational resources and storage available. As with the snippets above, this assumes a Transformers-compatible checkpoint; the official GLM-130B weights are distributed as SAT checkpoints (see the README below).

Competitor Comparisons

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Pros of gpt-neox

  • More extensive documentation and usage guides
  • Larger community and more active development
  • Better support for distributed training across multiple GPUs/nodes

Cons of gpt-neox

  • Higher computational requirements for training
  • Less focus on multilingual capabilities
  • More complex setup process for beginners

Code Comparison

GLM-130B:

# Illustrative pseudocode: GLM-130B is actually run via the SAT-based scripts in this
# repo, and the chat() interface shown here comes from the ChatGLM models;
# `tokenizer` is assumed to be loaded separately.
from glm import GLM

model = GLM.from_pretrained("THUDM/glm-130b")
response, history = model.chat(tokenizer, "Hello, how are you?", history=[])
print(response)

gpt-neox:

from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output[0]))

The code snippets show that GLM-130B has a more straightforward chat interface, while gpt-neox requires more setup but offers more fine-grained control over the generation process.


DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Broader focus on optimizing deep learning training and inference across various models
  • More extensive documentation and tutorials for implementation
  • Active development with frequent updates and new features

Cons of DeepSpeed

  • Requires more setup and configuration compared to GLM-130B's streamlined approach
  • May have a steeper learning curve for users new to deep learning optimization techniques

Code Comparison

GLM-130B:

# Illustrative pseudocode: this repo does not ship a `glm_130b` Python package;
# actual inference goes through the SAT-based scripts described in the README below.
from glm_130b import GLM130B

model = GLM130B.from_pretrained("path/to/model")
output = model.generate("Input text", max_length=100)

DeepSpeed:

import deepspeed
import torch

# `args`, `model`, `params`, `input_ids`, and `attention_mask` are assumed to be
# defined elsewhere; DeepSpeed wraps an existing model and its parameters.
model, optimizer, _, _ = deepspeed.initialize(args=args, model=model, model_parameters=params)
output = model(input_ids, attention_mask=attention_mask)

Summary

DeepSpeed offers a more versatile and actively developed solution for optimizing deep learning models, with extensive documentation and frequent updates. However, it may require more setup and have a steeper learning curve compared to GLM-130B. GLM-130B provides a more streamlined approach specifically for large language models, while DeepSpeed caters to a broader range of deep learning applications.

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型

Pros of ChatGLM-6B

  • Smaller model size (6B parameters) makes it more accessible for deployment on consumer hardware
  • Faster inference time due to reduced complexity
  • Lower computational and memory requirements for training and fine-tuning

Cons of ChatGLM-6B

  • Potentially lower performance on complex tasks compared to the larger GLM-130B model
  • May have a more limited knowledge base and understanding of context
  • Possibly less capable of handling nuanced or specialized queries

Code Comparison

While both repositories are based on the GLM architecture, their implementations differ due to model size and specific optimizations. Here's a brief comparison of model initialization:

ChatGLM-6B:

from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

GLM-130B:

# Illustrative pseudocode: GLM130B is not a published class; actual loading uses SAT checkpoints
model = GLM130B.from_pretrained(model_path, dtype=torch.float16, device_map="auto")

The ChatGLM-6B code is simpler and uses the AutoModel class, while GLM-130B requires a custom GLM130B class and specifies device mapping for distributed inference.


Inference code for Llama models

Pros of Llama

  • Developed by Meta, benefiting from extensive resources and research
  • Widely adopted and supported by a large community
  • Offers a range of model sizes for different use cases

Cons of Llama

  • More restrictive licensing terms
  • Limited multilingual capabilities compared to GLM-130B
  • Less focus on Chinese language support

Code Comparison

GLM-130B:

# Illustrative pseudocode; actual GLM-130B inference uses the SAT-based scripts in this repo
from glm import GLMModel

model = GLMModel.from_pretrained("GLM-130B")
output = model.generate("Hello, how are you?")
print(output)

Llama:

# Illustrative pseudocode; the official Llama repo exposes a different generation API
from llama import LlamaModel

model = LlamaModel.from_pretrained("llama-7b")
output = model.generate("Hello, how are you?")
print(output)

Both repositories provide large language models, but they have different focuses and strengths. GLM-130B emphasizes multilingual capabilities, especially for Chinese, while Llama offers a range of model sizes and benefits from Meta's extensive research. The code usage is similar, with minor differences in import statements and model names. Choose the model that best fits your specific requirements and use case.


README

🌐 Blog • ⏬ Download Model • 🪧 Demo • ✉️ Email • 📃 Paper [ICLR 2023]

💬 Google Group (Updates) or Wechat Group or Slack channel (Discussions)

GLM-130B: An Open Bilingual Pre-Trained Model

GLM-130B is an open bilingual (English & Chinese) bidirectional dense model with 130 billion parameters, pre-trained using the algorithm of General Language Model (GLM). It is designed to support inference tasks with the 130B parameters on a single A100 (40G * 8) or V100 (32G * 8) server. With INT4 quantization, the hardware requirements can further be reduced to a single server with 4 * RTX 3090 (24G) with almost no performance degradation. As of July 3rd, 2022, GLM-130B has been trained on over 400 billion text tokens (200B each for Chinese and English) and it has the following unique features:

  • Bilingual: supports both English and Chinese.
  • Performance (EN): better than GPT-3 175B (+4.0%), OPT-175B (+5.5%), and BLOOM-176B (+13.0%) on LAMBADA and slightly better than GPT-3 175B (+0.9%) on MMLU.
  • Performance (CN): significantly better than ERNIE TITAN 3.0 260B on 7 zero-shot CLUE datasets (+24.26%) and 5 zero-shot FewCLUE datasets (+12.75%).
  • Fast Inference: supports fast inference on both SAT and FasterTransformer (up to 2.5X faster) with a single A100 server.
  • Reproducibility: all results (30+ tasks) can be easily reproduced with open-sourced code and model checkpoints.
  • Cross-Platform: supports training and inference on NVIDIA, Hygon DCU, Ascend 910, and Sunway (Will be released soon).

This repository mainly focuses on the evaluation of GLM-130B. If you find our work and our open-sourced efforts useful, please give us a ⭐️ to encourage our future development! :)

News

  • [2023.06.25] Released ChatGLM2-6B, an updated version of ChatGLM-6B that introduces Stronger Performance (MMLU +23%, CEval +33%, GSM8K +571%, BBH +60%), Longer Context (extended from 2K in ChatGLM-6B to 32K, with a context length of 8K used during dialogue alignment training), and More Efficient Inference (42% faster under the official implementation; the dialogue length supported by 6 GB of GPU memory has increased from 1K to 8K). For more details, please refer to ChatGLM2-6B.
  • [2023.06.14] We released WebGLM, which enables efficient and accurate web-enhanced question answering. All code and data are released!
  • [2023.03.14] We are happy to introduce ChatGLM, a bilingual dialogue language model based on GLM-130B, and its open-sourced version ChatGLM-6B, which can run with only 6 GB of GPU memory!
  • [2023.01.21] GLM-130B has been accepted to ICLR 2023!
  • [2022.10.06] Our paper for GLM-130B is out!
  • [2022.08.24] We are proud to publish the quantized version of GLM-130B. While keeping activations in FP16, the model weights can be quantized down to INT4 with almost no degradation in performance, further reducing the hardware requirements of GLM-130B to a single server with 4 * RTX 3090 (24G)! See Quantization of GLM-130B for details.

For smaller models, please find monolingual GLMs (English: 10B/2B/515M/410M/335M/110M, Chinese: 10B/335M) and a 1B multilingual GLM (104 languages).

Getting Started

Environment Setup

Hardware

| Hardware        | GPU Memory | Quantization | Weight Offload |
|-----------------|------------|--------------|----------------|
| 8 * A100        | 40 GB      | No           | No             |
| 8 * V100        | 32 GB      | No           | Yes (BMInf)    |
| 8 * V100        | 32 GB      | INT8         | No             |
| 8 * RTX 3090    | 24 GB      | INT8         | No             |
| 4 * RTX 3090    | 24 GB      | INT4         | No             |
| 8 * RTX 2080 Ti | 11 GB      | INT4         | No             |

It is recommended to use an A100 (40G * 8) server, as all reported GLM-130B evaluation results (~30 tasks) can be reproduced with a single A100 server in about half a day. With INT8/INT4 quantization, efficient inference on a single server with 4 * RTX 3090 (24G) is possible; see Quantization of GLM-130B for details. By combining quantization and weight offloading, GLM-130B can also run inference on servers with even less GPU memory; see Low-Resource Inference for details.

Software

The GLM-130B code is built on top of SAT. We recommend using Miniconda to manage your environment and installing additional dependencies via pip install -r requirements.txt. The recommended environment configurations are listed below (a minimal setup sketch follows the list):

  • Python 3.9+ / CUDA 11+ / PyTorch 1.10+ / DeepSpeed 0.6+ / Apex (installation with CUDA and C++ extensions is required, see here)
  • SwissArmyTransformer>=0.2.11 is required for quantization
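
As a concrete sketch, assuming a fresh Miniconda environment (the environment name is arbitrary, and Apex still has to be built separately with its CUDA and C++ extensions as linked above):

conda create -n glm-130b python=3.9 -y
conda activate glm-130b
pip install -r requirements.txt
# Only needed if you plan to use INT8/INT4 quantization
pip install "SwissArmyTransformer>=0.2.11"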

Model weights

Download the GLM-130B’s model checkpoint from here, make sure all 60 chunks are downloaded completely, then use the following command to merge them into a single archive file and extract it:

cat glm-130b-sat.tar.part_* > glm-130b-sat.tar
tar xvf glm-130b-sat.tar
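
To double-check that all 60 chunks were downloaded before merging, a quick count works (a minimal sketch; the file name pattern follows the commands above):

# Should print 60; re-download any missing or truncated chunks
ls glm-130b-sat.tar.part_* | wc -l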

Set CHECKPOINT_PATH in configs/model_glm_130b.sh to the path of the extracted folder. Since the checkpoint file is up to 260G, it is recommended to use an SSD or RAM disk to reduce the checkpoint loading time. Since the checkpoint we distribute is split into 8-way tensor parallel slices, a conversion script is also provided if you need to change the tensor parallel dimension.

python tools/convert_tp.py \
    --input-folder <SRC_CKPT_PATH>  \
    --output-folder <DST_CKPT_PATH> \
    --target-tp <TARGET_TP>
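
For example, to repartition the released 8-way checkpoint for the 4 * RTX 3090 (INT4) setup listed above, a 4-way conversion might look like this (the folder names are placeholders):

python tools/convert_tp.py \
    --input-folder ./glm-130b-sat \
    --output-folder ./glm-130b-sat-tp4 \
    --target-tp 4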

Left-To-Right Generation / Blank Filling

bash scripts/generate.sh --input-source interactive

You can also specify an input file by --input-source input.txt.

GLM-130B uses two different mask tokens: [MASK] for short blank filling and [gMASK] for left-to-right long text generation. When the input does not contain any MASK token, [gMASK] will be automatically appended to the end of the text.

Left-To-Right Generation Examples

Example 1

Input: Who is the greatest artist? The greatest artist is [gMASK]

Output: Who is the greatest artist? The greatest artist is not the one who paints the most beautiful picture, or writes the best poem, or plays the best symphony. The greatest artist is the one whose work has the greatest impact on the lives of others. The Greatest Artist is the artist who inspires us to be our best selves. The Great Artist is not the artist whose work is flawless. The great artist is not one who is perfect. The true great artist has the ability to make us feel something. The artist who makes us feel is the great artist. The art of the great artists is to make you feel something that you have never felt before. The greatness of an artist is in the ability of their work to touch the lives and hearts of others, and to make them feel something they have never experienced before. This is the true greatness of the artist. 2. The Significance of the Great Artist The significance of the greatest artist cannot be overstated. The significance and impact of the art of a great artist cannot even be measured. The impact of a single work of art can change the course of history. A single work can change a person’s life. A work of great art can make you laugh, make you cry, make your heart race, or make you think. A great work of artistic expression has the power to change the way you look at the world. The work of a truly great artist can change your life forever. 3. The Impact of the Greatest Artist The greatest artists have the ability and the power of their art to change people’s lives. They have the power and the ability, through their art, to make people feel something new. The power of the work of the truly great artists cannot be measured, but it can be felt. The greater the impact of an artists work, the greater the power it has to change lives. 4. The Influence of the Art of the Artist The art and the work created by the greatest artists has the potential to change a life. The influence of the artists work cannot be quantified, but is felt by those who experience it. The truly great work has an ability to touch people’ lives in a way that no other work can.

Example 2 (Chinese)

Input: 问题:冬天,中国哪座城市最适合避寒?问题描述:能推荐一些国内适合冬天避寒的城市吗?回答用户:旅游爱好者 回答: [gMASK]

Output: 问题:冬天,中国哪座城市最适合避寒?问题描述:能推荐一些国内适合冬天避寒的城市吗?回答用户:旅游爱好者 回答:避寒的话,建议去海南三亚。 三亚位于海南岛的最南端,是中国最南部的热带滨海旅游城市,中国海滨城市,是中国空气质量最好的城市、全国最长寿地区(平均寿命80岁)。三亚市别称鹿城,又被称为"东方夏威夷",位居中国四大一线旅游城市"三威杭厦"之首,拥有全岛最美丽的海滨风光。三亚拥有被无数城市嫉妒的清新空气,柔和海滩。在沙滩悠闲散步、沐浴傍晚温和阳光,在海边玩耍,在雨林里面呼吸健康,欣赏自然奇观,一切都是那么令人享受。 气候 三亚地处热带,属热带季风气候区域,终年气温高,寒暑变化不大,年平均气温25.5度。三亚最热的月份是6月,平均气温为28.5°C,极端最高气温为35.7°C ,三亚没有明显的冬季,而且冬季更是到海滨旅游的最好时候,所以三亚是全年全天候避寒、消暑、度假、旅游的好地方。不过,三亚最旺的旅游时间是从10月份到农历春节,这个时候的人特别多,房间也不好订,建议最好避开高峰。三亚的三个旅游高峰期是"五一"、"十一"、"春节",这三个节日期间的房价都会上浮几倍,如果你选择了这三个假期来三亚的话要提前一个星期定房和咨询。

Blank Filling Examples

Example 1

Input: Ng is an adjunct professor at [MASK] (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.

Output: Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its Stanford AI Lab or SAIL ). Also a pioneer in online education, Ng co-founded Coursera and deeplearning.ai.

Example 2 (Chinese)

Input: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建,门高25米,顶上矗立两武士青铜古兵车铸像。

Output: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念拿破仑胜利而建,门高25米,顶上矗立两武士青铜古兵车铸像。

Arguments Useful in Generation

The generation script accepts the following arguments (an example invocation follows the list):

  • --input-source [path] or "interactive" The path of the input file; pass "interactive" to launch a CLI.
  • --output-path [path] The folder to store the results in.
  • --out-seq-length [int] The maximum sequence length for generation (including the context).
  • --min-gen-length [int] The minimum generation length for each MASK.
  • --sampling-strategy "BaseStrategy" or "BeamSearchStrategy". The sampling strategy used.
    • For BeamSearchStrategy:
      • --num-beams [int] The number of beams.
      • --length-penalty [float] The length penalty applied during beam search.
      • --no-repeat-ngram-size [int] Prohibit repeated n-grams of this size in the generation.
      • --print-all-beam Print the generated results for all beams.
    • For BaseStrategy:
      • --top-k [int] Top-k sampling.
      • --top-p [float] Top-p (nucleus) sampling.
      • --temperature [float] The sampling temperature.
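
For instance, a batch run over a prompt file with nucleus sampling might look like the following (the file name and sampling values are illustrative, not recommended settings):

bash scripts/generate.sh \
    --input-source input.txt \
    --output-path ./generation_results \
    --out-seq-length 256 \
    --min-gen-length 10 \
    --sampling-strategy BaseStrategy \
    --top-k 0 \
    --top-p 0.7 \
    --temperature 0.9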

Evaluation

We use YAML files to define tasks. You can add multiple tasks or folders at a time for evaluation, and the evaluation script will recursively collect all YAML files under those folders.

bash scripts/evaluate.sh task1.yaml task2.yaml dir1 dir2 ...

Download our evaluation dataset here, and set DATA_PATH in scripts/evaluate.sh to your local dataset directory. The task folder contains the YAML files for the 30+ tasks we evaluated for GLM-130B. Take the CoLA task as an example: running bash scripts/evaluate.sh tasks/bloom/glue_cola.yaml outputs an accuracy of ~65% for the best prompt and ~57% for the median.

Expected Output
MultiChoiceTaskConfig(name='glue_cola', type=<TaskType.MULTICHOICE: 'mul'>, path='/thudm/LargeScale/data/zeroshot/bloom/glue_cola', module=None, metrics=['Accuracy'], use_task_mask=False, use_multitask_encoding=False, unidirectional=False, max_seq_length=2048, file_pattern={'validation': '**/validation.jsonl'}, micro_batch_size=8)
Evaluating task glue_cola:
  Evaluating group validation:
      Finish Following_sentence_acceptable/mul/validation.jsonl, Accuracy = 42.665
      Finish Make_sense_yes_no/mul/validation.jsonl, Accuracy = 56.951
      Finish Previous_sentence_acceptable/mul/validation.jsonl, Accuracy = 65.197
      Finish editing/mul/validation.jsonl, Accuracy = 57.622
      Finish is_this_correct/mul/validation.jsonl, Accuracy = 65.197
Evaluation results of task glue_cola:
  Group validation Accuracy: max = 65.197, median = 57.622, average = 57.526
Finish task glue_cola in 101.2s. 

Multi-node evaluation can be configured by setting HOST_FILE_PATH (required by the DeepSpeed launcher) in scripts/evaluate_multiple_node.sh. Set DATA_PATH in scripts/evaluate_multiple_node.sh and run the following command to evaluate all the tasks in the ./tasks directory.

bash scripts/evaluate_multiple_node.sh ./tasks
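
HOST_FILE_PATH should point to a DeepSpeed-style hostfile listing your nodes and their GPU slots; a minimal sketch with placeholder host names:

cat > hostfile <<'EOF'
node1 slots=8
node2 slots=8
EOF
# Set HOST_FILE_PATH in scripts/evaluate_multiple_node.sh to this file's path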

See Evaluate Your Own Tasks for details on how to add new tasks.

2.5X faster Inference using FasterTransformer

By adapting the GLM-130B model to FasterTransformer, a highly optimized transformer library from NVIDIA, we can reach up to a 2.5X speedup in generation; see Inference with FasterTransformer for details.

License

This repository is licensed under the Apache-2.0 license. The use of GLM-130B model weights is subject to the Model License.

Citation

If you find our work useful, please consider citing GLM-130B:

@article{zeng2022glm,
  title={Glm-130b: An open bilingual pre-trained model},
  author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
  journal={arXiv preprint arXiv:2210.02414},
  year={2022}
}

You may also consider GLM's original work in your reference:

@inproceedings{du2022glm,
  title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
  author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={320--335},
  year={2022}
}