
microsoft/LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"


Top Related Projects

  • 🤗 PEFT (15,745 stars): State-of-the-art Parameter-Efficient Fine-Tuning.
  • Alpaca-LoRA: Instruct-tune LLaMA on consumer hardware.
  • text-generation-webui: A Gradio web UI for Large Language Models.
  • Llama (56,019 stars): Inference code for Llama models.
  • fairseq (30,331 stars): Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
  • gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries.

Quick Overview

LoRA (Low-Rank Adaptation) is a technique for efficient fine-tuning of large language models. It reduces the number of trainable parameters by adding pairs of rank decomposition matrices to existing weights, enabling faster and more memory-efficient adaptation of pre-trained models to specific tasks or domains.
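
To make the mechanism concrete, here is a minimal, illustrative sketch of the idea (not the loralib or PEFT implementation; the layer size, rank, and scaling below are arbitrary): the pre-trained weight stays frozen, and only a pair of low-rank matrices A and B is trained, with their product added to the layer's output.

import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                          # freeze the pre-trained weights
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # rank-r factor, small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))         # zero init: the update starts at 0
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LowRankAdapter(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))   # only A and B receive gradients during training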

Pros

  • Significantly reduces memory usage and training time compared to full fine-tuning
  • Maintains model quality while using fewer parameters
  • Allows for easy switching between different adaptations of the same base model
  • Compatible with various model architectures and tasks

Cons

  • May not capture all nuances of full fine-tuning for some complex tasks
  • Requires careful tuning of hyperparameters for optimal performance
  • Limited to linear transformations, which may not be suitable for all adaptation scenarios
  • Potential for overfitting if not properly regularized

Code Examples

  1. Applying LoRA to a pre-trained model:
import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig, TaskType

model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1
)
model = get_peft_model(model, peft_config)
  2. Training a LoRA-adapted model:
from transformers import Trainer, TrainingArguments

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./lora_gpt2"),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset
)
trainer.train()
  3. Merging LoRA weights with the base model:
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_model = PeftModel.from_pretrained(base_model, "path/to/lora/weights")
merged_model = peft_model.merge_and_unload()
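
As a quick sanity check after applying LoRA (using the wrapped model from the first example), PEFT can report how many parameters are actually trainable, and the adapter can be saved on its own; the output directory below is just an example.

# Verify that only the LoRA matrices are trainable
model.print_trainable_parameters()
# Save just the small adapter weights, not the full GPT-2 base model
model.save_pretrained("./lora_gpt2_adapter")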

Getting Started

To use LoRA with the PEFT library:

  1. Install the required packages:
pip install transformers peft torch
  2. Load a pre-trained model and apply LoRA:
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig, TaskType

model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32, lora_dropout=0.1)
peft_model = get_peft_model(model, peft_config)
  3. Use the adapted model for fine-tuning or inference as needed.
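
For example, generation with the adapted model goes through the standard Transformers API (the prompt and generation settings below are placeholders):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("LoRA reduces trainable parameters by", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))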

Competitor Comparisons

🤗 PEFT (15,745 stars): State-of-the-art Parameter-Efficient Fine-Tuning.

Pros of PEFT

  • Broader range of parameter-efficient fine-tuning techniques (LoRA, Prefix Tuning, P-Tuning, Prompt Tuning, etc.)
  • Seamless integration with Hugging Face's Transformers library
  • Active development and community support

Cons of PEFT

  • May have a steeper learning curve due to more options
  • Potentially slower inference compared to LoRA's specialized implementation

Code Comparison

LoRA:

import loralib as lora

# Replace individual layers with LoRA counterparts, then freeze everything else
layer = lora.Linear(in_features, out_features, r=8, lora_alpha=16)
lora.mark_only_lora_as_trainable(model)

PEFT:

from peft import get_peft_model, LoraConfig

peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

Both libraries offer similar functionality for applying LoRA, but PEFT provides a more standardized interface across different parameter-efficient fine-tuning methods. LoRA's implementation might be more optimized for its specific use case, while PEFT offers greater flexibility and integration with the broader Hugging Face ecosystem.

Alpaca-LoRA: Instruct-tune LLaMA on consumer hardware.

Pros of Alpaca-LoRA

  • Specifically designed for fine-tuning language models like LLaMA
  • Includes pre-built datasets and scripts for easy fine-tuning
  • Focuses on instruction-following tasks and conversational AI

Cons of Alpaca-LoRA

  • Limited to LLaMA-based models, less versatile than LoRA
  • Smaller community and fewer resources compared to LoRA
  • May require more computational resources for fine-tuning

Code Comparison

LoRA:

import loralib as lora

# Apply LoRA to a linear layer
lora_layer = lora.Linear(in_features, out_features, r=rank)

Alpaca-LoRA:

from peft import LoraConfig, get_peft_model

config = LoraConfig(r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)

Both repositories implement the LoRA technique, but Alpaca-LoRA is tailored for fine-tuning LLaMA models with a focus on instruction-following tasks. LoRA offers a more general implementation that can be applied to various neural network architectures. Alpaca-LoRA provides a higher-level interface with pre-configured settings, while LoRA allows for more fine-grained control over the low-rank adaptation process.

text-generation-webui: A Gradio web UI for Large Language Models.

Pros of text-generation-webui

  • User-friendly web interface for text generation
  • Supports multiple models and fine-tuning techniques
  • Extensive customization options for generation parameters

Cons of text-generation-webui

  • Less focused on specific fine-tuning methods like LoRA
  • May have higher resource requirements due to its comprehensive features
  • Potentially steeper learning curve for advanced usage

Code Comparison

LoRA (Python):

import loralib as lora

layer = lora.Linear(768, 768, r=4)        # LoRA-adapted linear layer, rank 4
lora.mark_only_lora_as_trainable(model)   # train only the LoRA parameters
output = model(input_ids)

text-generation-webui (Python):

model = load_model("gpt2")
generate_params = {
    "max_new_tokens": 50,
    "temperature": 0.7,
}
output = model.generate(prompt, **generate_params)

Summary

LoRA focuses on efficient fine-tuning of large language models, while text-generation-webui provides a comprehensive web interface for text generation with various models. LoRA is more specialized and potentially more efficient for specific fine-tuning tasks, whereas text-generation-webui offers a broader range of features and model support at the cost of potentially higher resource usage and complexity.

Llama (56,019 stars): Inference code for Llama models.

Pros of Llama

  • Offers a complete language model architecture, not just a fine-tuning method
  • Provides pre-trained models with varying sizes and capabilities
  • Designed for efficient inference on consumer hardware

Cons of Llama

  • Requires more computational resources for training and fine-tuning
  • Limited to specific model architectures and sizes
  • May have licensing restrictions for commercial use

Code Comparison

LoRA (adapter implementation):

import torch
import torch.nn as nn

class LoRALayer(nn.Module):
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        # A gets a random init, B starts at zero, so the initial update is zero
        self.lora_A = nn.Parameter(torch.randn(rank, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        # Returns only the low-rank update, which is added to the frozen layer's output
        return (x @ self.lora_A.T) @ self.lora_B.T

Llama (model initialization):

# Simplified excerpt from the Transformers implementation of Llama
import torch.nn as nn
from transformers.models.llama.modeling_llama import LlamaModel, LlamaPreTrainedModel

class LlamaForCausalLM(LlamaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.model = LlamaModel(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)

Note: LoRA is a fine-tuning method that can be applied to various models, while Llama is a specific model architecture. The code comparison shows how LoRA implements its adapter and how Llama initializes its model structure.

fairseq (30,331 stars): Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • Comprehensive toolkit for sequence modeling tasks
  • Supports a wide range of architectures and tasks
  • Extensive documentation and examples

Cons of fairseq

  • Steeper learning curve due to its complexity
  • Potentially heavier resource requirements
  • Less focused on specific fine-tuning techniques

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel
model = TransformerModel.from_pretrained('/path/to/model')
translated = model.translate('Hello world!')

LoRA:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)

Key Differences

  • fairseq is a broader toolkit for various NLP tasks, while LoRA focuses on efficient fine-tuning
  • LoRA offers a more lightweight approach to model adaptation
  • fairseq provides more built-in models and tasks, whereas LoRA is designed to work with existing models
  • LoRA is generally easier to integrate into existing workflows for fine-tuning large language models

gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries.

Pros of gpt-neox

  • Designed specifically for training large language models
  • Includes optimizations for distributed training across multiple GPUs
  • Provides a complete pipeline for data preprocessing, training, and evaluation

Cons of gpt-neox

  • More complex setup and configuration compared to LoRA
  • Requires significant computational resources for training
  • Less flexible for fine-tuning pre-trained models on specific tasks

Code Comparison

GPT-NeoX (loading the released 20B weights through Transformers):

from transformers import AutoTokenizer, GPTNeoXForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)

LoRA:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)

The code snippets illustrate the different focus of the two repositories: the GPT-NeoX snippet loads and runs a pre-trained large language model, while the LoRA snippet applies efficient fine-tuning on top of an existing model.


README

LoRA: Low-Rank Adaptation of Large Language Models

This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in Hugging Face. We only support PyTorch for now. See our paper for a detailed description of LoRA.

LoRA: Low-Rank Adaptation of Large Language Models
Edward J. Hu*, Yelong Shen*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
Paper: https://arxiv.org/abs/2106.09685
Video explainer: https://www.youtube.com/watch?v=DhRoTONcyZE

Update 2/2023: LoRA is now supported by the State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) library by Hugging Face.

LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights. This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment, all without introducing inference latency. LoRA also outperforms several other adaptation methods, including adapter, prefix-tuning, and fine-tuning.
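
In the paper's notation, a pre-trained weight matrix $W_0 \in \mathbb{R}^{d \times k}$ is kept frozen and its update is constrained to a low-rank product, so the adapted forward pass becomes

$$h = W_0 x + \Delta W x = W_0 x + \frac{\alpha}{r} B A x, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k),$$

where $A$ is initialized randomly, $B$ is initialized to zero (so training starts exactly at the pre-trained model), and $\alpha$ is a constant scaling factor.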

We obtain results comparable or superior to full fine-tuning on the GLUE benchmark using RoBERTa (Liu et al., 2019) base and large and DeBERTa (He et al., 2020) XXL 1.5B, while only training and storing a fraction of the parameters. Click the numbers below to download the RoBERTa and DeBERTa LoRA checkpoints.

| | RoBERTa base (Fine-tune) | RoBERTa base (LoRA) | DeBERTa XXL (Fine-tune) | DeBERTa XXL (LoRA) |
|---|---|---|---|---|
| # of Trainable Params. | 125M | 0.8M | 1.5B | 4.7M |
| MNLI (m-Acc/mm-Acc) | 87.6 | 87.5±.3/86.9±.3 | 91.7/91.9 | 91.9±.1/91.9±.2 |
| SST2 (Acc) | 94.8 | 95.1±.2 | 97.2 | 96.9±.2 |
| MRPC (Acc) | 90.2 | 89.7±.7 | 92.0 | 92.6±.6 |
| CoLA (Matthew's Corr) | 63.6 | 63.4±1.2 | 72.0 | 72.4±1.1 |
| QNLI (Acc) | 92.8 | 93.3±.3 | 96.0 | 96.0±.1 |
| QQP (Acc) | 91.9 | 90.8±.1 | 92.7 | 92.9±.1 |
| RTE (Acc) | 78.7 | 86.6±.7 | 93.9 | 94.9±.4 |
| STSB (Pearson/Spearman Corr) | 91.2 | 91.5±.2/91.3±.2 | 92.9/92.6 | 93.0±.2/92.9±.3 |
| Average | 86.40 | 87.24 | 91.06 | 91.32 |

Note: You still need the original pre-trained checkpoint from Hugging Face to use the LoRA checkpoints.

Fine-tuning numbers are taken from Liu et al. (2019) and He et al. (2020). We include confidence intervals on results from our experiments. Please follow the instructions in examples/NLU/ to reproduce our results.

On GPT-2, LoRA compares favorably to both full finetuning and other efficient tuning methods, such as adapter (Houlsby et al., 2019) and prefix tuning (Li and Liang, 2021). We evaluated on E2E NLG Challenge, DART, and WebNLG:

| Method | # of Trainable Params | E2E (BLEU) | DART (BLEU) | WebNLG (BLEU-U/S/A) |
|---|---|---|---|---|
| GPT-2 M (Fine-Tune) | 354.92M | 68.2 | 46.0 | 30.4/63.2/47.6 |
| GPT-2 M (Adapter) | 0.37M | 66.3 | 42.4 | 45.1/54.5/50.2 |
| GPT-2 M (Prefix) | 0.35M | 69.7 | 45.7 | 44.1/63.1/54.4 |
| GPT-2 M (LoRA) | 0.35M | 70.4±.1 | 47.1±.2 | 46.7±.4/62.1±.2/55.3±.2 |
| GPT-2 L (Fine-Tune) | 774.03M | 68.5 | 46.5 | 41.7/64.6/54.2 |
| GPT-2 L (Adapter) | 0.88M | 69.1±.1 | 45.7±.1 | 49.8±.0/61.1±.0/56.0±.0 |
| GPT-2 L (Prefix) | 0.77M | 70.3 | 46.5 | 47.0/64.2/56.4 |
| GPT-2 L (LoRA) | 0.77M | 70.4±.1 | 47.5±.1 | 48.4±.3/64.0±.3/57.0±.1 |

Non-LoRA baselines, except for adapter on GPT-2 large, are taken from Li and Liang (2021). We include confidence intervals on results from our experiments.

Download the GPT-2 LoRA checkpoints:

Please follow the instructions in examples/NLG/ to reproduce our result.

Repository Overview

(The initial release of this repo has been archived in the branch "snapshot-9-15-2021")

There are several directories in this repo:

  • loralib/ contains the source code for the package loralib, which needs to be installed to run the examples we provide;
  • examples/NLG/ contains an example implementation of LoRA in GPT-2 using our package, which can be used to reproduce the result in our paper;
  • examples/NLU/ contains an example implementation of LoRA in RoBERTa and DeBERTa using our package, which produces competitive results on the GLUE benchmark;
  • See how we use loralib in GPT-2, RoBERTa, and DeBERTa v2

Quickstart

  1. Installing loralib is simply
pip install loralib
# Alternatively
# pip install git+https://github.com/microsoft/LoRA
  2. You can choose to adapt some layers by replacing them with counterparts implemented in loralib. We only support nn.Linear, nn.Embedding, and nn.Conv2d for now. We also support a MergedLinear for cases where a single nn.Linear represents more than one layer, such as in some implementations of the attention qkv projection (see Additional Notes for more).
# ===== Before =====
# layer = nn.Linear(in_features, out_features)

# ===== After ======
import loralib as lora
# Add a pair of low-rank adaptation matrices with rank r=16
layer = lora.Linear(in_features, out_features, r=16)
  3. Before the training loop begins, mark only LoRA parameters as trainable.
import loralib as lora
model = BigModel()
# This sets requires_grad to False for all parameters without the string "lora_" in their names
lora.mark_only_lora_as_trainable(model)
# Training loop
for batch in dataloader:
   ...
  4. When saving a checkpoint, generate a state_dict that only contains LoRA parameters.
# ===== Before =====
# torch.save(model.state_dict(), checkpoint_path)
# ===== After =====
torch.save(lora.lora_state_dict(model), checkpoint_path)
  5. When loading a checkpoint using load_state_dict, be sure to set strict=False.
# Load the pretrained checkpoint first
model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
# Then load the LoRA checkpoint
model.load_state_dict(torch.load('ckpt_lora.pt'), strict=False)

Now training can proceed as usual.
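
Putting these steps together, a minimal training skeleton might look like the following sketch (BigModel, dataloader, compute_loss, and the learning rate are placeholders, not part of loralib):

import torch
import loralib as lora

model = BigModel()                                           # Linear layers replaced with lora.Linear
model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
lora.mark_only_lora_as_trainable(model)

# Only the LoRA parameters require gradients, so only they go to the optimizer
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

model.train()
for batch in dataloader:
    loss = compute_loss(model, batch)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

torch.save(lora.lora_state_dict(model), 'ckpt_lora.pt')      # checkpoint contains only LoRA weights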

Additional Notes

  1. While we focus on a simple yet effective setup in our examples, namely adapting only the q and v projections in a Transformer, LoRA can be applied to any subset of pre-trained weights. We encourage you to explore different configurations, such as adapting the embedding layer by replacing nn.Embedding with lora.Embedding and/or adapting the MLP layers (see the sketch after this list). It's very likely that the optimal configuration varies for different model architectures and tasks.

  2. Some Transformer implementations use a single nn.Linear for the projection matrices of query, key, and value. If one wishes to constrain the rank of the updates to the individual matrices, one has to either break it up into three separate matrices or use lora.MergedLinear. Make sure to modify the checkpoint accordingly if you choose to break up the layer.

# ===== Before =====
# qkv_proj = nn.Linear(d_model, 3*d_model)
# ===== After =====
# Break it up (remember to modify the pretrained checkpoint accordingly)
q_proj = lora.Linear(d_model, d_model, r=8)
k_proj = nn.Linear(d_model, d_model)
v_proj = lora.Linear(d_model, d_model, r=8)
# Alternatively, use lora.MergedLinear (recommended)
qkv_proj = lora.MergedLinear(d_model, 3*d_model, r=8, enable_lora=[True, False, True])
  3. Training bias vectors in tandem with LoRA might be a cost-efficient way to squeeze out extra task performance (if you tune the learning rate carefully). While we did not study its effect thoroughly in our paper, we make it easy to try in loralib. You can mark some biases as trainable by passing "all" or "lora_only" to bias= when calling mark_only_lora_as_trainable. Remember to pass the corresponding bias= argument to lora_state_dict when saving a checkpoint.
# ===== Before =====
# lora.mark_only_lora_as_trainable(model) # Not training any bias vectors
# ===== After =====
# Training all bias vectors associated with modules we apply LoRA to 
lora.mark_only_lora_as_trainable(model, bias='lora_only')
# Alternatively, we can train *all* bias vectors in the model, including LayerNorm biases
lora.mark_only_lora_as_trainable(model, bias='all')
# When saving a checkpoint, use the same bias= ('all' or 'lora_only')
torch.save(lora.lora_state_dict(model, bias='all'), checkpoint_path)
  4. Calling model.eval() will trigger the merging of LoRA parameters with the corresponding pretrained ones, which eliminates additional latency for subsequent forward passes. Calling model.train() again will undo the merge. This can be disabled by passing merge_weights=False to LoRA layers.
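
As mentioned in note 1, the same replacement pattern extends beyond the attention projections, for example to the embedding layer and the MLP block. A sketch under assumed dimensions (the sizes and variable names below are placeholders, not taken from our examples):

import torch.nn as nn
import loralib as lora

vocab_size, d_model, d_ff = 50257, 768, 3072   # example sizes only

# ===== Before =====
# tok_emb = nn.Embedding(vocab_size, d_model)
# fc_in   = nn.Linear(d_model, d_ff)
# fc_out  = nn.Linear(d_ff, d_model)

# ===== After =====
tok_emb = lora.Embedding(vocab_size, d_model, r=8)
fc_in   = lora.Linear(d_model, d_ff, r=8)
fc_out  = lora.Linear(d_ff, d_model, r=8)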

Contact

Please contact us or post an issue if you have any questions.

For questions related to the package loralib:

The GPT-2 example:

The RoBERTa/DeBERTa example:

Acknowledgements

We thank in alphabetical order Jianfeng Gao, Jade Huang, Jiayuan Huang, Lisa Xiang Li, Xiaodong Liu, Yabin Liu, Benjamin Van Durme, Luis Vargas, Haoran Wei, Peter Welinder, and Greg Yang for providing valuable feedback.

Citation

@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.