allenai / OLMo

Modeling, training, eval, and inference code for OLMo

Top Related Projects

  • GPT-NeoX: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
  • transformers: 🤗 State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX
  • BERT: TensorFlow code and pre-trained models for BERT
  • DeepSpeed: A deep learning optimization library that makes distributed training and inference easy, efficient, and effective
  • fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python
  • Petals: 🌸 Run LLMs at home, BitTorrent-style; fine-tuning and inference up to 10x faster than offloading

Quick Overview

OLMo (Open Language Model) is an open-source language model and toolkit developed by AI2 (Allen Institute for AI). It aims to provide a fully open, reproducible, and customizable foundation for large language models, including pre-training, fine-tuning, and inference capabilities.

Pros

  • Fully open-source, allowing for transparency and reproducibility in language model research
  • Supports both pre-training and fine-tuning, enabling customization for specific tasks
  • Includes a comprehensive toolkit for model development and experimentation
  • Designed with scalability in mind, supporting distributed training across multiple GPUs

Cons

  • Relatively new project, which may lead to potential instability or lack of extensive community support
  • Requires significant computational resources for pre-training and fine-tuning large models
  • Documentation may be less comprehensive compared to more established language model frameworks
  • Limited pre-trained model options compared to some commercial alternatives

Code Examples

  1. Loading a pretrained OLMo model via the Hugging Face integration:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")

  2. Generating text with OLMo:

prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

  3. Fine-tuning OLMo on a custom dataset with the Hugging Face Trainer (your_custom_dataset is a placeholder for a tokenized dataset you supply):

from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./olmo-finetuned", num_train_epochs=3),
    train_dataset=your_custom_dataset,  # placeholder: your tokenized dataset
)
trainer.train()

Getting Started

To get started with OLMo, follow these steps:

  1. Install OLMo from PyPI (install PyTorch for your platform first):

    pip install ai2-olmo
    
  2. Load a pretrained model and tokenizer through the Hugging Face integration:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    
    model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
    tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
    
  3. Generate text:

    prompt = "Hello, world!"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(generated_text)
    

For more advanced usage, including pre-training and fine-tuning, refer to the official documentation and examples in the OLMo repository.

Competitor Comparisons

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Pros of GPT-NeoX

  • More extensive documentation and examples for training and fine-tuning
  • Broader community support and contributions
  • Designed for distributed training across multiple GPUs

Cons of GPT-NeoX

  • Higher computational requirements for training
  • Less focus on interpretability and analysis tools
  • More complex setup process for beginners

Code Comparison

OLMo (via the Hugging Face integration documented in the OLMo README):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
inputs = tokenizer("Hello, world!", return_tensors="pt", return_token_type_ids=False)
output = model.generate(**inputs, max_new_tokens=50)

GPT-NeoX (the released GPT-NeoX-20B checkpoint also loads through transformers; training in the GPT-NeoX repository itself is config-driven rather than exposed as a simple Python API):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
inputs = tokenizer("Hello, world!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)

Released checkpoints from both projects load through the Hugging Face transformers API. However, GPT-NeoX offers more advanced features for distributed training and customization, while OLMo focuses on simplicity and ease of use for researchers and developers.

OLMo emphasizes interpretability and analysis tools, making it more suitable for research-oriented tasks. GPT-NeoX, on the other hand, is designed for large-scale training and deployment, making it a better choice for production environments and projects requiring significant computational resources.

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive model support: Includes a wide range of pre-trained models and architectures
  • Active community: Large user base and frequent updates
  • Comprehensive documentation: Detailed guides and examples for various tasks

Cons of transformers

  • Complexity: Can be overwhelming for beginners due to its extensive features
  • Resource intensive: Some models require significant computational resources

Code comparison

OLMo

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")

transformers

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

Key differences

  • The OLMo repository focuses specifically on the OLMo models, while transformers supports a wide range of models
  • Both snippets use the generalized transformers Auto classes; OLMo 2 checkpoints are loaded through the same interface
  • The OLMo repository's own modeling and training code is tailored to the OLMo architecture, while transformers provides a unified inference interface across many architectures

TensorFlow code and pre-trained models for BERT

Pros of BERT

  • Well-established and widely adopted in the NLP community
  • Extensive documentation and pre-trained models available
  • Proven performance on various NLP tasks

Cons of BERT

  • Older architecture compared to more recent language models
  • Limited context window size (typically 512 tokens)
  • Requires fine-tuning for specific tasks

Code Comparison

BERT example:

from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

OLMo example:

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")

Key Differences

  • OLMo is a more recent model with potential for improved performance
  • BERT uses bidirectional training, while OLMo is a unidirectional (left-to-right) model
  • OLMo is designed for open-ended text generation, while BERT excels in understanding context

Use Cases

  • BERT: Sentiment analysis, named entity recognition, question answering
  • OLMo: Text generation, language modeling, conversational AI

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • More comprehensive optimization toolkit for deep learning
  • Supports a wider range of models and frameworks
  • Offers advanced features like ZeRO-Offload and 3D parallelism

Cons of DeepSpeed

  • Steeper learning curve due to more complex features
  • May require more configuration for optimal performance
  • Less focused on specific language model architectures

Code Comparison

OLMo:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
inputs = tokenizer("The capital of France is", return_tensors="pt", return_token_type_ids=False)
output = model.generate(**inputs, max_new_tokens=20)

DeepSpeed:

import deepspeed
import torch

# `model` is any torch.nn.Module and `ds_config` a DeepSpeed config dict or
# JSON path, both assumed to be defined elsewhere.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config
)

Key Differences

  • OLMo is specifically designed for large language models, while DeepSpeed is a more general-purpose optimization library
  • DeepSpeed offers more advanced parallelism and optimization techniques
  • OLMo provides a simpler API for working with pre-trained language models
  • DeepSpeed requires more setup but offers greater flexibility and performance potential

Use Cases

  • OLMo: Ideal for researchers and developers working specifically with large language models
  • DeepSpeed: Better suited for projects requiring advanced optimization across various deep learning tasks and model architectures

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More established and mature project with a larger community
  • Supports a wider range of NLP tasks and models
  • Extensive documentation and examples

Cons of fairseq

  • Larger codebase, potentially more complex to navigate
  • May have more dependencies and setup requirements
  • Less focused on specific language model architectures

Code Comparison

OLMo example:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
inputs = tokenizer("The quick brown fox", return_tensors="pt", return_token_type_ids=False)
output = model.generate(**inputs, max_new_tokens=20)

fairseq example:

from fairseq.models.transformer_lm import TransformerLanguageModel

model = TransformerLanguageModel.from_pretrained("transformer_lm.gpt2.large")
output = model.generate("The quick brown fox", beam=5, sampling=True)

Both repositories provide high-level APIs for loading and using pre-trained models. OLMo appears to have a more streamlined interface specifically for language models, while fairseq offers more flexibility and options for various NLP tasks.

fairseq's codebase is more extensive, covering a broader range of models and tasks. OLMo, being more focused on large language models, may have a simpler structure for those specifically interested in LLMs.

Overall, the choice between these repositories depends on the specific requirements of the project and the desired balance between flexibility and specialization in language modeling.


🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Pros of Petals

  • Focuses on distributed inference, allowing users to run large language models collaboratively
  • Supports a wider range of models, including BLOOM and LLaMA
  • Offers a unique approach to democratizing access to large language models

Cons of Petals

  • Less emphasis on model training and fine-tuning compared to OLMo
  • May have higher latency due to its distributed nature
  • Potentially more complex setup for individual users

Code Comparison

OLMo:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
inputs = tokenizer("Hello, world!", return_tensors="pt", return_token_type_ids=False)
output = model.generate(**inputs, max_new_tokens=50)

Petals:

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")
input_ids = tokenizer("Hello, world!", return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_new_tokens=50)

Both repositories provide easy-to-use interfaces for working with large language models. OLMo focuses on a specific model and offers more control over training and fine-tuning, while Petals emphasizes distributed inference across a network of contributors. The code examples show similar usage patterns, but Petals' approach is geared towards distributed computing.


README


OLMo: Open Language Model


OLMo is a repository for training and using AI2's state-of-the-art open language models. It is designed by scientists, for scientists.

Installation

First, install PyTorch following the instructions specific to your operating system.

For training and fine-tuning, we recommend installing from source:

git clone https://github.com/allenai/OLMo.git
cd OLMo
pip install -e .[all]

You can also install from PyPI with:

pip install ai2-olmo

Pretraining

OLMo pretraining follows a two-stage training procedure. In the first stage, we train on large amounts of mostly web-based data: OLMo-mix-1124. In the second stage, we train on a smaller amount of high-quality, targeted data: Dolmino-mix-1124.

You can find all the checkpoints, saved at minimum every 1000 training steps, in both OLMo core and Hugging Face formats:

| Variant | OLMo Format (Stage 1) | OLMo Format (Stage 2) | Hugging Face Format |
|---|---|---|---|
| OLMo-2 7B | OLMo-2 7B | OLMo-2 7B | Hugging Face for the 7B variant |
| OLMo-2 13B | OLMo-2 13B | OLMo-2 13B | Hugging Face for the 13B variant |
| OLMo-2 32B | OLMo-2 32B | OLMo-2 32B | Hugging Face for the 32B variant |

Note: The 32B variant was trained on our new trainer. To train or fine-tune OLMo-2 32B, visit OLMo-core.

Steps to reproduce

To reproduce any of the training processes described below, run this:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config}

For the training config, use any of the configs listed below.

If you want to override any of the settings in the training config without having to write a new config every time, you can do this:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
  --setting1=value \
  --setting2=value \
  --setting3.subsetting1=value

The training configs below refer to training data that gets streamed in live over HTTP. To reproduce at large scale, we recommend downloading the files locally and changing the paths to point to your local file system.
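
For example, in your local copy of a stage config, the data file list might be edited to point at local files. The data.paths key and file names below are illustrative assumptions; check the layout of the config you are actually reproducing:

data:
  paths:
    - /local/data/olmo-mix-1124/part-000-00000.npy
    - /local/data/olmo-mix-1124/part-001-00000.npy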

To run on Apple silicon devices:

python scripts/train.py {path_to_train_config}

Example:

python scripts/train.py configs/tiny/OLMo-20M.yaml --save_overwrite

Note: You need to upgrade PyTorch to 2.5.x to run on Apple silicon.

Stage 1

Stage 1 is the largest stage, where we train on 4T tokens (for the 7B) or 5T tokens (for the 13B) of largely web-based data.

| | OLMo2 7B | OLMo2 13B |
|---|---|---|
| Number of tokens | 4 Trillion | 5 Trillion |
| Checkpoint | stage1-step928646-tokens3896B | stage1-step596057-tokens5001B |
| Training config | OLMo2-7B-stage1.yaml | OLMo2-13B-stage1.yaml |
| WandB | wandb.ai/OLMo2-7B | wandb.ai/OLMo2-13B |

Stage 2 for the 7B

For the 7B model, we train three times with different data orders on 50B high-quality tokens, and then average ("soup") the models (a minimal averaging sketch follows below the table).

| Checkpoint | Training config | WandB |
|---|---|---|
| random seed 42 (stage2-ingredient1-step11931-tokens50B) | OLMo2-7B-stage2-seed42.yaml | wandb.ai/OLMo2-7B |
| random seed 42069 (stage2-ingredient2-step11931-tokens50B) | OLMo2-7B-stage2-seed42069.yaml | wandb.ai/OLMo2-7B |
| random seed 666 (stage2-ingredient3-step11931-tokens50B) | OLMo2-7B-stage2-seed666.yaml | wandb.ai/OLMo2-7B |
| final souped model (main) | no config, we just averaged the weights in Python | |

The training configs linked here are set up to download the latest checkpoint after stage 1, and start training from there.
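
The souping step has no training config because it is just a parameter-wise average of the ingredient checkpoints. A minimal sketch of that averaging, assuming the ingredients are available as plain PyTorch state dicts (the paths below are placeholders), could look like:

import torch

# Placeholder paths to the stage-2 ingredient checkpoints, saved as PyTorch state dicts.
ingredient_paths = ["ingredient1.pt", "ingredient2.pt", "ingredient3.pt"]
state_dicts = [torch.load(p, map_location="cpu") for p in ingredient_paths]

# "Souping": average every parameter tensor across the ingredient models.
souped = {
    name: torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
    for name in state_dicts[0]
}
torch.save(souped, "souped.pt")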

Stage 2 for the 13B

For the 13B model, we train three times with different data orders on 100B high-quality tokens, and one more time on 300B high-quality tokens. Then we average ("soup") the models as above.

| Checkpoint | Training config | WandB |
|---|---|---|
| random seed 1110, 100B (stage2-ingredient1-step11931-tokens100B) | OLMo2-13B-stage2-seed1110-100B.yaml | wandb.ai/OLMo2-13B |
| random seed 2662, 100B (stage2-ingredient2-step11931-tokens100B) | OLMo2-13B-stage2-seed2662-100B.yaml | wandb.ai/OLMo2-13B |
| random seed 6209, 100B (stage2-ingredient3-step11931-tokens100B) | OLMo2-13B-stage2-seed6209-100B.yaml | wandb.ai/OLMo2-13B |
| random seed 2662, 300B (stage2-ingredient4-step11931-tokens300B) | OLMo2-13B-stage2-seed2662-300B.yaml | wandb.ai/OLMo2-13B |
| final souped model (main) | no config, we just averaged the weights in Python | |

The training configs linked here are set up to download the latest checkpoint after stage 1, and start training from there.

Note: You can find all the information about the 32B in the OLMo-core repository.

Instruction tuned variants

For instruction-tuned variants of these models, see the OLMo 2 Instruct models on Hugging Face (e.g. allenai/OLMo-2-1124-13B-Instruct, used in the Modal example below).

Inference

You can use our Hugging Face integration to run inference on the OLMo Transformers checkpoints:

from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move inputs and model to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])

Alternatively, with the Hugging Face pipeline abstraction:

from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-2-1124-7B")
print(olmo_pipe("Language modeling is"))

Quantization

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B", torch_dtype=torch.float16, load_in_8bit=True)  # requires bitsandbytes

The quantized model is sensitive to input types and CUDA handling. To avoid potential issues, we recommend explicitly converting input IDs to CUDA using: inputs.input_ids.to('cuda')
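
Putting this together, a self-contained sketch of 8-bit generation that follows this recommendation (model ID as in the examples above; assumes a CUDA GPU and the bitsandbytes package):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-1124-7B", torch_dtype=torch.float16, load_in_8bit=True
)  # requires bitsandbytes

inputs = tokenizer("Language modeling is ", return_tensors="pt", return_token_type_ids=False)
# Move the input IDs to CUDA explicitly, as recommended above.
input_ids = inputs.input_ids.to("cuda")
response = olmo.generate(input_ids, max_new_tokens=50)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])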

Evaluation

Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.

Modal.com Hosting

An example script for hosting an OLMo 2 model on Modal.com using the OpenAI API is provided in ./scripts/olmo2_modal_openai.py. To run it:

  1. Follow the instructions under Getting Started in the Modal.com Guide to install the Modal library and command line tools.
  2. Follow the instructions under Secrets in the Modal.com Guide to create a Modal secret named "example-secret-token" that defines a value for the variable MODAL_TOKEN for your server.
  3. Then run
modal deploy ./scripts/olmo2_modal_openai.py

You can check your endpoint with a curl command similar to the following:

curl -X POST \
  -H "Authorization: Bearer [the secret token from above]" \
  -H "Content-Type: application/json" \
  -d @body.json \
  https://[the web endpoint modal creates above]/v1/chat/completions

where body.json is of the form:

{
    "model": "OLMo-2-1124-13B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": "Who was Alan Turing?"
        }
      ],
    "max_tokens": 100,
    "temperature": 0.9,
    "stream": true
}
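
Because the endpoint implements the OpenAI chat completions protocol, you can also call it from Python with the openai client; the base URL and token below are the same placeholders as in the curl example above:

from openai import OpenAI

# Placeholders: substitute your Modal web endpoint and the secret token from above.
client = OpenAI(
    base_url="https://<the-web-endpoint-modal-creates-above>/v1",
    api_key="<the-secret-token-from-above>",
)
response = client.chat.completions.create(
    model="OLMo-2-1124-13B-Instruct",
    messages=[{"role": "user", "content": "Who was Alan Turing?"}],
    max_tokens=100,
    temperature=0.9,
)
print(response.choices[0].message.content)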

Citing

@misc{olmo20242olmo2furious,
      title={2 OLMo 2 Furious}, 
      author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2501.00656},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.00656}, 
}