EleutherAI / pythia

The hub for EleutherAI's work on interpretability and learning dynamics

Top Related Projects

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Ongoing research training transformer models at scale

TensorFlow code and pre-trained models for BERT

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Inference code for Llama models

Quick Overview

Pythia is a suite of open-source language models developed by EleutherAI to support research on interpretability and learning dynamics. All models in the suite were trained on the same data in the same order, with intermediate checkpoints released throughout training, while still achieving performance competitive with similarly sized models on common natural language processing tasks.

Pros

  • Open-source and freely available for research and commercial use
  • Offers a range of model sizes to suit different computational requirements
  • Trained on a diverse and curated dataset, enhancing performance across various domains
  • Implements efficient architectures, allowing for better resource utilization

Cons

  • May not match the performance of the largest proprietary language models in some tasks
  • Requires significant computational resources for fine-tuning and deployment of larger variants
  • Limited documentation and community support compared to more established language models
  • Potential biases inherited from training data, as with all large language models

Code Examples

# Loading a Pythia model using the Hugging Face Transformers library
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "EleutherAI/pythia-70m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text
input_text = "The quick brown fox"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=50, num_return_sequences=1)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
# Fine-tuning Pythia on a custom dataset
from transformers import Trainer, TrainingArguments

# Reuses `model` from the previous example; assumes a tokenized dataset
# has been prepared and loaded as `train_dataset`
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    save_steps=10_000,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

trainer.train()
# Using Pythia for text classification
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "EleutherAI/pythia-160m"
# Note: the classification head is newly initialized; fine-tune it on
# labeled data before trusting its predictions
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer("This is a positive review!", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax().item()
print(f"Predicted class: {predicted_class}")

Getting Started

To get started with Pythia, follow these steps:

  1. Install the required libraries:

    pip install transformers torch
    
  2. Load a Pythia model:

    from transformers import AutoTokenizer, AutoModelForCausalLM
    
    model_name = "EleutherAI/pythia-70m"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    
  3. Generate text:

    input_text = "Once upon a time"
    input_ids = tokenizer.encode(input_text, return_tensors="pt")
    output = model.generate(input_ids, max_length=100, num_return_sequences=1)
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    print(generated_text)
    

Competitor Comparisons

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive library with support for a wide range of models and tasks
  • Well-documented with comprehensive examples and tutorials
  • Large community support and frequent updates

Cons of transformers

  • Can be complex for beginners due to its extensive features
  • Larger package size and potential overhead for simple tasks

Code comparison

transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt")
outputs = model.generate(**inputs)

pythia:

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
inputs = tokenizer("Hello, I'm a language model,", return_tensors="pt")
outputs = model.generate(**inputs)

The code comparison shows that there is no separate pythia package: Pythia models are GPT-NeoX checkpoints published on the Hugging Face Hub and loaded through the same transformers library. The practical difference is that the Auto* classes resolve the architecture automatically, while a Pythia model can also be loaded explicitly through the GPTNeoX-specific classes.

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Focuses on optimizing large-scale model training and inference
  • Offers a comprehensive suite of optimization techniques
  • Provides extensive documentation and tutorials

Cons of DeepSpeed

  • Steeper learning curve due to its complexity
  • May require more setup and configuration for simpler projects
  • Less focused on specific language model architectures

Code Comparison

DeepSpeed:

import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params
)

Pythia:

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)

DeepSpeed is a general-purpose optimization library for deep learning, while Pythia is a suite of pre-trained language models (trained with the GPT-NeoX library, which itself builds on DeepSpeed). DeepSpeed offers advanced features for distributed training and inference optimization, making it suitable for large-scale projects. Pythia models, on the other hand, load through the Hugging Face transformers library, making them accessible for quick experiments and interpretability research.

DeepSpeed's code initializes a training engine around an existing model, while the Pythia snippet loads a released checkpoint for inference. This reflects their different roles: DeepSpeed optimizes training, and Pythia provides the trained models.

Ongoing research training transformer models at scale

Pros of Megatron-LM

  • Highly optimized for NVIDIA GPUs, offering superior performance on compatible hardware
  • Supports advanced features like model parallelism and pipeline parallelism
  • Extensive documentation and examples for various model architectures

Cons of Megatron-LM

  • Limited flexibility compared to Pythia, as it's primarily designed for NVIDIA hardware
  • Steeper learning curve due to its focus on advanced parallelism techniques
  • Less community-driven development, with updates primarily from NVIDIA

Code Comparison

Megatron-LM (model initialization):

model = MegatronModule(
    init_method=init_method,
    output_layer_init_method=scaled_init_method,
    num_tokentypes=num_tokentypes,
    parallel_output=parallel_output)

Pythia (model initialization):

model = GPTNeoXForCausalLM.from_pretrained(
    model_name,
    revision=revision,
    cache_dir=cache_dir,
    device_map="auto",
    trust_remote_code=True)

The code snippets highlight the difference in approach, with Megatron-LM focusing on parallelism and initialization methods, while Pythia emphasizes ease of use with pre-trained models and automatic device mapping.

TensorFlow code and pre-trained models for BERT

Pros of BERT

  • Well-established and widely adopted in industry and research
  • Extensive documentation and community support
  • Pre-trained models available for various languages and tasks

Cons of BERT

  • Older architecture compared to more recent transformer models
  • Limited context window size (typically 512 tokens)
  • Computationally expensive for fine-tuning on large datasets

Code Comparison

BERT example:

from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

Pythia example:

from transformers import GPTNeoXTokenizerFast, GPTNeoXForCausalLM
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/pythia-70m")
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")

Both repositories use the Hugging Face Transformers library for easy model loading and tokenization. BERT focuses on bidirectional encoding, while Pythia is designed for causal language modeling tasks. Pythia offers more recent architectures and larger model sizes, potentially providing better performance on certain tasks, but may require more computational resources.

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More comprehensive and feature-rich, supporting a wider range of NLP tasks
  • Better documentation and examples for various use cases
  • Larger community and more frequent updates

Cons of fairseq

  • Steeper learning curve due to its complexity
  • Heavier and potentially slower for simpler tasks
  • More dependencies and setup requirements

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel

model = TransformerModel.from_pretrained('/path/to/model', 'checkpoint.pt')
tokens = model.encode('Hello world!')
output = model.decode(tokens)

Pythia:

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
inputs = tokenizer("Hello world!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)

The code comparison shows that fairseq uses its own model hub and explicit encode/decode steps, while Pythia checkpoints load through the Hugging Face transformers API. fairseq's approach provides more control over the sequence-to-sequence pipeline, but the transformers interface is more convenient for quick text-generation experiments.

Inference code for Llama models

Pros of Llama

  • Developed by Meta, leveraging extensive resources and expertise
  • Offers larger model sizes (up to 65B parameters) for more powerful language understanding
  • Provides pre-trained models with impressive performance across various NLP tasks

Cons of Llama

  • More restrictive licensing and access compared to open-source alternatives
  • Limited community contributions due to controlled development process
  • Requires significant computational resources for fine-tuning and deployment

Code Comparison

Pythia example:

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")

Llama example:

from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b")
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b")

Both repositories provide pre-trained language models, but Pythia focuses on open-source development and community collaboration, while Llama offers more powerful models with restricted access. Pythia is more accessible for researchers and developers, whereas Llama is better suited for organizations with the resources to handle larger models and navigate licensing requirements.

README

Pythia: Interpreting Transformers Across Time and Scale

This repository is for EleutherAI's project Pythia which combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. For detailed info on the models, their training, and their properties, please see our paper Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling.

The Pythia suite was developed with the explicit purpose of enabling research in interpretability, learning dynamics, and ethics and transparency for which existing model suites were inadequate. The key features of the Pythia suite are:

  1. All models, data, and code used in the paper are publicly released, enabling full reproducibility of results. All results in our paper have been independently verified by at least one other lab.
  2. All models feature 154 checkpoints saved throughout training, enabling the study of learning dynamics of LLMs.
  3. All models were trained on the same data in the same order, enabling researchers to explore causal interventions on the training process.

At time of release, Pythia was the only model suite in the world to meet these desiderata. In fact, the 154 checkpoints we released for our 12B parameter models represented more partially trained checkpoints for each model than the rest of the world had ever released for all 12B+ models combined. Our work has inspired several others to create similar projects, including LLM360's Amber and K2-65B, AI2's OLMo, and Zyphra's BlackMamba.

Aside from the Pythia suite itself, this repository also acts as a hub containing information, code, and reproducibility instructions for the following papers:

  • Emergent and Predictable Memorization in Large Language Models [code] [paper]
  • PolyPythias: Stability and Outliers across Fifty Language Model Pre-Training Runs [code] [paper]

Changelog

[March 10, 2025] Added info for the PolyPythias paper.

[July 9, 2024] Substantially revamped the readme, including better historical contextualization and promoting lots of cool research people have done with Pythia. Also added links to subsequently trained models.

[November 2, 2023] We have added 14M and 31M models at the request of some researchers. We plan on training deduped versions of these models in the future.

[April 3, 2023] We have released a new version of all Pythia models, fixing various inconsistencies in the original suite. Please see Appendix B in the Pythia paper for details on the changes. The old models ("v0") remain available here and may be useful for ablation studies.

[January 20, 2023] We chose to rename the Pythia model suite to include both embedding layer and unembedding layer parameters in our total parameter counts, in line with many other model suites and because we believe this convention better reflects the on-device memory usage of these models. We also discovered that, due to a typo, one of our models was smaller than we thought, and replaced it with a model of the intended size. See here for more details.

Models

We train and release a suite of 8 model sizes on the Pile (paper, datasheet), as well as on the Pile with deduplication applied. All 8 model sizes are trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 ≈ 300B tokens during training. This corresponds to just under 1 epoch on the standard Pile and roughly 1.5 epochs on the deduplicated Pile (which contains 207B tokens per epoch). All models are trained with mixed precision, using fp16 for all models except EleutherAI/pythia-1b, which was trained in bf16 because in fp16 the model experienced an irreconcilable loss spike late in training.

After our initial release, we trained 14M and 31M parameter models at the request of alignment researchers interested in scaling sparse autoencoders.

Params | n_layers | d_model | n_heads | d_head | Batch Size | Learning Rate | Hugging Face Checkpoints
14M    | 6        | 128     | 4       | 32     | 2M         | 1.0e-3        | Standard
31M    | 6        | 256     | 8       | 32     | 2M         | 1.0e-3        | Standard
70M    | 6        | 512     | 8       | 64     | 2M         | 1.0e-3        | Standard, Deduped
160M   | 12       | 768     | 12      | 64     | 2M         | 6.0e-4        | Standard, Deduped
410M   | 24       | 1024    | 16      | 64     | 2M         | 3.0e-4        | Standard, Deduped
1B     | 16       | 2048    | 8       | 256    | 2M         | 3.0e-4        | Standard, Deduped
1.4B   | 24       | 2048    | 16      | 128    | 2M         | 2.0e-4        | Standard, Deduped
2.8B   | 32       | 2560    | 32      | 80     | 2M         | 1.6e-4        | Standard, Deduped
6.9B   | 32       | 4096    | 32      | 128    | 2M         | 1.2e-4        | Standard, Deduped
12B    | 36       | 5120    | 40      | 128    | 2M         | 1.2e-4        | Standard, Deduped

To promote research on the learning dynamics of LLMs we make 154 checkpoints available for each model, representing steps 0 (initialization), 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1000, and then every 1,000 subsequent steps. We also upload the pre-tokenized data files and a script to reconstruct the dataloader as seen during training for all models. See Reproducing Training section for more details.
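The checkpoint schedule above can be enumerated programmatically; a minimal sketch, assuming the step schedule quoted in this section (revision strings follow the step{N} branch naming used on the Hugging Face Hub):

```python
# Enumerate the 154 checkpoint revisions: step 0, powers of two up to 512,
# then every 1,000 steps from 1,000 through 143,000.
steps = [0] + [2**i for i in range(10)] + list(range(1000, 143_001, 1000))
revisions = [f"step{s}" for s in steps]

assert len(revisions) == 154
print(revisions[:5], revisions[-1])  # ['step0', 'step1', 'step2', 'step4', 'step8'] step143000
```

Any of these strings can be passed as the revision argument when loading a model, as in the Quickstart below.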

Config files used to train these models with the GPT-NeoX library can be found at the models/ directory within this repository, as well as in the GPT-NeoX library itself.

We made a mistake while originally training these models, resulting in some inconsistencies across runs. We reran the entire model suite with these inconsistencies fixed; the original runs remain available under names suffixed with -v0 (e.g., EleutherAI/pythia-160m-v0). See the Pythia paper for further details on how the v0 models differ from the main suite.

The loss curves for all models are contained in our (messy!) wandb project here.

A rough and partial correspondence between models and wandb runs is given by:

Model               | Wandb
Pythia-2.8b         | Link
Pythia-2.8b-deduped | Link
Pythia-1b           | Link
Pythia-1.4b         | Link
Pythia-1.4b-deduped | Link
Pythia-160m         | Link
Pythia-160m-deduped | Link

Multiple random seeds

The random seed used to train the Pythia models is the GPT-NeoX default: 1234. To enable research into how randomness affects model behavior, we have been training additional models with different random seeds. We have currently trained and released the following models using each random seed from 1 to 9:

  • Pythia 14M
  • Pythia 31M
  • Pythia 70M
  • Pythia 160M
  • Pythia 410M

All of these models are the standard Pythia models, not the ones trained on the deduplicated Pile. Combined with the originally released models, they represent ten otherwise identical variants trained with different random seeds. They can be found on Hugging Face using the naming pattern https://huggingface.co/EleutherAI/pythia-[size]-seed[num], for example https://huggingface.co/EleutherAI/pythia-160m-seed7. Note that the models trained with seed 1234 do not have a seed specified in their URL.
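The naming pattern can be illustrated with a small helper; pythia_repo below is hypothetical, not an official API, and simply builds Hub repo ids from the convention described above:

```python
def pythia_repo(size: str, seed: int = 1234) -> str:
    """Build a Hugging Face repo id for a Pythia seed variant (illustrative only)."""
    # The original runs (seed 1234) carry no seed suffix in their repo name.
    suffix = "" if seed == 1234 else f"-seed{seed}"
    return f"EleutherAI/pythia-{size}{suffix}"

print(pythia_repo("160m", seed=7))  # EleutherAI/pythia-160m-seed7
print(pythia_repo("160m"))          # EleutherAI/pythia-160m
```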

Runs replicating the smaller Pythia models across multiple seeds are at: https://wandb.ai/eleutherai/pythia-extra-seeds

Using Pythia

Quickstart

All Pythia models are hosted on the Hugging Face Hub. They can be loaded and used via the following code (shown for the step-3000 pythia-70m-deduped checkpoint):

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])

All models were trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Revision/branch step143000 corresponds exactly to the model checkpoint on the main branch of each model.
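As an arithmetic check, the step count and batch size quoted above reproduce the total token figure given in the Models section:

```python
# 143,000 steps at a batch of 2,097,152 (2**21) tokens gives the ~300B-token
# total, which is just under 1 epoch of the Pile and ~1.45 epochs of the
# 207B-token deduplicated Pile.
tokens_per_step = 2**21              # 2,097,152
total_tokens = 143_000 * tokens_per_step
assert total_tokens == 299_892_736_000

print(f"{total_tokens / 1e9:.1f}B tokens, "
      f"{total_tokens / 207e9:.2f} epochs of the deduped Pile")
# 299.9B tokens, 1.45 epochs of the deduped Pile
```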

We additionally provide all model checkpoints in the format accepted by the GPT-NeoX library. Final-step checkpoints with optimizer states are downloadable from the Hugging Face Hub at EleutherAI/neox-ckpt-pythia-xxx-deduped-v1, but we do not serve them for all steps at scale due to the size of the optimizer states and anticipated lower demand. If you would like to perform analysis using the intermediate models within the GPT-NeoX codebase, or would like the optimizer states for other steps, please email hailey@eleuther.ai and stella@eleuther.ai.

❗ The pythia-{size}-v0 models on Huggingface of sizes 160m, 410m, and 1.4b were trained with a batch size of 4M tokens across 71,500 steps and checkpointed every 500 steps. The step names on Huggingface for these v0 models were renamed for consistency with the 2M-batch models, so the checkpoint labeled step1000 of pythia-1.4b-v0 was actually step 500; it has, however, seen the same number of tokens as the other step1000 checkpoints.

Reproducing Training

(Expanded reproduction instructions provided by @BaruchG.)

We provide the training data for replication of our training runs. The GPT-NeoX library requires the pre-tokenized training data in the form of two memory-mapped numpy arrays: a .bin and a .idx file. We provide these files via the Hugging Face hub. To download and use the deduplicated Pile training data:

git lfs clone https://huggingface.co/datasets/EleutherAI/pythia_deduped_pile_idxmaps

# Optionally, to ensure against corrupt files
python utils/checksum_shards.py

python utils/unshard_memmap.py --input_file ./pythia_deduped_pile_idxmaps/pile_0.87_deduped_text_document-00000-of-00082.bin --num_shards 83 --output_dir ./pythia_pile_idxmaps/

# The correct sha256 for the full file is 0cd548efd15974d5cca78f9baddbd59220ca675535dcfc0c350087c79f504693
# This can be checked with sha256sum ./pythia_pile_idxmaps/*

This will take over a day to run, though it should not require more than 5 GB of RAM. We recommend downloading this rather than retokenizing the Pile from scratch in order to guarantee preservation of the data order seen by the Pythia models. In addition to the training data, you will need to make a local copy of the tokenizer we used to train our models. You can find it here.

Next you will need to set up the training environment:

git clone https://github.com/EleutherAI/gpt-neox.git
cd gpt-neox
git checkout v1.0
pip install -r requirements/requirements-flashattention.txt
wget https://github.com/EleutherAI/pythia/blob/main/models/160M/pythia-160m-deduped.yml
docker build -t pythia:latest .

After the container finishes building, run the container using the following command (from the root of the GPT-NeoX repo with your pythia yaml accessible from within that folder):

docker run --runtime=nvidia --rm -it -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 --shm-size=1g --ulimit memlock=-1 --mount type=bind,src=$PWD,dst=/gpt-neox -v $(pwd):/workspace/ pythia:latest bash

You can use the -v argument to add more mounted volumes for the dataset and the YAML file if they are not accessible from within the docker container.

Change the data path and tokenizer path lines as follows:

  "train-data-paths": ["/fsx/pile/pile_20B_tokenizer_text_document"], #point this to your folder which was generated in step 1 containing the .bin and .idx file
  "valid-data-paths": ["/fsx/pile/pile_20B_tokenizer_text_document"], #point this to your folder which was generated in step 1 containing the .bin and .idx file
  "test-data-paths": ["/fsx/pile/pile_20B_tokenizer_text_document"], #point this to your folder which was generated in step 1 containing the .bin and .idx file

  "tokenizer-type": "HFTokenizer",
  "vocab-file": "/fsx/pile/20B_tokenizer.json", # point this to the tokenizer retrieved in step 2

Depending on how much VRAM you have available, you may need to adjust the batch sizes. The total batch size is calculated as Total GPUs * train_micro_batch_size_per_gpu * gradient_accumulation_steps / (pipe-parallel-size * model-parallel-size), and it needs to be kept at 1024 to match the Pythia training batch size:

   "train_micro_batch_size_per_gpu": XXX, # the largest value that fits within your GPU memory
   "gradient_accumulation_steps": 1, # increase this to compensate so that the total batch size stays 1024
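The batch-size formula above can be sketched as a small helper; grad_accum_for and its default values are illustrative, not part of GPT-NeoX:

```python
def grad_accum_for(target_batch: int = 1024, n_gpus: int = 4,
                   micro_batch: int = 32, pipe_parallel: int = 1,
                   model_parallel: int = 1) -> int:
    """Pick gradient_accumulation_steps so that
    n_gpus * micro_batch * grad_accum / (pipe_parallel * model_parallel)
    equals the target total batch size (1024 for Pythia)."""
    replicas = n_gpus // (pipe_parallel * model_parallel)
    steps, rem = divmod(target_batch, replicas * micro_batch)
    if rem:
        raise ValueError("choose a micro batch that divides the target batch")
    return steps

print(grad_accum_for(n_gpus=4, micro_batch=32))  # 8  -> 4 * 32 * 8 = 1024
```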

If you would like your weights to be saved, add that information to the yaml file as well. For example, to save in the checkpoints folder, at the bottom you can add:

  "launcher": "slurm",
  "deepspeed_slurm": false,

  "save": "checkpoints",
  "load": "checkpoints",
  "checkpoint_validation_with_forward_pass": False,

Make sure the paths are the paths from inside your docker container, and if you want the weights to persist, make sure that they are accessible from outside the container, for example in /workspace/.

You should now be able to start training your model by running:

python deepy.py train.py pythia-160m-deduped.yml  2>&1 | tee output.txt

The output will be saved to output.txt; if you don't want that, just remove the trailing 2>&1 | tee output.txt.

In order to convert your model to the Hugging Face transformers format, you can use the script tools/convert_to_hf.py from within the GPT-NeoX library. You may have to add from typing import List to the top of the file and change the line here from list[torch.Tensor] to List[torch.Tensor]. You can then run the script as follows to convert the weights at step 143000:

python tools/convert_to_hf.py --input_dir checkpoints/global_step143000/ --config_file checkpoints/global_step143000/configs/pythia-70m.yml --output_dir ./output/

This should output a file structure similar to the one found at https://huggingface.co/EleutherAI/pythia-70m-deduped/tree/main.

❗ Sometimes people find that they don't end up with the right tokenizer, for reasons we have been unable to debug. If your tokenizer_config.json looks different than the one here, or your special_tokens_map.json looks different than the one here, you may need to replace them with the ones on Huggingface.

To run evaluations using our evaluation library, install the containers here (tested with versions 4.28 and 4.29). After setting up that docker container, run:

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

as outlined in the Harness repository. You should then be able to run the benchmark by pointing it at your weights (which should be in your container) by running a command similar to this:

python3 main.py --model hf-causal-experimental  --model_args pretrained=../gpt-neox/output/ --tasks lambada_openai,piqa,winogrande,arc_easy,sciq,wikitext --device cuda:0

Exploring the Dataset

We provide a tool to view particular portions of the training dataloader used by all models during training, at utils/batch_viewer.py.

First, we need to clone the Pythia repository:

git clone https://github.com/EleutherAI/pythia

Next, we must install dependencies:

pip install torch==1.13.0+cu117 -f https://download.pytorch.org/whl/torch/
pip install numpy tqdm huggingface_hub

Next, we must download the appropriate dataset. We provide preshuffled versions of both the standard and the deduplicated Pile. Download the appropriate one using Huggingface's utilities as follows:

Tip: Make sure to replace path/to/* with the paths where you intend to save datasets downloaded from Huggingface.

  • To download standard version, use
    from huggingface_hub import hf_hub_download
    hf_hub_download(repo_id="EleutherAI/pile-standard-pythia-preshuffled", repo_type="dataset", cache_dir="path/to/local/folder")
    
  • To download the deduped version, use
    from huggingface_hub import hf_hub_download
    hf_hub_download(repo_id="EleutherAI/pile-deduped-pythia-preshuffled", repo_type="dataset", cache_dir="path/to/local/folder")
    

You can now merge the files using the script utils/unshard_memmap.py:

python3 utils/unshard_memmap.py --input_file "path/to/local/folder/document-00000-of-00020.bin" --num_shards 21 --output_dir "path/to/merged/folder/"

Make sure to also copy the index file to the merged folder, using the command:

cp path/to/local/folder/document.idx path/to/merged/folder/document.idx

Now we're all set up to run utils/batch_viewer.py!

python3 utils/batch_viewer.py \
  --start_iteration 0 \
  --end_iteration 1000 \
  --load_path path/to/merged/folder/document \
  --save_path path/to/save/folder/ \
  --conf_dir utils/dummy_config.yml 

This will save a separate file containing all the indices as a numpy array.

You can now load this file using numpy:

import numpy as np

indices = np.load("path/to/save/folder/indicies.npy")

This array contains the tokenized sequences as integers of shape (N, 2049), where each integer corresponds to a unique token index. Note that documents are concatenated and separated by an EOD token, so each sample or batch may not start with an EOD token. During training, target tokens are the input tokens shifted left by 1; thus a model with sequence length 2048 requires sequences of length 2049 for training. (For more info, refer to this comment.)
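The shift described above can be sketched as follows, using a stand-in array in place of real token ids:

```python
import numpy as np

# Each 2049-token sample yields 2048 input tokens and 2048 target tokens,
# where the targets are the inputs shifted left by one position.
batch = np.arange(2 * 2049).reshape(2, 2049)  # stand-in for loaded token ids
inputs, targets = batch[:, :-1], batch[:, 1:]

assert inputs.shape == targets.shape == (2, 2048)
assert (targets[:, :-1] == inputs[:, 1:]).all()  # targets lead inputs by one token
```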

Pythia Paper Replication

We provide further information for those interested in replicating the case studies performed in the Pythia suite paper in the case-studies/ folder of this repository.

Benchmark Scores

We also provide benchmark 0-shot and 5-shot results on a variety of NLP datasets:

  • ARC-challenge (arc_challenge)
  • ARC-easy (arc_easy)
  • BLiMP (blimp_*)
  • Lambada (lambada_openai)
  • LogiQA (logiqa)
  • MMLU (hendrycksTest*)
  • PiQA (piqa)
  • SciQ (sciq)
  • Wikitext (wikitext)
  • Winogrande (winogrande)
  • WSC (wsc)

Evaluations were performed in GPT-NeoX using the LM Evaluation Harness and are viewable by model and step at evals/pythia-v1/*/* in this repository. Warning: these evaluations were run with an old commit of the language model evaluation harness and may not be reproducible with current versions.

Research Building on Pythia

Our primary goal with the Pythia project is to enable research on topics including interpretability and learning dynamics at EleutherAI and in the community writ large. Here we document select papers using our models, focusing on work that is uniquely empowered by the Pythia suite and would be less feasible or infeasible with models released by other organizations. For a larger list of papers citing Pythia, see here.

Language model internals

Learning dynamics

How training data determines model behavior

Security, auditing, and compliance research

Citation Details

If you use the Pythia models in your research, please cite our paper via:

@inproceedings{biderman2023pythia,
  title={Pythia: A suite for analyzing large language models across training and scaling},
  author={Biderman, Stella and Schoelkopf, Hailey and Anthony, Quentin Gregory and Bradley, Herbie and O’Brien, Kyle and Hallahan, Eric and Khan, Mohammad Aflah and Purohit, Shivanshu and Prashanth, USVSN Sai and Raff, Edward and others},
  booktitle={International Conference on Machine Learning},
  pages={2397--2430},
  year={2023},
  organization={PMLR}
}

If you use data or results from other papers found in this repository, please cite the corresponding papers. Citation information can be found in the respective READMEs and is also reproduced below for convenience:

@inproceedings{biderman2023emergent,
      title={Emergent and Predictable Memorization in Large Language Models}, 
      author={Biderman, Stella and Prashanth, USVSN Sai and Sutawika, Lintang and Schoelkopf, Hailey and Anthony, Quentin and Purohit, Shivanshu and Raff, Edward},
      booktitle={Advances in Neural Information Processing Systems},
      year={2023}
}

@inproceedings{van2025polypythias,
      title={PolyPythias: Stability and Outliers across Fifty Language Model Pre-Training Runs},
      author={van der Wal, Oskar and Lesci, Pietro and M{\"u}ller-Eberstein, Max and Saphra, Naomi and Schoelkopf, Hailey and Zuidema, Willem and Biderman, Stella},
      booktitle={{The Thirteenth International Conference on Learning Representations}},
      year={2025}
}
If you are interested in citing our training data, training library, or evaluation library you can do so with the following:

@article{gao2020pile,
  title={The pile: An 800gb dataset of diverse text for language modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}

@article{biderman2022datasheet,
  title={Datasheet for the pile},
  author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
  journal={arXiv preprint arXiv:2201.07311},
  year={2022}
}

@software{gpt-neox-library,
  title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
  author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Phang, Jason and Purohit, Shivanshu and Schoelkopf, Hailey and Stander, Dashiell and Songz, Tri and Tigges, Curt and Thérien, Benjamin and Wang, Phil and Weinbach, Samuel},
  url = {https://www.github.com/eleutherai/gpt-neox},
  doi = {10.5281/zenodo.5879544},
  month = {9},
  year = {2023},
  version = {2.0.0},
}

@misc{eval-harness,
  author       = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = sep,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.0.1},
  doi          = {10.5281/zenodo.5371628},
  url          = {https://doi.org/10.5281/zenodo.5371628}
}

License

The following license applies to all code in this GitHub repo, as well as the Pythia models and any other copyrightable artifacts contained in this repository.

   Copyright 2024 EleutherAI

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.