petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Top Related Projects
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
An open-source NLP research library, built on PyTorch.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
TensorFlow code and pre-trained models for BERT
Quick Overview
Petals is an open-source project that enables distributed inference and fine-tuning of large language models. It allows users to run models like BLOOM-176B and LLaMA collaboratively, distributing the computational load across multiple consumer-grade GPUs. This approach makes it possible to work with large models without requiring expensive hardware.
Pros
- Enables access to large language models on consumer hardware
- Supports distributed inference and fine-tuning
- Open-source and community-driven
- Integrates with popular libraries like Hugging Face Transformers
Cons
- May have higher latency compared to running models on a single powerful machine
- Requires coordination of multiple devices, which can be complex
- Performance can be affected by network conditions
- Limited to supported model architectures
Code Examples
- Basic inference with Petals:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-petals")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom-petals")
inputs = tokenizer("Hello, my name is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
- Fine-tuning a model with Petals:
from petals import AutoDistributedModelForCausalLM
from transformers import Trainer, TrainingArguments
# Petals fine-tuning updates only local parameters (e.g., prompts or adapters); the remote blocks stay frozen
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom-petals")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=4),
    train_dataset=your_dataset,  # placeholder for your tokenized dataset
)
trainer.train()
- Using Petals with custom prompts:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-petals")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom-petals")
prompt = "Translate the following English text to French: 'Hello, how are you?'"
inputs = tokenizer(prompt, return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
Getting Started
To get started with Petals, follow these steps:
- Install Petals:
pip install petals
- Import and use Petals in your Python script:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-petals")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom-petals")
inputs = tokenizer("Hello, world!", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
For more detailed instructions and advanced usage, refer to the official Petals documentation.
Competitor Comparisons
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of transformers
- Extensive model support: Covers a wide range of transformer-based models
- Well-documented and actively maintained by a large community
- Seamless integration with other Hugging Face libraries and tools
Cons of transformers
- Higher resource requirements for large models
- Limited support for distributed inference across multiple devices
Code comparison
transformers:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
petals:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=50)
Key differences
- petals focuses on distributed inference for large language models
- transformers provides a more comprehensive toolkit for various NLP tasks
- petals simplifies the process of working with distributed models
- transformers offers more flexibility and customization options
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- More mature and widely adopted project with extensive documentation
- Supports a broader range of optimization techniques and hardware
- Offers integration with popular deep learning frameworks like PyTorch and TensorFlow
Cons of DeepSpeed
- Steeper learning curve for beginners
- Requires more configuration and setup for optimal performance
- May have higher overhead for smaller models or simpler training tasks
Code Comparison
Petals example:
from transformers import AutoTokenizer
from petals import DistributedBloomForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom")
inputs = tokenizer("Once upon a time,", return_tensors="pt")["input_ids"]
output = model.generate(inputs, max_new_tokens=50)
DeepSpeed example:
import deepspeed
# Schematic: args, model (a Hugging Face causal LM), params, and inputs (a tokenized prompt) are defined earlier
model_engine, _, _, _ = deepspeed.initialize(args=args, model=model, model_parameters=params)
output = model_engine.module.generate(inputs, max_length=50)  # inputs: "Once upon a time," tokenized
Both libraries aim to optimize large language model training and inference, but they approach the task differently. Petals focuses on distributed inference of specific models like BLOOM, while DeepSpeed offers a more comprehensive suite of optimization techniques for various deep learning tasks. DeepSpeed provides more flexibility and power, but may require more setup and expertise to use effectively.
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
Pros of gpt-neox
- Designed for training large language models from scratch
- Highly optimized for distributed training on multiple GPUs
- Includes tools for dataset preparation and model evaluation
Cons of gpt-neox
- Requires significant computational resources for training
- Less suitable for inference or fine-tuning pre-trained models
- Steeper learning curve for users new to large-scale model training
Code Comparison
gpt-neox:
from megatron.neox_arguments import NeoXArgs
from megatron.global_vars import set_global_variables, get_args
from megatron.training import pretrain
args = NeoXArgs.from_ymls("configs/your_config.yml")
set_global_variables(args)
pretrain(get_args())
petals:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")
inputs = tokenizer("Once upon a time", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
The code snippets highlight the different focus areas of the two projects. gpt-neox is centered around training large models, while petals emphasizes distributed inference using pre-trained models.
An open-source NLP research library, built on PyTorch.
Pros of AllenNLP
- More comprehensive NLP toolkit with a wider range of pre-built models and tasks
- Extensive documentation and tutorials for easier onboarding
- Larger community and longer development history, leading to better stability
Cons of AllenNLP
- Heavier and more complex framework, potentially steeper learning curve
- Less focused on distributed computing and large language models
- May require more computational resources for some tasks
Code Comparison
AllenNLP example:
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.03.24.tar.gz")
result = predictor.predict(sentence="Did Uriah honestly think he could beat the game in under three hours?")
Petals example:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")
inputs = tokenizer("Once upon a time,", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=30)
AllenNLP focuses on a broader range of NLP tasks with pre-built models, while Petals specializes in distributed inference for large language models. AllenNLP's code example demonstrates semantic role labeling, whereas Petals showcases text generation using a distributed model.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Pros of fairseq
- More comprehensive and feature-rich, supporting a wide range of NLP tasks
- Extensive documentation and examples for various use cases
- Larger community and more frequent updates
Cons of fairseq
- Steeper learning curve due to its complexity
- Requires more computational resources for training and inference
- Less focused on distributed inference of large language models
Code Comparison
fairseq:
from fairseq.models.transformer import TransformerModel
model = TransformerModel.from_pretrained('/path/to/model', 'checkpoint.pt')
tokens = model.encode('Hello world!')
output = model.decode(tokens)
petals:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")
inputs = tokenizer("Hello world!", return_tensors="pt")["input_ids"]
output = model.generate(inputs, max_new_tokens=50)
The code snippets show that fairseq loads a local checkpoint and exposes task-specific encode/decode helpers, while petals downloads only a small part of the model and runs generation over a distributed swarm through the familiar Transformers-style generate() interface. fairseq provides more flexibility for various NLP tasks, whereas petals is specifically designed for efficient distributed inference of large models like BLOOM.
TensorFlow code and pre-trained models for BERT
Pros of BERT
- Well-established and widely adopted in the NLP community
- Extensive documentation and pre-trained models available
- Suitable for a variety of NLP tasks with relatively small models
Cons of BERT
- Limited to smaller model sizes compared to more recent language models
- May not perform as well on complex or specialized tasks as newer architectures
- Requires fine-tuning for specific tasks, which can be resource-intensive
Code Comparison
BERT example:
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
Petals example:
from petals import AutoDistributedModelForCausalLM
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")
BERT focuses on encoder-based models for various NLP tasks, while Petals is designed for distributed inference of large language models. BERT's implementation is more straightforward for traditional NLP tasks, whereas Petals aims to enable the use of massive models across distributed systems.
README
Run large language models at home, BitTorrent-style.
Fine-tuning and inference up to 10x faster than offloading
Generate text with distributed Llama 3.1 (up to 405B), Mixtral (8x22B), Falcon (40B+) or BLOOM (176B) and fine-tune them for your own tasks — right from your desktop computer or Google Colab:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
# Choose any model available at https://health.petals.dev
model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"
# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)
# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0])) # A cat sat on a mat...
🚀 Try now in Colab
🦙 Want to run Llama? Request access to its weights, then run huggingface-cli login in the terminal before loading the model. Or just try it in our chatbot app.
🔏 Privacy. Your data will be processed with the help of other people in the public swarm. Learn more about privacy here. For sensitive data, you can set up a private swarm among people you trust.
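For illustration, a client can be pointed at a private swarm instead of the public one. This is a minimal sketch: the initial_peers argument follows the Petals private-swarm guide, and the multiaddress below is a placeholder for your own bootstrap peer.

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
# Placeholder multiaddress of a bootstrap peer in your private swarm
INITIAL_PEERS = ["/ip4/10.1.2.3/tcp/31337/p2p/<YOUR_PEER_ID>"]
model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Connect only to the peers you trust instead of the public swarm
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, initial_peers=INITIAL_PEERS)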
💬 Any questions? Ping us in our Discord!
Connect your GPU and increase Petals capacity
Petals is a community-run system: we rely on people sharing their GPUs. You can help serve one of the available models or host a new model from the 🤗 Model Hub!
As an example, here is how to host a part of Llama 3.1 (405B) Instruct on your GPU:
🦙 Want to host Llama? Request access to its weights, then run huggingface-cli login in the terminal before loading the model.
🐧 Linux + Anaconda. Run these commands for NVIDIA GPUs (or follow this for AMD):
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install git+https://github.com/bigscience-workshop/petals
python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
🪟 Windows + WSL. Follow this guide on our Wiki.
🐋 Docker. Run our Docker image for NVIDIA GPUs (or follow this for AMD):
sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
learningathome/petals:main \
python -m petals.cli.run_server --port 31330 meta-llama/Meta-Llama-3.1-405B-Instruct
🍏 macOS + Apple M1/M2 GPU. Install Homebrew, then run these commands:
brew install python
python3 -m pip install git+https://github.com/bigscience-workshop/petals
python3 -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
📚 Learn more (how to use multiple GPUs, start the server on boot, etc.)
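As an example, one common way to use multiple GPUs is to start one server process per GPU. This is a sketch assuming two NVIDIA GPUs; see the guide above for the supported options:

CUDA_VISIBLE_DEVICES=0 python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct &
CUDA_VISIBLE_DEVICES=1 python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct &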
🔒 Security. Hosting a server does not allow others to run custom code on your computer. Learn more here.
💬 Any questions? Ping us in our Discord!
🏆 Thank you! Once you load and host 10+ blocks, we can show your name or link on the swarm monitor as a way to say thanks. You can specify them with --public_name YOUR_NAME.
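For example, combining the server command above with a display name (YOUR_NAME is a placeholder):

python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct --public_name YOUR_NAME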
How does it work?
- You load a small part of the model, then join a network of people serving the other parts. Single-batch inference runs at up to 6 tokens/sec for Llama 2 (70B) and up to 4 tokens/sec for Falcon (180B), which is enough for chatbots and interactive apps.
- You can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states (see the sketch after this list). You get the comforts of an API with the flexibility of PyTorch and 🤗 Transformers.
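For example, multi-step generation can reuse a single inference session so that servers keep their attention caches between calls. This is a minimal sketch based on the Petals getting-started tutorial; the inference_session() context manager and the session keyword are assumptions about the client API and may differ between versions.

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)
# Reuse one session so remote servers keep the attention cache between generate() calls
with model.inference_session(max_length=128) as sess:
    inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=10, session=sess)
    # Further generate(..., session=sess) calls continue from the cached state
print(tokenizer.decode(outputs[0]))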
📜 Read paper · 📚 See FAQ
🚀 Tutorials, examples, and more
Basic tutorials:
- Getting started: tutorial
- Prompt-tune Llama-65B for text semantic classification: tutorial
- Prompt-tune BLOOM to create a personified chatbot: tutorial (see the sketch after this list)
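As a reference point for the prompt-tuning tutorials above, training roughly follows this pattern. This is a minimal sketch based on the Petals example notebooks; the tuning_mode and pre_seq_len arguments are assumptions taken from those notebooks, and only the local prompt parameters are trained while the remote blocks stay frozen.

import torch
from petals import DistributedBloomForCausalLM
# "ptune" learns a small set of prompt embeddings locally; remote transformer blocks stay frozen
model = DistributedBloomForCausalLM.from_pretrained(
    "bigscience/bloom-petals", tuning_mode="ptune", pre_seq_len=16
)
# Only parameters that require gradients (the local prompts) are optimized
opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-2)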
Useful tools:
- Chatbot web app (connects to Petals via an HTTP/WebSocket endpoint): source code
- Monitor for the public swarm: source code
Advanced guides:
Benchmarks
Please see Section 3.3 of our paper.
🛠️ Contributing
Please see our FAQ on contributing.
📜 Citations
Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative Inference and Fine-tuning of Large Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). 2023.
@inproceedings{borzunov2023petals,
title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Riabinin, Maksim and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
pages = {558--568},
year = {2023},
url = {https://arxiv.org/abs/2209.01188}
}
Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, and Colin Raffel. Distributed inference and fine-tuning of large language models over the Internet. Advances in Neural Information Processing Systems 36 (2023).
@inproceedings{borzunov2023distributed,
title = {Distributed inference and fine-tuning of large language models over the {I}nternet},
author = {Borzunov, Alexander and Ryabinin, Max and Chumachenko, Artem and Baranchuk, Dmitry and Dettmers, Tim and Belkada, Younes and Samygin, Pavel and Raffel, Colin},
booktitle = {Advances in Neural Information Processing Systems},
volume = {36},
pages = {12312--12331},
year = {2023},
url = {https://arxiv.org/abs/2312.08361}
}
This project is a part of the BigScience research workshop.