
LAION-AI / Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.


Top Related Projects

  • gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
  • petals: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
  • stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data.
  • DeepSpeed: A deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • llama: Inference code for Llama models

Quick Overview

Open-Assistant is an open-source project aimed at creating a large language model assistant that is free and accessible to everyone. It focuses on developing conversational AI capabilities through a collaborative effort, involving data collection, model training, and evaluation.

Pros

  • Open-source and community-driven, promoting transparency and collaboration
  • Aims to create a free alternative to proprietary AI assistants
  • Supports multiple languages and diverse use cases
  • Actively involves the community in data collection and model improvement

Cons

  • May face challenges in competing with well-funded proprietary AI assistants
  • Potential for inconsistent quality due to community-sourced data
  • Requires significant computational resources for training and deployment
  • May struggle with maintaining long-term sustainability and support

Getting Started

To contribute to the Open-Assistant project:

  1. Visit the Open-Assistant GitHub repository (see the clone command below)
  2. Read the project documentation and contribution guidelines
  3. Join the project's Discord community for discussions and updates
  4. Participate in data collection, model training, or code development as per your expertise
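
For step 1, you can clone the repository locally with git:

git clone https://github.com/LAION-AI/Open-Assistant.git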

Note: As this is not a code library but a collaborative AI development project, there are no specific code examples or quick start instructions for using the assistant directly. Instead, contributors are encouraged to participate in the development process through the provided channels.

Competitor Comparisons

gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Pros of gpt-neox

  • Focused on large-scale language model training and deployment
  • Extensive documentation and tutorials for model training
  • Optimized for distributed training on multiple GPUs/nodes

Cons of gpt-neox

  • Less emphasis on creating an interactive assistant
  • Requires more technical expertise to use effectively
  • Limited built-in tools for fine-tuning on specific tasks

Code Comparison

gpt-neox:

# Schematic example only: real gpt-neox runs are driven by YAML configs
# and the deepy.py launcher; this simplified API is illustrative.
from megatron.neox_arguments import NeoXArgs
from megatron.global_vars import set_global_variables, get_tokenizer
from megatron.neox_model import GPTNeoX

neox_args = NeoXArgs.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoX(neox_args)

Open-Assistant:

# Illustrative sketch: the real inference service is a FastAPI server in
# the repository's inference/ folder; OasstInferenceServer is hypothetical.
from oasst_shared.schemas import inference
from oasst_inference_server import OasstInferenceServer

server = OasstInferenceServer()
response = server.generate_response(inference.Message(content="Hello"))

The code snippets demonstrate the different focus areas of the projects. gpt-neox emphasizes model initialization and training, while Open-Assistant provides a more user-friendly interface for generating responses using a pre-trained model.


petals: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Pros of Petals

  • Focuses on distributed inference of large language models
  • Allows running models on consumer hardware through collaborative computing
  • Supports a variety of pre-trained models, including BLOOM and GPT-J

Cons of Petals

  • More specialized in scope compared to Open-Assistant's broader AI assistant framework
  • Less emphasis on fine-tuning and customization of models
  • Smaller community and contributor base

Code Comparison

Open-Assistant:

# Hypothetical client shown for illustration; Open-Assistant does not
# ship an oasst_api package like this.
from oasst_api import OAsstAPI

api = OAsstAPI()
response = api.generate_text("Tell me a joke")
print(response)

Petals:

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Following the Petals README: model layers are served by the public swarm
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-petals")
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom-petals")
inputs = tokenizer("Once upon a time", return_tensors="pt")["input_ids"]
print(tokenizer.decode(model.generate(inputs, max_new_tokens=50)[0]))

Both projects aim to make large language models more accessible, but they take different approaches. Open-Assistant focuses on creating an open-source AI assistant framework, while Petals emphasizes distributed inference of existing models. Open-Assistant offers a more comprehensive solution for building AI assistants, including data collection and model training. Petals, on the other hand, excels in allowing users to run large models on consumer hardware through collaborative computing.

stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data.

Pros of Stanford Alpaca

  • Focused on fine-tuning LLaMA on instruction-following data generated with the self-instruct method
  • Provides a simple and reproducible method for creating instruction-following models
  • Includes a dataset of 52K instructions for fine-tuning

Cons of Stanford Alpaca

  • Limited scope compared to Open-Assistant's broader goals
  • Less community-driven development and contribution
  • Lacks the extensive tooling and infrastructure of Open-Assistant

Code Comparison

Stanford Alpaca:

def generate_prompt(instruction, input=None):
    if input:
        return f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    else:
        return f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

Open-Assistant:

# Simplified illustration; Open-Assistant models actually mark turns with
# special tokens such as <|prompter|> and <|assistant|>.
def format_human_message(message: str) -> str:
    return f"Human: {message}"

def format_assistant_message(message: str) -> str:
    return f"Assistant: {message}"

Both projects aim to improve language models, but Stanford Alpaca focuses on fine-tuning existing models, while Open-Assistant takes a more comprehensive approach to building an open-source AI assistant. Open-Assistant offers a broader scope and more extensive community involvement, whereas Stanford Alpaca provides a simpler, more focused method for creating instruction-following models.


DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Highly optimized for large-scale model training and inference
  • Extensive documentation and tutorials for easy adoption
  • Supports various distributed training scenarios and hardware configurations

Cons of DeepSpeed

  • Primarily focused on performance optimization, not a complete AI assistant framework
  • Steeper learning curve for beginners in deep learning
  • Less emphasis on community-driven development and open collaboration

Code Comparison

Open-Assistant:

# Hypothetical client shown for illustration; Open-Assistant does not
# ship an oasst_api package like this.
from oasst_api import OAsstAPI

api = OAsstAPI()
response = api.generate_response("Hello, how are you?")
print(response)

DeepSpeed:

import deepspeed
import torch

# MyModel, ds_config, batch_size and seq_len are placeholders
model = MyModel()
# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler)
engine, _, _, _ = deepspeed.initialize(model=model, config=ds_config)
output = engine(torch.randn(batch_size, seq_len))

Summary

Open-Assistant is a community-driven project aimed at creating an open-source AI assistant, while DeepSpeed is a deep learning optimization library. Open-Assistant focuses on building a complete AI system, whereas DeepSpeed provides tools for efficient training and inference of large models. DeepSpeed offers superior performance optimization but requires more technical expertise, while Open-Assistant aims for accessibility and collaborative development.


llama: Inference code for Llama models

Pros of Llama

  • Developed by Meta, leveraging extensive resources and expertise
  • Offers a range of model sizes, from 7B to 65B parameters
  • Provides pre-trained models with strong performance across various tasks

Cons of Llama

  • Restricted access due to licensing and application process
  • Limited community involvement in model development and improvement
  • Less focus on open collaboration and transparency

Code Comparison

Open-Assistant:

# Hypothetical client shown for illustration; Open-Assistant does not
# ship an oasst_api package like this.
from oasst_api import OAsstAPI

api = OAsstAPI()
response = api.generate_text("What is the capital of France?")
print(response)

Llama:

from llama import Llama

# Paths are placeholders for locally downloaded weights and tokenizer
generator = Llama.build(ckpt_dir="llama-2-7b/", tokenizer_path="tokenizer.model",
                        max_seq_len=128, max_batch_size=1)
result = generator.text_completion(["What is the capital of France?"])
print(result[0]["generation"])

Key Differences

  • Open-Assistant aims for full transparency and community-driven development
  • Llama focuses on high-performance models with controlled access
  • Open-Assistant encourages contributions from a diverse range of developers
  • Llama benefits from Meta's extensive research and development resources

Both projects contribute significantly to the field of large language models, with Open-Assistant prioritizing openness and collaboration, while Llama emphasizes cutting-edge performance and controlled distribution.


README

Open-Assistant

:memo: NOTE: OpenAssistant is complete, and the project is now finished. Thank you to everyone who contributed! Check out our blog post for more information. The final published oasst2 dataset can be found on HuggingFace at OpenAssistant/oasst2.
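
For reference, the published dataset can be loaded with the Hugging Face datasets library. A minimal sketch (the split and "text" field follow the oasst2 dataset card):

from datasets import load_dataset

# Download the final Open-Assistant conversation dataset from the Hub
ds = load_dataset("OpenAssistant/oasst2")
print(ds["train"][0]["text"])  # text of the first message row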




What is Open Assistant?

Open Assistant is a project meant to give everyone access to a great chat-based large language model.

We believe that by doing this we will create a revolution in innovation in language. In the same way that stable-diffusion helped the world make art and images in new ways, we hope Open Assistant can help improve the world by improving language itself.


How To Try It Out

Chatting with the AI

The chat frontend is now live here. Log in and start chatting! Please try to react with a thumbs up or down for the assistant's responses when chatting.

Contributing to Data Collection

The data collection frontend is now live here. Log in and start taking on tasks! We want to collect a high volume of quality data. By submitting, ranking, and labelling model prompts and responses you will be directly helping to improve the capabilities of Open Assistant.

Running the Development Setup Locally (without chat)

You do not need to run the project locally unless you are contributing to the development process. The website link above will take you to the public website where you can use the data collection app and the chat.

If you would like to run the data collection app locally for development, you can set up an entire stack needed to run Open-Assistant, including the website, backend, and associated dependent services, with Docker.

To start the demo, run this in the root directory of the repository (check this FAQ if you have problems):

docker compose --profile ci up --build --attach-dependencies

Note: when running on macOS with an M1 chip, you have to use: DB_PLATFORM=linux/x86_64 docker compose ...

Then, navigate to http://localhost:3000 (it may take some time to boot up) and interact with the website.

Note: If an issue occurs with the build, please head to the FAQ and check out the entries about Docker.

Note: When logging in via email, navigate to http://localhost:1080 to get the magic email login link.

Note: If you would like to run this in a standardized development environment (a "devcontainer") using vscode locally or in a web browser using GitHub Codespaces, you can use the provided .devcontainer folder.

Running the Development Setup Locally for Chat

You do not need to run the project locally unless you are contributing to the development process. The website link above will take you to the public website where you can use the data collection app and the chat.

Also note that the local setup is only for development and is not meant to be used as a local chatbot, unless you know what you are doing.

If you do know what you are doing, then see the inference folder for getting the inference system up and running, or have a look at --profile inference in addition to --profile ci in the above command.
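
For example, a combined invocation might look like the following (a sketch; check the inference folder's documentation for the exact profiles and environment variables required):

docker compose --profile ci --profile inference up --build --attach-dependencies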

The Vision

We are not going to stop at replicating ChatGPT. We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information, and much more, with the ability to be personalized and extended by anyone. And we want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware.

The Plan

We want to get to an initial MVP as fast as possible, by following the 3 steps outlined in the InstructGPT paper:
  1. Collect high-quality human-generated Instruction-Fulfillment samples (prompt + response), goal >50k. We design a crowdsourced process to collect and review prompts. We do not want to train on flooding/toxic/spam/junk/personal information data. We will have a leaderboard to motivate the community that shows progress and the most active users. Swag will be given to the top contributors.
  2. For each of the collected prompts we will sample multiple completions. Completions of one prompt will then be shown randomly to users to rank them from best to worst. Again, this should happen in a crowd-sourced fashion, which means we need to deal with unreliable, potentially malicious users. Multiple votes by independent users have to be collected to measure the overall agreement. The gathered ranking data will be used to train a reward model (a minimal sketch of such a preference loss follows below).
  3. Then follows the RLHF training phase based on the prompts and the reward model.

We can then take the resulting model and continue with completion sampling step 2 for a next iteration.
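
To make step 2 concrete, a reward model for ranked completions is typically trained with a pairwise preference loss. The snippet below is a minimal, self-contained PyTorch illustration, not the project's actual training code; the toy tensors stand in for a reward model's scores on (preferred, rejected) completion pairs:

import torch
import torch.nn.functional as F

def pairwise_ranking_loss(chosen_rewards, rejected_rewards):
    # Push the reward model to score the human-preferred completion
    # above the rejected one (Bradley-Terry style preference loss).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy scores for a batch of three (preferred, rejected) pairs
chosen = torch.tensor([1.2, 0.3, 0.8])
rejected = torch.tensor([0.4, 0.9, -0.1])
print(pairwise_ranking_loss(chosen, rejected).item())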

Slide Decks

  • Vision & Roadmap
  • Important Data Structures

How You Can Help

All open source projects begin with people like you. Open source is the belief that if we collaborate we can together gift our knowledge and technology to the world for the benefit of humanity.

Check out our contributing guide to get started.