geekan/MetaGPT

🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming

Top Related Projects

  • guidance: A guidance language for controlling large language models.
  • langchain: 🦜🔗 Build context-aware reasoning applications
  • openai-cookbook: Examples and guides for using the OpenAI API
  • chatgpt-retrieval-plugin: The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.

Quick Overview

MetaGPT is a multi-agent framework for AI-driven software development. It uses large language models to simulate a collaborative software company in which AI agents take on roles such as product manager, architect, and engineer to design and implement software from user requirements.

Pros

  • Automates the entire software development process, from requirement analysis to code generation
  • Provides a novel approach to AI-assisted software engineering by simulating a team of specialized agents
  • Offers potential for rapid prototyping and idea validation in software projects
  • Integrates with popular AI models and can be extended with custom agents

Cons

  • May not be suitable for complex or large-scale software projects that require human expertise and nuanced decision-making
  • Relies heavily on the capabilities of underlying language models, which may have limitations or biases
  • Could potentially reduce opportunities for human developers in certain areas of software development
  • May require significant computational resources to run effectively

Code Examples

# Initialize the MetaGPT framework
# (these snippets follow the classic SoftwareCompany interface; newer MetaGPT
# releases expose generate_repo instead, as shown in the README section below)
from metagpt.software_company import SoftwareCompany
from metagpt.roles import ProjectManager, Architect, Engineer

company = SoftwareCompany()
company.hire([ProjectManager(), Architect(), Engineer()])

# Start a new project from a one-line requirement
company.start_project("Create a simple todo list application")

# Customize agent behaviors by subclassing the base Role
from metagpt.roles import Role

class SpecializedEngineer(Role):
    def __init__(self):
        # Constructor arguments follow the Role API (name/profile/goal);
        # adjust for the MetaGPT version you have installed
        super().__init__(
            name="Specialized Engineer",
            profile="Engineer",
            goal="Write Python code for machine learning and data analysis tasks",
        )

company.hire([SpecializedEngineer()])

# Generate and review code
# (illustrative pseudo-API: in MetaGPT the hired Engineer and QA roles write
# and review code internally while the project runs)
generated_code = company.generate_code("Implement a user authentication system")
review_result = company.review_code(generated_code)

print(review_result)

Getting Started

To get started with MetaGPT:

  1. Install the package:

    pip install metagpt
    
  2. Set up your OpenAI API key:

    import os
    os.environ["OPENAI_API_KEY"] = "your-api-key-here"
    
  3. Create a simple project (a complete end-to-end sketch appears at the end of this section):

    from metagpt.software_company import SoftwareCompany
    
    company = SoftwareCompany()
    company.start_project("Build a weather forecast app")
    

For more detailed instructions and advanced usage, refer to the project's documentation on GitHub.
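
Putting the three steps together, a minimal end-to-end script might look like the sketch below. It assumes the classic SoftwareCompany interface used in the examples above and an OpenAI key in your environment; newer MetaGPT releases expose generate_repo instead (see the README below), and exact method signatures may differ between versions.

import asyncio
import os

from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager
from metagpt.software_company import SoftwareCompany

# Assumption: the key is read from the environment; newer versions read ~/.metagpt/config2.yaml
os.environ.setdefault("OPENAI_API_KEY", "your-api-key-here")

async def main():
    company = SoftwareCompany()
    company.hire([ProductManager(), Architect(), ProjectManager(), Engineer()])
    company.invest(3.0)  # budget cap in USD; the run stops if it is exceeded
    company.start_project("Build a weather forecast app")
    await company.run(n_round=5)  # roles exchange messages for up to 5 rounds

asyncio.run(main())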

Competitor Comparisons

Guidance

A guidance language for controlling large language models.

Pros of Guidance

  • More focused on providing a structured approach to prompt engineering and LLM interactions
  • Offers a declarative language for defining LLM-powered programs
  • Provides fine-grained control over LLM outputs with constraints and validation

Cons of Guidance

  • Less comprehensive in terms of multi-agent collaboration and project management
  • May require more technical expertise to utilize effectively
  • Limited in scope compared to MetaGPT's broader software development capabilities

Code Comparison

MetaGPT:

from metagpt.roles import Role

# Illustrative pseudo-API: a role is given a task and runs its SOP
role = Role(name="Product Manager")
role.set_task("Create a product requirements document")
role.run()

Guidance:

program = guidance('''
{{#system~}}
You are a helpful assistant.
{{~/system}}

{{#user~}}
Create a product requirements document for {{product}}.
{{~/user}}

{{#assistant~}}
{{gen 'prd' max_tokens=500}}
{{~/assistant}}
''')

result = program(product="Smart Home Assistant")

Both repositories offer unique approaches to leveraging LLMs for various tasks. MetaGPT focuses on multi-agent collaboration for software development, while Guidance provides a structured framework for prompt engineering and LLM interaction control. The choice between them depends on the specific use case and level of control required over LLM outputs.

LangChain

🦜🔗 Build context-aware reasoning applications

Pros of LangChain

  • More extensive documentation and tutorials
  • Larger community and ecosystem of integrations
  • Flexible architecture for building complex AI applications

Cons of LangChain

  • Steeper learning curve for beginners
  • Can be overwhelming with numerous components and options
  • Requires more setup and configuration for basic tasks

Code Comparison

MetaGPT:

from metagpt.roles import ProjectManager, Architect, ProductManager
from metagpt.team import Team

team = Team()
team.hire([ProjectManager(), Architect(), ProductManager()])
team.run_project("Create a web application for task management")

LangChain:

from langchain import OpenAI, LLMChain
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))

The code comparison shows that MetaGPT focuses on high-level project management and team collaboration, while LangChain provides more granular control over language model interactions and chain composition.

OpenAI Cookbook

Examples and guides for using the OpenAI API

Pros of OpenAI Cookbook

  • Comprehensive collection of examples and best practices for using OpenAI's APIs
  • Regularly updated with new features and improvements from OpenAI
  • Provides code snippets in multiple programming languages

Cons of OpenAI Cookbook

  • Focused solely on OpenAI's products, limiting its scope compared to MetaGPT
  • Less emphasis on end-to-end project development and software engineering practices
  • May require more manual integration and customization for complex projects

Code Comparison

OpenAI Cookbook:

import openai

response = openai.Completion.create(
  engine="text-davinci-002",
  prompt="Translate the following English text to French: 'Hello, world!'",
  max_tokens=60
)
print(response.choices[0].text)

MetaGPT:

from metagpt.roles import ProjectManager, ProductManager, Architect, Engineer
from metagpt.team import Team

company = Team()
company.hire([ProjectManager(), ProductManager(), Architect(), Engineer()])
company.run_project("Create a web application for task management")

The OpenAI Cookbook focuses on direct API usage, while MetaGPT provides a higher-level abstraction for software development processes using AI agents.

chatgpt-retrieval-plugin

The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.

Pros of chatgpt-retrieval-plugin

  • Focused on document retrieval and integration with ChatGPT
  • Provides a ready-to-use solution for enhancing ChatGPT with external data
  • Offers flexibility in choosing vector database backends

Cons of chatgpt-retrieval-plugin

  • Limited to retrieval tasks, not a full-fledged development framework
  • Requires integration with OpenAI's API, which may have associated costs
  • Less emphasis on multi-agent collaboration and complex task automation

Code Comparison

chatgpt-retrieval-plugin:

# Illustrative sketch of the plugin's FastAPI setup (module paths simplified)
from fastapi import FastAPI
from routes.chat import chat_router
from routes.upsert import upsert_router

app = FastAPI()
app.include_router(chat_router)
app.include_router(upsert_router)

MetaGPT:

from metagpt.roles import ProjectManager, Architect, ProductManager
from metagpt.team import Team

team = Team()
team.hire([ProjectManager(), Architect(), ProductManager()])
team.run(idea="Create a task management app")

The code snippets highlight the different focus areas of the two projects. chatgpt-retrieval-plugin emphasizes API setup for retrieval tasks, while MetaGPT showcases its multi-agent collaboration approach for complex software development tasks.

TaskMatrix

Pros of TaskMatrix

  • More focused on task decomposition and execution for specific domains
  • Integrates with external tools and APIs for enhanced capabilities
  • Provides a visual interface for task planning and execution

Cons of TaskMatrix

  • Less comprehensive documentation compared to MetaGPT
  • Smaller community and fewer contributors
  • More limited in scope, primarily focused on task-oriented applications

Code Comparison

TaskMatrix:

# Illustrative pseudo-code for TaskMatrix-style task decomposition
task = TaskMatrix(task_description)
subtasks = task.decompose()
for subtask in subtasks:
    subtask.execute()

MetaGPT:

from metagpt.roles import ProductManager, Architect, ProjectManager, Engineer
from metagpt.team import Team

team = Team()
team.hire([ProductManager(), Architect(), ProjectManager(), Engineer()])
team.run_project("Create a web application")

Summary

TaskMatrix excels in task decomposition and execution for specific domains, offering integration with external tools and a visual interface. However, it has less comprehensive documentation and a smaller community compared to MetaGPT. TaskMatrix is more focused on task-oriented applications, while MetaGPT provides a broader framework for software development processes. The code comparison shows TaskMatrix's emphasis on task decomposition and execution, whereas MetaGPT simulates a full development team with different roles.

README

MetaGPT: The Multi-Agent Framework

MetaGPT logo: Enable GPT to work in software company, collaborating to tackle more complex tasks.

Assign different roles to GPTs to form a collaborative entity for complex tasks.

News

🚀 Mar. 29, 2024: v0.8.0 released. Now you can use Data Interpreter (arxiv, example, code) via pypi package import. Meanwhile, we integrated the RAG module and added support for multiple new LLMs.

🚀 Feb. 08, 2024: v0.7.0 released, supporting assigning different LLMs to different Roles. We also introduced Data Interpreter, a powerful agent capable of solving a wide range of real-world problems.

🚀 Jan. 16, 2024: Our paper MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework was accepted for oral presentation (top 1.2%) at ICLR 2024, ranking #1 in the LLM-based Agent category.

🚀 Jan. 03, 2024: v0.6.0 released. New features include serialization, an upgraded OpenAI package, support for multiple LLMs, a minimal example for debate, and more.

🚀 Dec. 15, 2023: v0.5.0 released, introducing experimental features such as incremental development, multilingual support, multiple programming languages, etc.

🔥 Nov. 08, 2023: MetaGPT is selected into Open100: Top 100 Open Source achievements.

🔥 Sep. 01, 2023: MetaGPT tops GitHub Trending Monthly for the 17th time in August 2023.

🌟 Jun. 30, 2023: MetaGPT is now open source.

🌟 Apr. 24, 2023: First line of MetaGPT code committed.

Software Company as Multi-Agent System

  1. MetaGPT takes a one-line requirement as input and outputs user stories / competitive analysis / requirements / data structures / APIs / documents, etc.
  2. Internally, MetaGPT includes product managers / architects / project managers / engineers. It provides the entire process of a software company along with carefully orchestrated SOPs.
    1. Code = SOP(Team) is the core philosophy. We materialize SOPs and apply them to teams composed of LLMs (see the sketch after the schematic below).

A software company consists of LLM-based roles

Software Company Multi-Agent Schematic (Gradually Implementing)
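
The Code = SOP(Team) philosophy maps directly onto the library: LLM-backed roles are hired into a Team, the team is given a budget, and the SOP runs for a number of rounds while messages flow from one role to the next. A minimal sketch, assuming MetaGPT's documented Team API (exact signatures may vary between versions):

import asyncio

from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager
from metagpt.team import Team

async def main():
    company = Team()
    company.hire([ProductManager(), Architect(), ProjectManager(), Engineer()])
    company.invest(investment=3.0)              # budget in USD; the run stops once it is spent
    company.run_project("Create a 2048 game")   # the one-line requirement
    await company.run(n_round=5)                # each round lets roles react to new messages

asyncio.run(main())

Each role executes its own SOP (write the PRD, design the system, break down tasks, implement code), and the Team routes their messages so one role's output becomes the next role's input.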

Get Started

Installation

Ensure that Python 3.9+ is installed on your system. You can check this by using: python --version.
You can use conda like this: conda create -n metagpt python=3.9 && conda activate metagpt

pip install --upgrade metagpt
# or `pip install --upgrade git+https://github.com/geekan/MetaGPT.git`
# or `git clone https://github.com/geekan/MetaGPT && cd MetaGPT && pip install --upgrade -e .`

For detailed installation guidance, please refer to cli_install or docker_install

Configuration

You can initialize the MetaGPT configuration by running the following command, or by manually creating the ~/.metagpt/config2.yaml file:

# Check https://docs.deepwisdom.ai/main/en/guide/get_started/configuration.html for more details
metagpt --init-config  # it will create ~/.metagpt/config2.yaml, just modify it to your needs

You can configure ~/.metagpt/config2.yaml according to the example and doc:

llm:
  api_type: "openai"  # or azure / ollama / groq etc. Check LLMType for more options
  model: "gpt-4-turbo"  # or gpt-3.5-turbo
  base_url: "https://api.openai.com/v1"  # or forward url / other llm url
  api_key: "YOUR_API_KEY"

Usage

After installation, you can use MetaGPT from the CLI:

metagpt "Create a 2048 game"  # this will create a repo in ./workspace

or use it as a library:

from metagpt.software_company import generate_repo, ProjectRepo
repo: ProjectRepo = generate_repo("Create a 2048 game")  # or ProjectRepo("<path>")
print(repo)  # it will print the repo structure with files

You can also use Data Interpreter to write code:

import asyncio
from metagpt.roles.di.data_interpreter import DataInterpreter

async def main():
    di = DataInterpreter()
    await di.run("Run data analysis on sklearn Iris dataset, include a plot")

asyncio.run(main())  # or await main() in a jupyter notebook setting

QuickStart & Demo Video

https://github.com/geekan/MetaGPT/assets/34952977/34345016-5d13-489d-b9f9-b82ace413419

Tutorial

Support

Discord Join US

📢 Join Our Discord Channel! Looking forward to seeing you there! 🎉

Contributor form

📝 Fill out the form to become a contributor. We are looking forward to your participation!

Contact Information

If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions!

We will respond to all questions within 2-3 business days.

Citation

To stay updated with the latest research and development, follow @MetaGPT_ on Twitter.

To cite MetaGPT or Data Interpreter in publications, please use the following BibTeX entries.

@inproceedings{hong2024metagpt,
      title={Meta{GPT}: Meta Programming for A Multi-Agent Collaborative Framework},
      author={Sirui Hong and Mingchen Zhuge and Jonathan Chen and Xiawu Zheng and Yuheng Cheng and Jinlin Wang and Ceyao Zhang and Zili Wang and Steven Ka Shing Yau and Zijuan Lin and Liyang Zhou and Chenyu Ran and Lingfeng Xiao and Chenglin Wu and J{\"u}rgen Schmidhuber},
      booktitle={The Twelfth International Conference on Learning Representations},
      year={2024},
      url={https://openreview.net/forum?id=VtmBAGCN7o}
}
@misc{hong2024data,
      title={Data Interpreter: An LLM Agent For Data Science}, 
      author={Sirui Hong and Yizhang Lin and Bang Liu and Bangbang Liu and Binhao Wu and Danyang Li and Jiaqi Chen and Jiayi Zhang and Jinlin Wang and Li Zhang and Lingyao Zhang and Min Yang and Mingchen Zhuge and Taicheng Guo and Tuo Zhou and Wei Tao and Wenyi Wang and Xiangru Tang and Xiangtao Lu and Xiawu Zheng and Xinbing Liang and Yaying Fei and Yuheng Cheng and Zongze Xu and Chenglin Wu},
      year={2024},
      eprint={2402.18679},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}