Top Related Projects
- langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications
- microsoft/JARVIS: JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
- Significant-Gravitas/AutoGPT: AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
- reworkd/AgentGPT: 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Quick Overview
BabyAGI is an AI-powered task management system that uses OpenAI's language models to create, prioritize, and execute tasks. It demonstrates the potential of AI in automating and optimizing workflow processes, showcasing how AI can assist in breaking down complex goals into manageable tasks.
Pros
- Automates task creation and prioritization
- Demonstrates practical application of AI in workflow management
- Highly customizable and extensible
- Open-source, allowing for community contributions and improvements
Cons
- Requires OpenAI API key, which can be costly for extensive use
- May produce inconsistent or irrelevant tasks depending on the initial prompt
- Limited error handling and robustness
- Potential for recursive or infinite task creation if not properly managed
Code Examples
- Creating a task list:
```python
task_list = TaskList()
initial_task = Task("Develop a marketing strategy for a new product")
task_list.add_task(initial_task)
```
- Executing the main loop:
```python
baby_agi = BabyAGI(task_list, openai_api_key="your-api-key")
baby_agi.run(max_iterations=5)
```
- Customizing the execution agent:
```python
class CustomExecutionAgent(ExecutionAgent):
    def execute_task(self, task):
        # Custom implementation for task execution
        result = f"Executed task: {task.description}"
        return result

baby_agi = BabyAGI(task_list, execution_agent=CustomExecutionAgent())
```
Getting Started
- Clone the repository:
```bash
git clone https://github.com/yoheinakajima/babyagi.git
cd babyagi
```
- Install dependencies:
```bash
pip install -r requirements.txt
```
- Set up your OpenAI API key:
```bash
export OPENAI_API_KEY='your-api-key-here'
```
- Run the main script:
```bash
python babyagi.py
```
Competitor Comparisons
🦜🔗 Build context-aware reasoning applications
Pros of LangChain
- More comprehensive framework for building LLM-powered applications
- Extensive documentation and community support
- Wider range of integrations with various LLM providers and tools
Cons of LangChain
- Steeper learning curve due to its extensive features
- May be overkill for simpler projects or prototypes
- Requires more setup and configuration
Code Comparison
LangChain example:
```python
from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```
BabyAGI example:
```python
import openai

openai.api_key = "your-api-key-here"
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="What is a good name for a company that makes colorful socks?",
    max_tokens=50,
)
print(response.choices[0].text.strip())
```
The LangChain example demonstrates its structured approach with chains and templates, while BabyAGI shows a more straightforward implementation using the OpenAI API directly. LangChain offers more flexibility and abstraction, whereas BabyAGI is simpler but less feature-rich.
Pros of TaskMatrix
- Offers a more comprehensive task planning and execution system
- Integrates with external tools and APIs for enhanced functionality
- Provides a visual interface for task management and progress tracking
Cons of TaskMatrix
- More complex setup and configuration required
- Steeper learning curve for new users
- Potentially higher resource requirements due to additional features
Code Comparison
TaskMatrix:
```python
class TaskMatrix:
    def __init__(self):
        self.tasks = []
        self.tools = []

    def add_task(self, task):
        self.tasks.append(task)

    def execute_tasks(self):
        for task in self.tasks:
            task.execute()
```
BabyAGI:
```python
class BabyAGI:
    def __init__(self):
        self.task_list = []

    def add_task(self, task):
        self.task_list.append(task)

    def run(self):
        while self.task_list:
            task = self.task_list.pop(0)
            self.execute_task(task)
```
TaskMatrix offers a more structured approach with separate task and tool management, while BabyAGI provides a simpler implementation focused on task execution. TaskMatrix's code suggests greater flexibility in handling various task types and external tools, whereas BabyAGI's code emphasizes a straightforward task processing loop.
JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
Pros of JARVIS
- More comprehensive and feature-rich, offering a wider range of AI capabilities
- Better integration with various tools and platforms, enhancing versatility
- Stronger support and documentation from Microsoft
Cons of JARVIS
- More complex setup and configuration process
- Potentially higher resource requirements due to its extensive features
Code Comparison
JARVIS:
```python
from jarvis import Jarvis

jarvis = Jarvis()
response = jarvis.process_command("Summarize the latest news")
print(response)
```
BabyAGI:
```python
from babyagi import BabyAGI

baby_agi = BabyAGI()
task = "Summarize the latest news"
result = baby_agi.execute_task(task)
print(result)
```
Summary
JARVIS offers a more robust and feature-rich AI system with better integration capabilities, while BabyAGI provides a simpler, more focused approach to AI task execution. JARVIS may require more setup and resources, but it offers a wider range of functionalities. BabyAGI, on the other hand, is easier to get started with but may have limitations in terms of advanced features and integrations.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Pros of AutoGPT
- More advanced and feature-rich, offering a wider range of capabilities
- Better user interface and interaction, including a web-based GUI
- Supports multiple AI models and has a more active development community
Cons of AutoGPT
- More complex to set up and use, requiring more configuration
- Higher computational requirements and potentially higher costs
- May be overkill for simpler tasks that BabyAGI can handle efficiently
Code Comparison
AutoGPT:
```python
def start_interaction_loop(self):
    # Initialize variables for the interaction loop
    loop_count = 0
    command_name = None
    arguments = None
    user_input = ""
```
BabyAGI:
```python
def run_baby_agi(objective, initial_task, llm, execution_agent):
    task_list = deque([])
    task_id_counter = 0
    task_list.append({"task_id": task_id_counter, "task_name": initial_task})
```
Both projects aim to create autonomous AI agents, but AutoGPT offers a more comprehensive solution with advanced features and better user interaction. However, this comes at the cost of increased complexity and resource requirements. BabyAGI, on the other hand, provides a simpler and more lightweight approach, making it easier to understand and implement for basic tasks.
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Pros of AgentGPT
- User-friendly web interface for easy interaction and task management
- Supports multiple language models, including GPT-3.5 and GPT-4
- Offers a more comprehensive agent ecosystem with various agent types
Cons of AgentGPT
- More complex setup and configuration process
- Requires more computational resources due to its advanced features
- Less focused on specific task automation compared to BabyAGI
Code Comparison
BabyAGI:
```python
def add_task(task: Dict):
    task.update({"task_id": len(task_list) + 1})
    task_list.append(task)

def get_ada_embedding(text):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], model="text-embedding-ada-002")["data"][0]["embedding"]
```
AgentGPT:
```typescript
const createAgent = async (goal: string, modelName: string) => {
  const agent = new Agent(goal, modelName);
  await agent.start();
  return agent;
};

const executeTask = async (agent: Agent, task: string) => {
  const result = await agent.executeTask(task);
  return result;
};
```
README
Objective
This Python script is an example of an AI-powered task management system. The system uses OpenAI and vector databases such as Chroma or Weaviate to create, prioritize, and execute tasks. The main idea behind this system is that it creates tasks based on the result of previous tasks and a predefined objective. The script then uses OpenAI's natural language processing (NLP) capabilities to create new tasks based on the objective, and Chroma/Weaviate to store and retrieve task results for context. This is a pared-down version of the original Task-Driven Autonomous Agent (Mar 28, 2023).
This README will cover the following:
- How It Works
- How to Use
- Supported Models
- Warning about running the script continuously
How It Works
The script works by running an infinite loop that does the following steps (a minimal code sketch follows the list):
- Pulls the first task from the task list.
- Sends the task to the execution agent, which uses OpenAI's API to complete the task based on the context.
- Enriches the result and stores it in Chroma/Weaviate.
- Creates new tasks and reprioritizes the task list based on the objective and the result of the previous task.
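Here is a minimal sketch of that loop in Python. The agent function names match the ones described in the sections below, while `OBJECTIVE`, `INITIAL_TASK`, and the `store_result` helper are stand-ins for the script's configuration and vector-store write, so treat this as an illustration rather than the script itself.

```python
from collections import deque

# Seed the queue with the initial task (illustrative; the real script also
# handles configuration, logging, and a short sleep between iterations).
task_id_counter = 1
task_list = deque([{"task_id": task_id_counter, "task_name": INITIAL_TASK}])

while task_list:
    # 1. Pull the first task from the task list.
    task = task_list.popleft()

    # 2. Send the task to the execution agent, which uses OpenAI's API
    #    to complete the task based on the context.
    result = execution_agent(OBJECTIVE, task["task_name"])

    # 3. Enrich the result and store it in Chroma/Weaviate
    #    (store_result is a placeholder for the vector-store write).
    store_result(task, result, result_id=f"result_{task['task_id']}")

    # 4. Create new tasks and reprioritize the task list based on the
    #    objective and the result of the previous task.
    new_tasks = task_creation_agent(
        OBJECTIVE, result, task["task_name"],
        [t["task_name"] for t in task_list],
    )
    for new_task in new_tasks:
        task_id_counter += 1
        new_task["task_id"] = task_id_counter
        task_list.append(new_task)
    prioritization_agent(task["task_id"])
```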
The `execution_agent()` function is where the OpenAI API is used. It takes two parameters: the objective and the task. It then sends a prompt to OpenAI's API, which returns the result of the task. The prompt consists of a description of the AI system's task, the objective, and the task itself. The result is then returned as a string.
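A minimal sketch of such a function, assuming an `openai_call` wrapper around the completion API and a `context_agent` helper that queries the vector store (both names are illustrative stand-ins, not spelled out in this README):

```python
def execution_agent(objective: str, task: str) -> str:
    # Retrieve previously completed tasks from the vector store for context
    # (context_agent is an assumed helper).
    context = context_agent(query=objective, top_results_num=5)
    prompt = (
        "You are an AI who performs one task based on the following objective: "
        f"{objective}\n"
        f"Take into account these previously completed tasks: {context}\n"
        f"Your task: {task}\nResponse:"
    )
    # openai_call is an assumed wrapper around OpenAI's completion/chat API;
    # the result of the task comes back as a string.
    return openai_call(prompt, max_tokens=2000)
```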
The `task_creation_agent()` function is where OpenAI's API is used to create new tasks based on the objective and the result of the previous task. The function takes four parameters: the objective, the result of the previous task, the task description, and the current task list. It then sends a prompt to OpenAI's API, which returns a list of new tasks as strings. The function then returns the new tasks as a list of dictionaries, where each dictionary contains the name of the task.
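A sketch of what that can look like, again using the assumed `openai_call` wrapper:

```python
def task_creation_agent(objective: str, result: str, task_description: str,
                        task_list: list) -> list:
    prompt = (
        "You are a task creation AI that creates new tasks with the objective: "
        f"{objective}\n"
        f"The last completed task was: {task_description}\n"
        f"Its result was: {result}\n"
        f"These tasks are still incomplete: {', '.join(task_list)}\n"
        "Return the new tasks, one per line."
    )
    response = openai_call(prompt)
    new_tasks = [line.strip() for line in response.split("\n") if line.strip()]
    # Each new task is returned as a dictionary containing the task name.
    return [{"task_name": task_name} for task_name in new_tasks]
```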
The `prioritization_agent()` function is where OpenAI's API is used to reprioritize the task list. The function takes one parameter, the ID of the current task. It sends a prompt to OpenAI's API, which returns the reprioritized task list as a numbered list.
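A sketch along the same lines; it rewrites the shared task queue in place based on the model's numbered list (`openai_call` and `OBJECTIVE` are the same assumed names as above):

```python
from collections import deque

def prioritization_agent(this_task_id: int) -> None:
    global task_list  # the shared task queue from the main loop
    task_names = [t["task_name"] for t in task_list]
    next_task_id = this_task_id + 1
    prompt = (
        "You are a task prioritization AI. Clean up and reprioritize these tasks: "
        f"{task_names}\n"
        f"Consider the ultimate objective: {OBJECTIVE}\n"
        f"Return the result as a numbered list starting with {next_task_id}."
    )
    response = openai_call(prompt)
    # Parse the numbered list back into the task queue.
    task_list = deque()
    for line in response.split("\n"):
        if "." in line:
            task_id, task_name = line.split(".", 1)
            task_list.append({"task_id": task_id.strip(),
                              "task_name": task_name.strip()})
```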
Finally, the script uses Chroma/Weaviate to store and retrieve task results for context. The script creates a Chroma/Weaviate collection based on the table name specified in the TABLE_NAME variable. Chroma/Weaviate is then used to store the results of the task in the collection, along with the task name and any additional metadata.
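For the Chroma case, the storage side can look roughly like this. It is a sketch assuming `TABLE_NAME` holds the collection name; the Weaviate path and any custom embedding configuration differ.

```python
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection(name=TABLE_NAME)

def store_result(task: dict, result: str, result_id: str) -> None:
    # Store the enriched result in the collection, along with the task name
    # as metadata, so later tasks can retrieve it as context.
    collection.add(
        ids=[result_id],
        documents=[result],
        metadatas=[{"task": task["task_name"]}],
    )
```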
How to Use
To use the script, you will need to follow these steps:
- Clone the repository via `git clone https://github.com/yoheinakajima/babyagi.git` and `cd` into the cloned repository.
- Install the required packages: `pip install -r requirements.txt`
- Copy the .env.example file to .env: `cp .env.example .env`. This is where you will set the following variables (a loading sketch follows this list).
- Set your OpenAI API key in the OPENAI_API_KEY and OPENAI_API_MODEL variables. In order to use with Weaviate you will also need to set up additional variables detailed here.
- Set the name of the table where the task results will be stored in the TABLE_NAME variable.
- (Optional) Set the name of the BabyAGI instance in the BABY_NAME variable.
- (Optional) Set the objective of the task management system in the OBJECTIVE variable.
- (Optional) Set the first task of the system in the INITIAL_TASK variable.
- Run the script: `python babyagi.py`
All optional values above can also be specified on the command line.
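As a rough illustration of how the variables above map into the script, here is a sketch using python-dotenv; the defaults shown are illustrative, not necessarily the script's own.

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file created from .env.example

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
OPENAI_API_MODEL = os.getenv("OPENAI_API_MODEL", "gpt-3.5-turbo")
TABLE_NAME = os.getenv("TABLE_NAME", "babyagi")
BABY_NAME = os.getenv("BABY_NAME", "BabyAGI")                     # optional
OBJECTIVE = os.getenv("OBJECTIVE", "Solve world hunger")          # optional
INITIAL_TASK = os.getenv("INITIAL_TASK", "Develop a task list")   # optional
```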
Running inside a docker container
As a prerequisite, you will need docker and docker-compose installed. Docker Desktop is the simplest option: https://www.docker.com/products/docker-desktop/
To run the system inside a docker container, set up your .env file as per the steps above and then run the following:
```bash
docker-compose up
```
Supported Models
This script works with all OpenAI models, as well as Llama and its variations through Llama.cpp. The default model is gpt-3.5-turbo. To use a different model, specify it through the LLM_MODEL variable or on the command line.
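A hedged sketch of how the model choice can be read from the environment; the script's actual selection logic may differ in details:

```python
import os

# Fall back to OPENAI_API_MODEL, then to the default, if LLM_MODEL is unset.
LLM_MODEL = os.getenv("LLM_MODEL", os.getenv("OPENAI_API_MODEL", "gpt-3.5-turbo")).lower()

if LLM_MODEL.startswith("llama"):
    # Handled by the Llama.cpp path described in the next section.
    print(f"Using local Llama model: {LLM_MODEL}")
else:
    # Any OpenAI model name works here, e.g. gpt-3.5-turbo or gpt-4.
    print(f"Using OpenAI model: {LLM_MODEL}")
```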
Llama
Llama integration requires llama-cpp package. You will also need the Llama model weights.
- Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions or pull requests. They will be immediately deleted.
Once you have them, set LLAMA_MODEL_PATH to the path of the specific model to use. For convenience, you can link `models` in the BabyAGI repo to the folder where you have the Llama model weights. Then run the script with `LLM_MODEL=llama` or the `-l` argument.
Warning
This script is designed to be run continuously as part of a task management system. Running this script continuously can result in high API usage, so please use it responsibly. Additionally, the script requires the OpenAI API to be set up correctly, so make sure you have set up the API before running the script.
Contribution
Needless to say, BabyAGI is still in its infancy and thus we are still determining its direction and the steps to get there. Currently, a key design goal for BabyAGI is to be simple such that it's easy to understand and build upon. To maintain this simplicity, we kindly request that you adhere to the following guidelines when submitting PRs:
- Focus on small, modular modifications rather than extensive refactoring.
- When introducing new features, provide a detailed description of the specific use case you are addressing.
A note from @yoheinakajima (Apr 5th, 2023):
I know there are a growing number of PRs, appreciate your patience - as I am both new to GitHub/OpenSource, and did not plan my time availability accordingly this week. Re:direction, I've been torn on keeping it simple vs expanding - currently leaning towards keeping a core Baby AGI simple, and using this as a platform to support and promote different approaches to expanding this (eg. BabyAGIxLangchain as one direction). I believe there are various opinionated approaches that are worth exploring, and I see value in having a central place to compare and discuss. More updates coming shortly.
I am new to GitHub and open source, so please be patient as I learn to manage this project properly. I run a VC firm by day, so I will generally be checking PRs and issues at night after I get my kids down - which may not be every night. Open to the idea of bringing in support, will be updating this section soon (expectations, visions, etc). Talking to lots of people and learning - hang tight for updates!
BabyAGI Activity Report
To help the BabyAGI community stay informed about the project's progress, Blueprint AI has developed a GitHub activity summarizer for BabyAGI. This concise report displays a summary of all contributions to the BabyAGI repository over the past 7 days (continuously updated), making it easy for you to keep track of the latest developments.
To view the BabyAGI 7-day activity report, go here: https://app.blueprint.ai/github/yoheinakajima/babyagi
Inspired projects
In the short time since it was released, BabyAGI has inspired many projects. You can see them all here.
Backstory
BabyAGI is a pared-down version of the original Task-Driven Autonomous Agent (Mar 28, 2023) shared on Twitter. This version is down to 140 lines: 13 comments, 22 blanks, and 105 code. The name of the repo came up in the reaction to the original autonomous agent - the author does not mean to imply that this is AGI.
Made with love by @yoheinakajima, who happens to be a VC (would love to see what you're building!)