big-AGI
Generative AI suite powered by state-of-the-art models and providing advanced AI/AGI functions. It features AI personas, AGI functions, multi-model chats, text-to-image, voice, response streaming, code highlighting and execution, PDF import, presets for developers, and much more. Deploy on-prem or in the cloud.
Top Related Projects
- guidance: A guidance language for controlling large language models.
- text-generation-webui: A Gradio web UI for Large Language Models.
- Open-Assistant: OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- gpt4free: The official gpt4free repository | various collection of powerful language models
- AgentGPT: 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Quick Overview
Big-AGI is an open-source AI web interface designed to provide a powerful, customizable, and user-friendly platform for interacting with large language models. It offers a range of features including multi-modal conversations, voice interactions, and various AI models, making it a versatile tool for both personal and professional use.
Pros
- Highly customizable with support for multiple AI models and plugins
- User-friendly interface with features like voice interactions and image analysis
- Open-source and actively maintained, allowing for community contributions
- Supports both local and cloud-based AI model deployment
Cons
- Requires some technical knowledge to set up and configure
- Performance may vary depending on the chosen AI model and hardware
- Limited documentation for advanced customization
- May require significant computational resources for optimal performance
Getting Started
To get started with Big-AGI:
1. Clone the repository:
   git clone https://github.com/enricoros/big-AGI.git
2. Install dependencies:
   cd big-AGI
   npm install
3. Set up environment variables:
   - Create a .env.local file in the root directory
   - Add your API keys and configuration options (e.g., OPENAI_API_KEY=your_api_key_here); an optional key check is sketched right after this list
4. Run the development server:
   npm run dev
5. Open your browser and navigate to http://localhost:3000 to start using Big-AGI.
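If you want to confirm that the key in .env.local actually works before launching the UI, the standalone script below is one way to do it. It is not part of big-AGI: the file name, the tsx runner, and the reliance on Node 18+'s built-in fetch are assumptions; the only external call is to OpenAI's public /v1/models endpoint.
check-openai-key.ts (hypothetical):
// Standalone pre-flight check, not part of big-AGI: reads OPENAI_API_KEY from
// .env.local and lists the models the key can see via OpenAI's /v1/models endpoint.
// Requires Node 18+ (global fetch). Run with: npx tsx check-openai-key.ts
import { readFileSync } from 'node:fs';

function readEnvKey(name: string, file = '.env.local'): string | undefined {
  // Minimal .env parser: finds the first line of the form NAME=value
  const line = readFileSync(file, 'utf8')
    .split('\n')
    .find((l) => l.startsWith(`${name}=`));
  return line?.slice(name.length + 1).trim();
}

async function main(): Promise<void> {
  const apiKey = readEnvKey('OPENAI_API_KEY');
  if (!apiKey) throw new Error('OPENAI_API_KEY not found in .env.local');

  const res = await fetch('https://api.openai.com/v1/models', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`OpenAI API returned HTTP ${res.status}`);

  const { data } = (await res.json()) as { data: { id: string }[] };
  console.log(`Key accepted; ${data.length} models visible (e.g., ${data[0]?.id}).`);
}

main().catch((err) => {
  console.error(err instanceof Error ? err.message : err);
  process.exit(1);
});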
For more detailed instructions and configuration options, refer to the project's README and documentation on GitHub.
Competitor Comparisons
A guidance language for controlling large language models.
Pros of guidance
- Focused on providing a structured approach to prompt engineering and LLM interactions
- Offers a Python-based framework for creating complex, multi-step AI workflows
- Provides fine-grained control over LLM outputs with its constraint system
Cons of guidance
- Limited to backend development and lacks a user-friendly interface
- Requires more technical expertise to implement and use effectively
- May have a steeper learning curve for those new to prompt engineering
Code comparison
guidance:
import guidance

with guidance.models.OpenAI('text-davinci-002') as model:
    prompt = guidance('''
    Human: Write a poem about {{subject}}
    AI: Here's a poem about {{subject}}:
    {{#block ~}}{{gen 'poem' temperature=0.7 max_tokens=100}}{{/block}}
    ''')
    result = prompt(subject='artificial intelligence')
big-AGI:
const conversation = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Write a poem about artificial intelligence' },
];
const response = await api.sendConversation(conversation);
Summary
guidance focuses on structured prompt engineering with Python, while big-AGI provides a more user-friendly, web-based interface for AI interactions. guidance offers more control over LLM outputs but requires more technical knowledge, whereas big-AGI is more accessible to non-technical users but may offer less fine-grained control over AI responses.
A Gradio web UI for Large Language Models.
Pros of text-generation-webui
- Supports a wider range of models and architectures
- More customizable interface with extensive settings
- Includes training and fine-tuning capabilities
Cons of text-generation-webui
- Less user-friendly for beginners
- Requires more setup and configuration
- May have higher system requirements for some features
Code Comparison
text-generation-webui:
def generate_reply(
    question, state, stopping_strings=None, is_chat=False, for_ui=False
):
    # Complex generation logic
    # ...
big-AGI:
async function generateReply(
  question: string,
  context: ConversationContext,
): Promise<string> {
  // Simplified generation logic
  // ...
}
The code snippets show that text-generation-webui has a more complex generation function with additional parameters, while big-AGI uses a simpler, async approach. This reflects the overall difference in complexity and customization between the two projects.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Pros of Open-Assistant
- Larger community-driven project with more contributors
- Focuses on creating an open-source AI assistant from scratch
- Extensive dataset collection and model training efforts
Cons of Open-Assistant
- More complex setup and infrastructure requirements
- Slower development cycle due to its larger scope
- Less focus on user-friendly interfaces for non-technical users
Code Comparison
Open-Assistant (Python):
def generate_response(self, prompt: str) -> str:
    input_ids = self.tokenizer.encode(prompt, return_tensors="pt")
    output = self.model.generate(input_ids, max_length=100)
    return self.tokenizer.decode(output[0], skip_special_tokens=True)
big-AGI (JavaScript):
async function generateResponse(prompt) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }, // mark the request body as JSON
    body: JSON.stringify({ prompt }),
  });
  return response.json();
}
Open-Assistant aims to build an open-source AI assistant from the ground up, involving extensive data collection and model training. It has a larger community but requires more complex setup. big-AGI, on the other hand, focuses on creating a user-friendly interface for existing AI models, making it more accessible to non-technical users but with fewer customization options for the underlying AI.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Pros of FastChat
- More comprehensive and flexible API server implementation
- Supports a wider range of language models and architectures
- Better suited for large-scale deployments and production environments
Cons of FastChat
- Less focus on user interface and interactive features
- Requires more technical expertise to set up and configure
- May be overkill for simple chatbot applications or personal use
Code Comparison
FastChat:
from fastchat.model import load_model, get_conversation_template
model, tokenizer = load_model(model_path)
conv = get_conversation_template("vicuna")
conv.append_message(conv.roles[0], "Hello!")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
big-AGI:
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { HumanChatMessage, SystemChatMessage } from 'langchain/schema';
const chat = new ChatOpenAI({ temperature: 0 });
const response = await chat.call([
new SystemChatMessage('You are a helpful assistant.'),
new HumanChatMessage('Hello!'),
]);
The official gpt4free repository | various collection of powerful language models
Pros of gpt4free
- Offers free access to GPT-4 and other AI models
- Provides multiple API endpoints and providers
- Lightweight and easy to integrate into existing projects
Cons of gpt4free
- Less user-friendly interface compared to big-AGI
- Limited features and customization options
- Potential legal and ethical concerns regarding API usage
Code Comparison
gpt4free:
from g4f import ChatCompletion
response = ChatCompletion.create(model='gpt-3.5-turbo', messages=[
{'role': 'user', 'content': 'Hello, how are you?'}
])
print(response)
big-AGI:
import { useCompletion } from '../lib/use-completion';
const { complete } = useCompletion({
api: '/api/chat',
onResponse: (response) => {
// Handle response
},
});
Summary
gpt4free focuses on providing free access to AI models through various APIs, making it suitable for developers looking to integrate AI capabilities into their projects quickly. However, it may lack the polished user interface and advanced features offered by big-AGI. big-AGI, on the other hand, provides a more comprehensive and user-friendly experience but may require more setup and configuration. The choice between the two depends on the specific needs of the project and the desired level of customization and user experience.
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Pros of AgentGPT
- More user-friendly interface with a visually appealing design
- Supports multiple languages, making it accessible to a wider audience
- Offers a web-based platform, eliminating the need for local installation
Cons of AgentGPT
- Limited customization options compared to big-AGI
- Less frequent updates and potentially slower development cycle
- Fewer advanced features for power users
Code Comparison
AgentGPT (TypeScript):
const handleNewTask = (task: string) => {
const newTasks = [...tasks, { value: task, completed: false }];
setTasks(newTasks);
updateLocalStorage(newTasks);
};
big-AGI (JavaScript):
const handleNewMessage = (message) => {
setMessages((prevMessages) => [...prevMessages, message]);
localStorage.setItem('messages', JSON.stringify([...messages, message]));
};
Both snippets append to an immutable copy of an array and then persist it to localStorage. The AgentGPT handler wraps each task in an object with a completed flag, while the big-AGI handler stores the message as-is; note, however, that the big-AGI snippet writes localStorage from the possibly stale messages value captured by the closure, whereas AgentGPT persists the exact array it just computed.
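As a general React pattern (a sketch, not code from either repository), the stale-value risk in the big-AGI snippet can be avoided by using a functional update for the append and persisting from an effect that runs whenever the list changes:
useStoredMessages.ts (hypothetical):
import { useEffect, useState } from 'react';

type Message = { role: 'user' | 'assistant'; content: string };

// Hypothetical hook: keeps the message list in state and mirrors it to localStorage.
function useStoredMessages(storageKey = 'messages') {
  const [messages, setMessages] = useState<Message[]>(() => {
    const saved = localStorage.getItem(storageKey);
    return saved ? (JSON.parse(saved) as Message[]) : [];
  });

  // Persist whenever the list changes, so storage always matches the rendered state.
  useEffect(() => {
    localStorage.setItem(storageKey, JSON.stringify(messages));
  }, [messages, storageKey]);

  // Functional update: appends to the latest list, never to a stale closure value.
  const addMessage = (message: Message) =>
    setMessages((prev) => [...prev, message]);

  return { messages, addMessage };
}

export { useStoredMessages };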
README
BIG-AGI 🧠✨
Welcome to big-AGI, the AI suite for professionals that need function, form, simplicity, and speed. Powered by the latest models from 12 vendors and open-source servers, big-AGI offers best-in-class Chats, Beams, and Calls with AI personas, visualizations, coding, drawing, side-by-side chatting, and more -- all wrapped in a polished UX. Stay ahead of the curve with big-AGI. Pros & Devs love big-AGI.
Big-AGI 2 is launching Q4 2024. Be the first to experience it before the public release.
Or fork & run on Vercel
Quick links: roadmap · installation · documentation
What's New in 1.16.1...1.16.8 · Sep 13, 2024 (patch releases)
- 1.16.8: OpenAI ChatGPT-4o Latest (o1-preview and o1-mini are supported in Big-AGI 2)
- 1.16.7: OpenAI support for GPT-4o 2024-08-06
- 1.16.6: Groq support for Llama 3.1 models
- 1.16.5: GPT-4o Mini support
- 1.16.4: 8192 tokens support for Claude 3.5 Sonnet
- 1.16.3: Anthropic Claude 3.5 Sonnet model support
- 1.16.2: Improved web downloads as text, markdown, or HTML
- 1.16.2: Proper support for Gemini models
- 1.16.2: Added the latest Mistral model
- 1.16.2: Tokenizer support for gpt-4o
- 1.16.2: Updates to Beam
- 1.16.1: Support for the new OpenAI GPT-4o 2024-05-13 model
What's New in 1.16.0 · May 9, 2024 · Crystal Clear
- Beam core and UX improvements based on user feedback
- Chat cost estimation (enable it in Labs / hover the token counter)
- Save/load chat files with Ctrl+S / Ctrl+O on desktop
- Major enhancements to the Auto-Diagrams tool
- YouTube Transcriber Persona for chatting with video content, #500
- Improved formula rendering (LaTeX), and dark-mode diagrams, #508, #520
- Models update: Anthropic, Groq, Ollama, OpenAI, OpenRouter, Perplexity
- Code soft-wrap, chat text selection toolbar, 3x faster on Apple silicon, and more #517, #507
3,000 Commits Milestone · April 7, 2024
- Today we celebrate commit 3000 in just over one year, and going stronger
- Thanks everyone for your support and words of love for Big-AGI; we are committed to creating the best AI experiences for everyone.
What's New in 1.15.0 · April 1, 2024 · Beam
- Beam: the multi-model AI chat. Find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. #443
- Managed Deployments Auto-Configuration: simplify the UI models setup with backend-set models. #436
- Message Starring: star important messages within chats, to attach them later. #476
- Enhanced the default Persona
- Fixes to Gemini models and SVGs, improvements to UI and icons
- 1.15.1: Support for Gemini Pro 1.5 and OpenAI Turbo models
- Beast release, over 430 commits, 10,000+ lines changed: release notes, and changes v1.14.1...v1.15.0
What's New in 1.14.1 · March 7, 2024 · Modelmorphic
- Anthropic Claude-3 model family support. #443
- New Perplexity and Groq integration (thanks @Penagwin). #407, #427
- LocalAI deep integration, including support for model galleries
- Mistral Large and Google Gemini 1.5 support
- Performance optimizations: runs much faster, saves lots of power, reduces memory usage
- Enhanced UX with auto-sizing charts, refined search and folder functionalities, perfected scaling
- And with more UI improvements, documentation, bug fixes (20 tickets), and developer enhancements
What's New in 1.13.0 · Feb 8, 2024 · Multi + Mind
https://github.com/enricoros/big-AGI/assets/32999/01732528-730e-41dc-adc7-511385686b13
- Side-by-Side Split Windows: multitask with parallel conversations. #208
- Multi-Chat Mode: message everyone, all at once. #388
- Export tables as CSV: big thanks to @aj47. #392
- Adjustable text size: customize density. #399
- Dev2 Persona Technology Preview
- Better looking chats with improved spacing, fonts, and menus
- More: new video player, LM Studio tutorial (thanks @aj47), MongoDB support (thanks @ranfysvalle02), and speedups
What's New in 1.12.0 · Jan 26, 2024 · AGI Hotline
https://github.com/enricoros/big-AGI/assets/32999/95ceb03c-945d-4fdd-9a9f-3317beb54f3f
- Voice Calls: real-time voice call your personas out of the blue or in relation to a chat #354
- Support OpenAI 0125 Models. #364
- Rename or Auto-Rename chats. #222, #360
- More control over Link Sharing #356
- Accessibility to screen readers #358
- Export chats to Markdown #337
- Paste tables from Excel #286
- Ollama model updates and context window detection fixes #309
What's New in 1.11.0 · Jan 16, 2024 · Singularity
https://github.com/enricoros/big-AGI/assets/1590910/a6b8e172-0726-4b03-a5e5-10cfcb110c68
- Find chats: search in titles and content, with frequency ranking. #329
- Commands: command auto-completion (type '/'). #327
- Together AI inference platform support (good speed and newer models). #346
- Persona Creator: history, deletion, custom creation; fixed LLM API timeouts
- Enable adding up to five custom OpenAI-compatible endpoints
- Developer enhancements: new 'Actiles' framework
What's New in 1.10.0 · Jan 6, 2024 · The Year of AGI
- New UI: for both desktop and mobile, sets the stage for future scale. #201
- Conversation Folders: enhanced conversation organization. #321
- LM Studio support and improved token management
- Resizable panes in split-screen conversations.
- Large performance optimizations
- Developer enhancements: new UI framework, updated documentation for proxy settings on browserless/docker
For full details and former releases, check out the changelog.
Key Features ✨
- Chat · Call · Beam · Draw, ...
- Local & Cloud · Open & Closed · Cheap & Heavy · Google, Mistral, ...
- Attachments · Diagrams · Multi-Chat · Mobile-first UI
- Stored Locally · Easy self-Host · Local actions · Data = Gold
- AI Personas · Voice Modes · Screen Capture · Camera + OCR
You can easily configure 100s of AI models in big-AGI:
AI models | supported vendors |
---|---|
Opensource Servers | LocalAI (multimodal) · Ollama · Oobabooga |
Local Servers | LM Studio |
Multimodal services | Azure · Google Gemini · OpenAI |
Language services | Anthropic · Groq · Mistral · OpenRouter · Perplexity · Together AI |
Image services | Prodia (SDXL) |
Speech services | ElevenLabs (Voice synthesis / cloning) |
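For the local, open-source servers listed above, it can help to confirm the server is actually reachable before adding it as a model source in big-AGI. The script below is a standalone sketch, not part of big-AGI; it assumes Node 18+ (built-in fetch) and uses Ollama's documented default port and /api/tags model-listing endpoint.
check-ollama.ts (hypothetical):
// Standalone pre-flight check, not part of big-AGI: lists the models a local
// Ollama instance has pulled, so you know the server is up before configuring it.
const OLLAMA_URL = process.env.OLLAMA_URL ?? 'http://localhost:11434'; // Ollama's default port

async function listLocalModels(): Promise<string[]> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`); // Ollama's model-listing endpoint
  if (!res.ok) throw new Error(`Ollama responded with HTTP ${res.status}`);
  const body = (await res.json()) as { models: { name: string }[] };
  return body.models.map((m) => m.name);
}

listLocalModels()
  .then((names) => console.log(`Ollama is up with ${names.length} model(s): ${names.join(', ')}`))
  .catch((err) => console.error('Could not reach Ollama:', err instanceof Error ? err.message : err));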
Add extra functionality with these integrations:
More | integrations |
---|---|
Web Browse | Browserless · Puppeteer-based |
Web Search | Google CSE |
Code Editors | CodePen · StackBlitz · JSFiddle |
Sharing | Paste.gg (Paste chats) |
Tracking | Helicone (LLM Observability) |
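As an illustration of the Web Search row above: Google CSE (Programmable Search Engine) exposes a JSON API that a search integration can call directly. The sketch below is standalone and not big-AGI's internal code; GOOGLE_CSE_KEY and GOOGLE_CSE_ID are assumed environment-variable names, while the endpoint and the key/cx/q parameters come from Google's documented Custom Search JSON API.
search-cse.ts (hypothetical):
// Standalone example of querying the Google Custom Search JSON API.
const KEY = process.env.GOOGLE_CSE_KEY!; // API key (assumed variable name)
const CX = process.env.GOOGLE_CSE_ID!;   // Programmable Search Engine ID (assumed variable name)

async function webSearch(query: string): Promise<{ title: string; link: string }[]> {
  const url = new URL('https://www.googleapis.com/customsearch/v1');
  url.searchParams.set('key', KEY);
  url.searchParams.set('cx', CX);
  url.searchParams.set('q', query);

  const res = await fetch(url);
  if (!res.ok) throw new Error(`CSE request failed: HTTP ${res.status}`);
  const body = (await res.json()) as { items?: { title: string; link: string }[] };
  return (body.items ?? []).map((i) => ({ title: i.title, link: i.link }));
}

webSearch('big-AGI github').then((results) => console.table(results));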
Installation
To get started with big-AGI, follow our comprehensive Installation Guide. The guide covers various installation options, whether you're spinning it up on your local computer, deploying on Vercel, on Cloudflare, or rolling it out through Docker.
Whether you're a developer, system integrator, or enterprise user, you'll find step-by-step instructions to set up big-AGI quickly and easily.
Or bring your API keys and jump straight into our free instance on big-AGI.com.
Get Involved!
- Chat with us on Discord
- Give us a star on GitHub
- Do you like code? You'll love this gem of a project! Pick up a task, from easy to pro
- Got a feature suggestion? Add your roadmap ideas
- Deploy your fork for your friends and family, or customize it for work
2023-2024 · Enrico Ros x big-AGI · License: MIT