
mckaywrigley/chatbot-ui

AI chat for every model.


Top Related Projects

  • openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision
  • ggerganov/whisper.cpp: Port of OpenAI's Whisper model in C/C++
  • microsoft/JARVIS: JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
  • microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
  • LAION-AI/Open-Assistant: OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

Quick Overview

Chatbot UI is an open-source chat interface for AI models, primarily designed to work with OpenAI's GPT models. It provides a user-friendly, customizable frontend for interacting with AI chatbots, allowing users to create and manage conversations, customize settings, and integrate with various AI models.
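To make the integration concrete, a frontend like this forwards the conversation history to a model provider. The sketch below is not Chatbot UI's actual code; it only assumes the standard OpenAI chat completions endpoint and model name:

```javascript
// Build the JSON body for an OpenAI chat completions request
// from a list of { role, content } messages.
function buildChatRequest(messages, model = "gpt-3.5-turbo") {
  return {
    model,
    messages: messages.map(({ role, content }) => ({ role, content })),
  };
}

// Send the request; expects OPENAI_API_KEY in the environment.
async function sendChat(messages) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(messages)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

A chat UI then only needs to append the user's message to the history, call a function like this, and render the reply.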

Pros

  • Clean and intuitive user interface
  • Highly customizable, with support for themes and various settings
  • Open-source and actively maintained
  • Easy integration with OpenAI's API and potential for other AI model integrations

Cons

  • Requires API key setup, which may be challenging for non-technical users
  • Limited to text-based interactions (no voice or image support)
  • May require additional configuration for optimal performance
  • Dependent on external AI services, which may have associated costs

Getting Started

To set up Chatbot UI locally:

  1. Clone the repository:

    git clone https://github.com/mckaywrigley/chatbot-ui.git
    
  2. Navigate to the project directory:

    cd chatbot-ui
    
  3. Install dependencies:

    npm install
    
  4. Create a .env.local file in the root directory and add your OpenAI API key:

    OPENAI_API_KEY=your_api_key_here
    
  5. Start the development server:

    npm run dev
    
  6. Open your browser and navigate to http://localhost:3000 to use the Chatbot UI.

Competitor Comparisons

openai/whisper

Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Specialized in speech recognition and transcription
  • Supports multiple languages and accents
  • Backed by OpenAI's extensive research and resources

Cons of Whisper

  • Limited to audio processing tasks
  • Requires more computational resources for processing
  • Less user-friendly for non-technical users

Code Comparison

Whisper (Python):

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Chatbot UI (JavaScript):

// Illustrative chat call via LangChain (Chatbot UI's own code differs)
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanChatMessage } from "langchain/schema";

const chat = new ChatOpenAI({ temperature: 0 });
const response = await chat.call([
  new HumanChatMessage("Hello, how are you?"),
]);
console.log(response);

Key Differences

  • Whisper focuses on speech-to-text, while Chatbot UI is a general-purpose chatbot interface
  • Whisper is implemented in Python, Chatbot UI uses JavaScript and React
  • Whisper processes audio files, Chatbot UI handles text-based conversations
  • Chatbot UI offers a more interactive user experience with a graphical interface
  • Whisper requires more technical knowledge to implement and use effectively

ggerganov/whisper.cpp

Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Focuses on speech recognition and transcription, offering a specialized tool for audio processing
  • Implements OpenAI's Whisper model in C++, potentially providing better performance and lower resource usage
  • Can be used as a standalone application or integrated into other projects

Cons of whisper.cpp

  • Limited to speech recognition functionality, lacking the versatility of a full chatbot UI
  • Requires more technical knowledge to set up and use compared to a web-based chatbot interface
  • May have a steeper learning curve for users unfamiliar with C++ or command-line tools

Code Comparison

whisper.cpp:

// Example of loading and running the Whisper model
whisper_context* ctx = whisper_init_from_file("ggml-base.en.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, params, pcmf32.data(), pcmf32.size());
// Results are read back per segment via whisper_full_n_segments()
// and whisper_full_get_segment_text(), not written to a file

chatbot-ui:

// Example of rendering a chat message in React
const ChatMessage = ({ message }) => (
  <div className={`flex ${message.role === 'assistant' ? 'justify-start' : 'justify-end'}`}>
    <div className="bg-gray-200 rounded-lg p-2 max-w-md">{message.content}</div>
  </div>
);

While both projects serve different purposes, whisper.cpp excels in speech recognition tasks, whereas chatbot-ui provides a user-friendly interface for text-based conversations. The code examples highlight their distinct focuses: low-level audio processing versus front-end UI components.

microsoft/JARVIS

JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf

Pros of JARVIS

  • More comprehensive AI agent framework with multi-modal capabilities
  • Supports a wider range of tasks including vision, speech, and robotics
  • Larger community and backing from Microsoft

Cons of JARVIS

  • More complex setup and steeper learning curve
  • Potentially overkill for simple chatbot applications
  • Less focused on pure conversational UI

Code Comparison

JARVIS (Python):

# Illustrative pseudocode; JARVIS does not publish a pip package with this API
from jarvis import Agent, Task

agent = Agent()
task = Task("Analyze image and describe contents")
result = agent.execute(task)
print(result)

chatbot-ui (JavaScript):

// Illustrative pseudocode; Chatbot UI is an app, not an npm package with this API
import { ChatbotUI } from 'chatbot-ui';

const chatbot = new ChatbotUI();
chatbot.sendMessage("Describe this image");
chatbot.on('response', (message) => {
  console.log(message);
});

Summary

JARVIS is a more powerful and versatile AI framework, suitable for complex multi-modal tasks. chatbot-ui is simpler and more focused on conversational interfaces. JARVIS offers broader capabilities but requires more setup, while chatbot-ui provides a streamlined solution for chat-based applications. The choice between them depends on the specific project requirements and the desired level of complexity.

microsoft/DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Highly optimized for large-scale deep learning training and inference
  • Supports distributed training across multiple GPUs and nodes
  • Offers advanced memory optimization techniques for handling large models

Cons of DeepSpeed

  • Steeper learning curve due to its focus on high-performance computing
  • Requires more setup and configuration for optimal performance
  • Less suitable for simple chatbot applications or quick prototyping

Code Comparison

DeepSpeed (optimization for large models):

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
    config=ds_config
)

Chatbot UI (simple chat interface):

const ChatMessage = ({ message }) => (
  <div className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'}`}>
    <div className="message">{message.content}</div>
  </div>
);

Summary

DeepSpeed is a powerful library for optimizing large-scale deep learning models, offering advanced features for distributed training and memory management. It's ideal for researchers and organizations working with massive language models or complex AI systems. On the other hand, Chatbot UI provides a simple, user-friendly interface for building chat applications, making it more suitable for developers focusing on creating interactive chatbots or prototyping conversational AI. While DeepSpeed excels in performance and scalability, Chatbot UI shines in ease of use and rapid development of chat interfaces.

huggingface/transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Pros of transformers

  • Comprehensive library for state-of-the-art NLP models
  • Extensive documentation and community support
  • Supports multiple deep learning frameworks (PyTorch, TensorFlow)

Cons of transformers

  • Steeper learning curve for beginners
  • Requires more computational resources
  • Less focused on UI/UX aspects of chatbot development

Code comparison

transformers:

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

chatbot-ui:

// Simplified excerpt: message state inside a React chat component
import { useState } from 'react'
import { Message } from '@/types'
import { ChatInput } from '@/components/ChatInput'

const [messages, setMessages] = useState<Message[]>([])

Key differences

  • transformers focuses on providing a wide range of NLP models and tools
  • chatbot-ui emphasizes creating a user-friendly interface for chatbots
  • transformers is primarily Python-based, while chatbot-ui uses JavaScript/React
  • transformers offers more flexibility in model selection and fine-tuning
  • chatbot-ui provides a ready-to-use UI framework for chatbot applications

Both repositories serve different purposes in the AI/NLP ecosystem, with transformers being more suitable for researchers and developers working on complex NLP tasks, while chatbot-ui is better suited for those looking to quickly implement a chatbot interface.

LAION-AI/Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

Pros of Open-Assistant

  • Open-source and community-driven project, allowing for broader collaboration and contributions
  • Aims to create a fully open AI assistant, potentially offering more transparency and customization
  • Supports multiple languages and has a focus on multilingual capabilities

Cons of Open-Assistant

  • Still in early development stages, which may result in less stability and fewer features
  • Requires more technical expertise to set up and use compared to Chatbot UI
  • May have a steeper learning curve for non-technical users

Code Comparison

Open-Assistant (Python):

# Illustrative pseudocode; Open-Assistant does not publish an open_assistant package with this API
from open_assistant import OpenAssistant

assistant = OpenAssistant()
response = assistant.generate_response("Hello, how are you?")
print(response)

Chatbot UI (JavaScript):

// Illustrative pseudocode; Chatbot UI is an app, not an npm package with this API
import { ChatbotUI } from 'chatbot-ui';

const chatbot = new ChatbotUI();
chatbot.sendMessage("Hello, how are you?")
  .then(response => console.log(response));

While Open-Assistant focuses on creating an open-source AI assistant with multilingual support, Chatbot UI provides a more user-friendly interface for implementing chatbots. Open-Assistant may offer more flexibility and customization options, but Chatbot UI is likely easier to set up and use for those with less technical expertise. The choice between the two depends on the specific needs of the project and the user's technical skills.


README

Chatbot UI

The open-source AI chat app for everyone.


Demo

View the latest demo here.

Updates

Hey everyone! I've heard your feedback and am working hard on a big update.

Things like simpler deployment, better backend compatibility, and improved mobile layouts are on their way.

Be back soon.

-- Mckay

Official Hosted Version

Use Chatbot UI without having to host it yourself!

Find the official hosted version of Chatbot UI here.

Sponsor

If you find Chatbot UI useful, please consider sponsoring me to support my open-source work :)

Issues

We restrict "Issues" to actual issues related to the codebase.

We're getting an excessive number of issues that amount to feature requests, cloud provider problems, etc.

If you are having issues with things like setup, please refer to the "Help" section in the "Discussions" tab above.

Issues unrelated to the codebase will likely be closed immediately.

Discussions

We highly encourage you to participate in the "Discussions" tab above!

Discussions are a great place to ask questions, share ideas, and get help.

Odds are if you have a question, someone else has the same question.

Legacy Code

Chatbot UI was recently updated to its 2.0 version.

The code for 1.0 can be found on the legacy branch.

Updating

In your terminal at the root of your local Chatbot UI repository, run:

npm run update

If you run a hosted instance you'll also need to run:

npm run db-push

to apply the latest migrations to your live database.

Local Quickstart

Follow these steps to get your own Chatbot UI instance running locally.

You can watch the full video tutorial here.

1. Clone the Repo

git clone https://github.com/mckaywrigley/chatbot-ui.git

2. Install Dependencies

Open a terminal in the root directory of your local Chatbot UI repository and run:

npm install

3. Install Supabase & Run Locally

Why Supabase?

Previously, we used local browser storage to store data. However, this was not a good solution for a few reasons:

  • Security issues
  • Limited storage
  • Limited support for multi-modal use cases

We now use Supabase because it's easy to use, it's open-source, it's Postgres, and it has a free tier for hosted instances.

We will support other providers in the future to give you more options.
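Because Supabase exposes Postgres through an auto-generated REST API (PostgREST), reading data back is just an HTTP call. The sketch below is illustrative only: the table name "messages" is hypothetical, and the URL and anon key are the values reported by supabase status:

```javascript
// Build a request against Supabase's auto-generated REST API (PostgREST).
// supabaseUrl and anonKey are the values reported by `supabase status`.
function buildSupabaseQuery(supabaseUrl, anonKey, table) {
  return {
    url: `${supabaseUrl}/rest/v1/${table}?select=*`,
    headers: { apikey: anonKey, Authorization: `Bearer ${anonKey}` },
  };
}

// Fetch all rows from a hypothetical "messages" table.
async function fetchRows(supabaseUrl, anonKey, table) {
  const { url, headers } = buildSupabaseQuery(supabaseUrl, anonKey, table);
  const res = await fetch(url, { headers });
  return res.json();
}
```

In practice the app would use the supabase-js client rather than raw fetch calls, but the underlying requests look like this.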

1. Install Docker

You will need to install Docker to run Supabase locally. You can download it here for free.

2. Install Supabase CLI

macOS/Linux

brew install supabase/tap/supabase

Windows

scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase

3. Start Supabase

In your terminal at the root of your local Chatbot UI repository, run:

supabase start

4. Fill in Secrets

1. Environment Variables

In your terminal at the root of your local Chatbot UI repository, run:

cp .env.local.example .env.local

Get the required values by running:

supabase status

Note: Use the "API URL" value from supabase status for NEXT_PUBLIC_SUPABASE_URL.

Now go to your .env.local file and fill in the values.

If an API key environment variable is set in .env.local, the corresponding input will be disabled in the user settings.
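As a sketch of how that gating might work (illustrative only, not Chatbot UI's actual component code):

```javascript
// Illustrative sketch: a key input is editable only when no server-side
// environment variable already provides that key.
function isKeyInputEnabled(env, keyName) {
  return !env[keyName];
}
```

So with OPENAI_API_KEY present in the environment, the OpenAI key field in the settings UI would be locked.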

2. SQL Setup

In the first migration file supabase/migrations/20240108234540_setup.sql you will need to replace 2 values with the values you got above:

  • project_url (line 53): http://supabase_kong_chatbotui:8000 (the default) can remain unchanged if you don't change your project_id in the config.toml file
  • service_role_key (line 54): use the value you got from running supabase status

This prevents issues with storage files not being deleted properly.

5. Install Ollama (optional for local models)

Follow the instructions here.

6. Run app locally

In your terminal at the root of your local Chatbot UI repository, run:

npm run chat

Your local instance of Chatbot UI should now be running at http://localhost:3000. Be sure to use a compatible Node.js version (e.g. v18).

You can view your backend GUI at http://localhost:54323/project/default/editor.

Hosted Quickstart

Follow these steps to get your own Chatbot UI instance running in the cloud.

Video tutorial coming soon.

1. Follow Local Quickstart

Repeat steps 1-4 in "Local Quickstart" above.

You will want separate repositories for your local and hosted instances.

Create a new repository for your hosted instance of Chatbot UI on GitHub and push your code to it.

2. Setup Backend with Supabase

1. Create a new project

Go to Supabase and create a new project.

2. Get Project Values

Once you are in the project dashboard, click on the "Project Settings" icon tab on the far bottom left.

Here you will get the values for the following environment variables:

  • Project Ref: Found in "General settings" as "Reference ID"

  • Project ID: Found in the URL of your project dashboard (Ex: https://supabase.com/dashboard/project/<YOUR_PROJECT_ID>/settings/general)

While still in "Settings" click on the "API" text tab on the left.

Here you will get the values for the following environment variables:

  • Project URL: Found in "API Settings" as "Project URL"

  • Anon key: Found in "Project API keys" as "anon public"

  • Service role key: Found in "Project API keys" as "service_role" (Reminder: Treat this like a password!)

3. Configure Auth

Next, click on the "Authentication" icon tab on the far left.

In the text tabs, click on "Providers" and make sure "Email" is enabled.

We recommend turning off "Confirm email" for your own personal instance.

4. Connect to Hosted DB

Open up your repository for your hosted instance of Chatbot UI.

In the first migration file supabase/migrations/20240108234540_setup.sql you will need to replace 2 values with the values you got above:

  • project_url (line 53): Use the Project URL value from above
  • service_role_key (line 54): Use the Service role key value from above

Now, open a terminal in the root directory of your local Chatbot UI repository. We will execute a few commands here.

Login to Supabase by running:

supabase login

Next, link your project by running the following command with the "Project ID" you got above:

supabase link --project-ref <project-id>

Your project should now be linked.

Finally, push your database to Supabase by running:

supabase db push

Your hosted database should now be set up!

3. Setup Frontend with Vercel

Go to Vercel and create a new project.

In the setup page, import your GitHub repository for your hosted instance of Chatbot UI. Within the project Settings, in the "Build & Development Settings" section, switch Framework Preset to "Next.js".

In environment variables, add the following from the values you got above:

  • NEXT_PUBLIC_SUPABASE_URL
  • NEXT_PUBLIC_SUPABASE_ANON_KEY
  • SUPABASE_SERVICE_ROLE_KEY
  • NEXT_PUBLIC_OLLAMA_URL (only needed when using local Ollama models; default: http://localhost:11434)

You can also add API keys as environment variables.

  • OPENAI_API_KEY
  • AZURE_OPENAI_API_KEY
  • AZURE_OPENAI_ENDPOINT
  • AZURE_GPT_45_VISION_NAME

For the full list of environment variables, refer to the '.env.local.example' file. If an environment variable for an API key is set, the corresponding input will be disabled in the user settings.

Click "Deploy" and wait for your frontend to deploy.

Once deployed, you should be able to use your hosted instance of Chatbot UI via the URL Vercel gives you.

Contributing

We are working on a guide for contributing.

Contact

Message Mckay on Twitter/X