Top Related Projects
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- chatbot-ui: AI chat for any model.
- NextChat: ✨ Light and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
- ChuanhuChatGPT: GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
- text-generation-webui: LLM UI with advanced features, easy setup, and multiple backend support.
Quick Overview
The `huggingface/chat-ui` repository provides a user interface (UI) for interacting with large language models; it is the SvelteKit app behind Hugging Face's HuggingChat. It offers a web-based chat interface that lets users hold conversations with AI models, making it easier to explore and experiment with these powerful language technologies.
Pros
- Intuitive Chat Interface: The UI provides a familiar and user-friendly chat interface, making it accessible for both technical and non-technical users to interact with language models.
- Supports Multiple Models: The project works with a wide range of language models exposed through Hugging Face and other OpenAI-compatible providers, allowing users to experiment with different model capabilities.
- Easy Deployment: The project is designed to be easily deployed, either locally or on a web server, enabling users to quickly set up and start using the chat interface.
- Open-Source and Customizable: As an open-source project, the code is available for users to inspect, modify, and extend to fit their specific needs or preferences.
Cons
- Limited Functionality: The current version of the project focuses primarily on the chat interface, and may lack advanced features or customization options that some users might desire.
- Dependency on the Hugging Face Ecosystem: The project is closely tied to Hugging Face's tooling and services, so users need some familiarity with that ecosystem to fully utilize the chat UI.
- Potential Performance Limitations: Depending on the language model and the user's hardware, the chat interface may experience performance issues, especially when handling large or complex inputs.
- Ongoing Maintenance: As an open-source project, the long-term maintenance and development of the chat UI may depend on the continued involvement of the Hugging Face community.
Code Examples
The `huggingface/chat-ui` project is a web application; the repository itself is a SvelteKit app written in TypeScript. The React-style examples below use the generic `@chatui/core` component library and are illustrative sketches of the chat-UI pattern rather than excerpts from the repository:
- Rendering the Chat Interface:
// Illustrative only: component names assume a generic chat component library.
import React from 'react';
import { ChatContainer, ChatHeader, ChatInput, ChatMessages } from '@chatui/core';

const ChatUI = () => {
  return (
    <ChatContainer>
      <ChatHeader title="Chat with AI" /> {/* title bar */}
      <ChatMessages /> {/* scrolling message list */}
      <ChatInput /> {/* text box and send button */}
    </ChatContainer>
  );
};

export default ChatUI;
This code sets up the basic structure of the chat interface, including the header, message display, and input field.
- Handling User Input:
import { useCallback, useState } from 'react';
// Illustrative only: the ChatInput component and useSendMessage hook
// assume a generic chat component library.
import { ChatInput, useSendMessage } from '@chatui/core';

// Named differently from the imported ChatInput so it does not render itself recursively.
const MessageComposer = () => {
  const [inputValue, setInputValue] = useState('');
  const { sendMessage } = useSendMessage();

  const handleSendMessage = useCallback(() => {
    if (inputValue.trim()) {
      sendMessage(inputValue);
      setInputValue(''); // clear the box after sending
    }
  }, [inputValue, sendMessage]);

  return (
    <ChatInput
      value={inputValue}
      onChange={setInputValue}
      onSend={handleSendMessage}
    />
  );
};

export default MessageComposer;
This code handles the user's input, allowing them to type a message and send it to the chat interface.
- Integrating with a Language Model:
import { useEffect } from 'react';
// Illustrative only: these hooks assume a generic chat library plus a
// hypothetical useModel hook defined elsewhere in the app.
import {
  ChatContainer, ChatHeader, ChatMessages, ChatInput,
  useSendMessage, useReceiveMessage,
} from '@chatui/core';
import { useModel } from './useModel';

const ChatUI = () => {
  const { sendMessage } = useSendMessage();
  const { receiveMessage } = useReceiveMessage();
  const { model, loadModel } = useModel();

  useEffect(() => {
    loadModel('gpt2'); // load the model once on mount
  }, [loadModel]);

  useEffect(() => {
    if (model) {
      // Answer each incoming user message with generated text.
      receiveMessage(async (message) => {
        const response = await model.generateText(message.content);
        sendMessage(response);
      });
    }
  }, [model, receiveMessage, sendMessage]);

  return (
    <ChatContainer>
      <ChatHeader title="Chat with AI" />
      <ChatMessages />
      <ChatInput />
    </ChatContainer>
  );
};

export default ChatUI;

This code wires the chat interface to a language model: it loads the model on mount, listens for incoming user messages, and replies with the model's generated text.
Competitor Comparisons
FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Pros of FastChat
- More comprehensive, offering a full stack for training, serving, and evaluating LLMs
- Supports a wider range of models, including Vicuna, Alpaca, and LLaMA
- Provides advanced features like model quantization and multi-GPU inference
Cons of FastChat
- Steeper learning curve due to its more complex architecture
- Requires more setup and configuration compared to chat-ui's simpler approach
- May be overkill for users who only need a basic chat interface
Code Comparison
FastChat (server setup):
# Simplified sketch: real FastChat deployments typically launch these
# components as separate processes via python3 -m fastchat.serve.*.
from fastchat.serve.controller import Controller
from fastchat.serve.model_worker import ModelWorker
from fastchat.serve.openai_api_server import OpenAIAPIServer

worker_addr = "http://localhost:21002"  # placeholder worker address
model_path = "lmsys/vicuna-7b-v1.5"     # placeholder model checkpoint

controller = Controller()
worker = ModelWorker(controller.controller_addr, worker_addr, model_path)
api_server = OpenAIAPIServer(controller)
chat-ui (illustrative usage; Chat UI ships as a self-hosted SvelteKit app rather than an npm package):

// Hypothetical embedding API, shown only for comparison.
import { ChatUI } from "@huggingface/chat-ui";

const chatUI = new ChatUI({
  apiKey: "your-api-key",
  model: "gpt-3.5-turbo",
});

chatUI.render("#chat-container");
Both repositories offer chat interfaces for language models, but FastChat provides a more comprehensive solution for deploying and managing LLMs, while chat-ui focuses on a simpler, front-end oriented approach. The choice between them depends on the specific requirements and complexity of the project.
chatbot-ui: AI chat for any model.
Pros of chatbot-ui
- More customizable UI with themes and layout options
- Supports multiple AI models and providers (OpenAI, Anthropic, etc.)
- Advanced features like conversation branching and custom instructions
Cons of chatbot-ui
- Requires more setup and configuration
- Less integrated with Hugging Face ecosystem
- May have a steeper learning curve for beginners
Code Comparison
chatbot-ui:
const handleNewConversation = () => {
  const newConversation: Conversation = {
    id: uuidv4(),
    name: t('New Conversation'),
    messages: [],
    model: OpenAIModels[defaultModelId],
    prompt: DEFAULT_SYSTEM_PROMPT,
    temperature: DEFAULT_TEMPERATURE,
    folderId: null,
  };

  dispatch({ field: 'selectedConversation', value: newConversation });
  dispatch({ field: 'conversations', value: [...conversations, newConversation] });
};
chat-ui (illustrative server-side sketch; the repo is actually a SvelteKit/TypeScript app):

def create_conversation(user_id: str) -> Conversation:
    conversation = Conversation(
        id=str(uuid.uuid4()),
        created_at=datetime.utcnow(),
        user_id=user_id,
        messages=[],
    )
    db.add(conversation)
    db.commit()
    return conversation

The code snippets show different approaches to creating new conversations. chatbot-ui uses TypeScript and manages state with a dispatch function, while the chat-ui sketch persists a new conversation record directly to a database (the real app stores conversations in MongoDB).
NextChat: ✨ Light and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
Pros of NextChat
- More modern UI with a sleek, minimalist design
- Built-in support for multiple language models, including GPT-4
- Easy deployment options, including one-click deployment to Vercel
Cons of NextChat
- Less customizable than chat-ui, with fewer configuration options
- Smaller community and less frequent updates compared to chat-ui
- Limited documentation and examples for advanced use cases
Code Comparison
NextChat (React-based component):
<ChatMessage
  key={message.id}
  message={message}
  user={user}
  onEdit={handleEdit}
  onDelete={handleDelete}
/>
chat-ui (Svelte-based component; the repo is built with SvelteKit, so equivalent usage looks like this):

<ChatMessage
  message={message}
  isUser={message.role === 'user'}
  avatar={getAvatar(message.role)}
  on:retry={() => retry(message)}
/>

Both projects use component-based architectures, but NextChat uses React while chat-ui is built on Svelte via SvelteKit. NextChat's components tend to have more props and event handlers, reflecting its focus on interactivity; chat-ui's components are generally simpler, with a greater emphasis on customization through configuration files.
ChuanhuChatGPT: GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
Pros of ChuanhuChatGPT
- Supports multiple language models, including GPT-3.5, GPT-4, and Claude
- Offers a user-friendly interface with customizable themes
- Includes features like conversation history and API key management
Cons of ChuanhuChatGPT
- Less focus on enterprise-level deployment and scalability
- May have a steeper learning curve for developers unfamiliar with Gradio
Code Comparison
ChuanhuChatGPT:
def predict(self, inputs, max_length, top_p, temperature, history, past_key_values):
    response, history = self.model.chat(
        self.tokenizer, inputs, history, max_length=max_length,
        top_p=top_p, temperature=temperature, past_key_values=past_key_values
    )
    return response, history
chat-ui:
const handleSubmit = async (message: string) => {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message })
  });
  const data = await response.json();
  return data.response;
};
The code snippets show different approaches to handling chat interactions. ChuanhuChatGPT uses a Python-based prediction function, while chat-ui employs a JavaScript API call for message handling.
text-generation-webui: LLM UI with advanced features, easy setup, and multiple backend support.
Pros of text-generation-webui
- More extensive model support, including local models and various architectures
- Advanced features like fine-tuning, training, and model merging
- Highly customizable interface with multiple chat modes and extensions
Cons of text-generation-webui
- Steeper learning curve due to more complex setup and configuration
- Potentially higher resource requirements for running local models
- Less focus on cloud-based deployment and scalability
Code Comparison
text-generation-webui:
def generate_reply(
    prompt, state, stopping_strings=None, is_chat=False, escape_html=False
):
    # Complex generation logic with multiple parameters and options
    # ...

chat-ui:

async function generateReply(message, conversation) {
  // Simpler generation logic focused on API calls
  // ...
}
The code comparison highlights the difference in complexity and focus between the two projects. text-generation-webui offers more advanced generation options, while chat-ui emphasizes simplicity and cloud integration.
README
Chat UI
A chat interface for LLMs. It is a SvelteKit app and it powers the HuggingChat app on hf.co/chat.
[!NOTE] Chat UI only supports OpenAI-compatible APIs via `OPENAI_BASE_URL` and the `/models` endpoint. Provider-specific integrations (the legacy `MODELS` env var, GGUF discovery, embeddings, web-search helpers, etc.) have been removed, but any service that speaks the OpenAI protocol (llama.cpp server, Ollama, OpenRouter, etc.) will work by default.
[!NOTE] The old version is still available on the `legacy` branch.
Quickstart
Chat UI speaks to OpenAI-compatible APIs only. The fastest way to get running is with the Hugging Face Inference Providers router plus your personal Hugging Face access token.
Step 1: Create `.env.local`:

OPENAI_BASE_URL=https://router.huggingface.co/v1
OPENAI_API_KEY=hf_************************
# Fill in once you pick a database option below
MONGODB_URL=
`OPENAI_API_KEY` can be any credential accepted by the OpenAI-compatible endpoint you plan to call. Pick the combo that matches your setup and drop the values into `.env.local`:
| Provider | Example `OPENAI_BASE_URL` | Example key env |
| --- | --- | --- |
| Hugging Face Inference Providers router | `https://router.huggingface.co/v1` | `OPENAI_API_KEY=hf_xxx` (or `HF_TOKEN` legacy alias) |
| llama.cpp server (`llama.cpp --server --api`) | `http://127.0.0.1:8080/v1` | `OPENAI_API_KEY=sk-local-demo` (any string works; llama.cpp ignores it) |
| Ollama (with OpenAI-compatible bridge) | `http://127.0.0.1:11434/v1` | `OPENAI_API_KEY=ollama` |
| OpenRouter | `https://openrouter.ai/api/v1` | `OPENAI_API_KEY=sk-or-v1-...` |
Check the root `.env` template for the full list of optional variables you can override.
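Before moving on, you can sanity-check the endpoint and key with a short script. This is a minimal sketch (not part of Chat UI), assuming Node 18+ for the global fetch; the file name is hypothetical, and the two variables are the ones from `.env.local` above:

// check-endpoint.ts - verify OPENAI_BASE_URL and OPENAI_API_KEY before launching.
const baseUrl = process.env.OPENAI_BASE_URL ?? 'https://router.huggingface.co/v1';
const apiKey = process.env.OPENAI_API_KEY ?? '';

async function listModels(): Promise<void> {
  // Chat UI populates its model list from this same /models endpoint.
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Endpoint check failed: ${res.status} ${res.statusText}`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  console.log(body.data.map((m) => m.id)); // the ids Chat UI will offer
}

listModels().catch((err) => { console.error(err); process.exit(1); });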
Step 2: Choose where MongoDB lives. Either provision a managed cluster (for example MongoDB Atlas) or run a local container; both approaches are described in Database Options below. After you have the URI, drop it into `MONGODB_URL` (and, if desired, set `MONGODB_DB_NAME`).
Step 3: Install and launch the dev server:
git clone https://github.com/huggingface/chat-ui
cd chat-ui
npm install
npm run dev -- --open
You now have Chat UI running against the Hugging Face router; if you chose Atlas, you don't even need to host MongoDB yourself.
Database Options
Chat history, users, settings, files, and stats all live in MongoDB. You can point Chat UI at any MongoDB 6/7 deployment.
MongoDB Atlas (managed)
- Create a free cluster at mongodb.com.
- Add your IP (or `0.0.0.0/0` for development) to the network access list.
- Create a database user and copy the connection string.
- Paste that string into `MONGODB_URL` in `.env.local`. Keep the default `MONGODB_DB_NAME=chat-ui` or change it per environment.
Atlas keeps MongoDB off your laptop, which is ideal for teams or cloud deployments.
Local MongoDB (container)
If you prefer to run MongoDB locally:
docker run -d -p 27017:27017 --name mongo-chatui mongo:latest
Then set `MONGODB_URL=mongodb://localhost:27017` in `.env.local`. You can also supply `MONGO_STORAGE_PATH` if you want Chat UI's fallback in-memory server to persist under a specific folder.
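To confirm the database is reachable before launching, you can ping it with the official mongodb Node driver. A minimal sketch, assuming the driver is installed (`npm install mongodb`) and `MONGODB_URL` is set as above:

// check-mongo.ts - ping the MongoDB deployment referenced by MONGODB_URL.
import { MongoClient } from 'mongodb';

const uri = process.env.MONGODB_URL ?? 'mongodb://localhost:27017';

async function ping(): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // 'chat-ui' matches the default MONGODB_DB_NAME; ping is a cheap liveness check.
    await client.db('chat-ui').command({ ping: 1 });
    console.log('MongoDB is reachable at', uri);
  } finally {
    await client.close();
  }
}

ping().catch((err) => { console.error(err); process.exit(1); });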
Launch
After configuring your environment variables, start Chat UI with:
npm install
npm run dev
The dev server listens on `http://localhost:5173` by default. Use `npm run build` / `npm run preview` for production builds.
Optional Docker Image
Prefer containerized setup? You can run everything in one container as long as you supply a MongoDB URI (local or hosted):
docker run \
-p 3000 \
-e MONGODB_URL=mongodb://host.docker.internal:27017 \
-e OPENAI_BASE_URL=https://router.huggingface.co/v1 \
-e OPENAI_API_KEY=hf_*** \
-v db:/data \
ghcr.io/huggingface/chat-ui-db:latest
`host.docker.internal` lets the container reach a MongoDB instance on your host machine; swap it for your Atlas URI if you use the hosted option. All environment variables accepted in `.env.local` can be provided as `-e` flags.
Extra parameters
Theming
You can use a few environment variables to customize the look and feel of chat-ui. Their default values are:
PUBLIC_APP_NAME=ChatUI
PUBLIC_APP_ASSETS=chatui
PUBLIC_APP_COLOR=blue
PUBLIC_APP_DESCRIPTION="Making the community's best AI chat models available to everyone."
PUBLIC_APP_DATA_SHARING=
- `PUBLIC_APP_NAME` The name used as a title throughout the app.
- `PUBLIC_APP_ASSETS` Used to find logos & favicons in `static/$PUBLIC_APP_ASSETS`; current options are `chatui` and `huggingchat`.
- `PUBLIC_APP_COLOR` Can be any of the Tailwind colors.
- `PUBLIC_APP_DATA_SHARING` Can be set to 1 to add a toggle in the user settings that lets your users opt in to data sharing with model creators.
Models
This build does not use the `MODELS` env var or GGUF discovery. Configure models via `OPENAI_BASE_URL` only; Chat UI will fetch `${OPENAI_BASE_URL}/models` and populate the model list automatically. Authorization uses `OPENAI_API_KEY` (preferred); `HF_TOKEN` remains a legacy alias.
LLM Router (Optional)
Chat UI can perform client-side routing using an Arch Router model without running a separate router service. The UI exposes a virtual model alias called "Omni" (configurable) that, when selected, chooses the best route/model for each message.
- Provide a routes policy JSON via `LLM_ROUTER_ROUTES_PATH`. No sample file ships with this branch, so you must point the variable to a JSON array you create yourself (for example, commit one in your project like `config/routes.chat.json`). Each route entry needs `name`, `description`, `primary_model`, and optional `fallback_models`; see the sketch after this list.
- Configure the Arch router selection endpoint with `LLM_ROUTER_ARCH_BASE_URL` (OpenAI-compatible `/chat/completions`) and `LLM_ROUTER_ARCH_MODEL` (e.g. `router/omni`). The Arch call reuses `OPENAI_API_KEY` for auth.
- Map `other` to a concrete route via `LLM_ROUTER_OTHER_ROUTE` (default: `casual_conversation`). If Arch selection fails, calls fall back to `LLM_ROUTER_FALLBACK_MODEL`.
- The selection timeout can be tuned via `LLM_ROUTER_ARCH_TIMEOUT_MS` (default 10000).
- Omni alias configuration: `PUBLIC_LLM_ROUTER_ALIAS_ID` (default `omni`), `PUBLIC_LLM_ROUTER_DISPLAY_NAME` (default `Omni`), and optional `PUBLIC_LLM_ROUTER_LOGO_URL`.
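Since no sample routes file ships with this branch, here is a minimal sketch of one. The field names come from the list above; the route names (other than `casual_conversation`, the default `other` route) and the model ids are hypothetical examples:

// Shape of a routes policy entry (field names per the docs above).
type Route = {
  name: string;
  description: string;
  primary_model: string;
  fallback_models?: string[];
};

// Hypothetical example policy; serialize this array as JSON and save it at
// the path you set in LLM_ROUTER_ROUTES_PATH (e.g. config/routes.chat.json).
const routes: Route[] = [
  {
    name: 'casual_conversation', // default target for the "other" route
    description: 'Small talk and general-purpose chat',
    primary_model: 'meta-llama/Llama-3.1-8B-Instruct',
    fallback_models: ['mistralai/Mistral-7B-Instruct-v0.3'],
  },
  {
    name: 'code_generation',
    description: 'Writing, reviewing, and debugging code',
    primary_model: 'Qwen/Qwen2.5-Coder-32B-Instruct',
  },
];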
When you select Omni in the UI, Chat UI will:
- Call the Arch endpoint once (non-streaming) to pick the best route for the last turns (sketched after this list).
- Emit RouterMetadata immediately (route and actual model used) so the UI can display it.
- Stream from the selected model via your configured `OPENAI_BASE_URL`. On errors, it tries route fallbacks.
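For intuition, the route-selection step is an ordinary non-streaming OpenAI-style chat completion. The sketch below is not Chat UI's actual implementation: the prompt shape Arch expects and the response parsing are assumptions, while the env vars are the ones documented above:

// pick-route.ts - illustrative, non-streaming route selection call.
type Route = { name: string; description: string };

async function pickRoute(routes: Route[], lastUserMessage: string): Promise<string> {
  const res = await fetch(`${process.env.LLM_ROUTER_ARCH_BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // Arch reuses this key
    },
    body: JSON.stringify({
      model: process.env.LLM_ROUTER_ARCH_MODEL, // e.g. router/omni
      stream: false,
      // Assumed prompt shape: hand the router the policy plus the latest turn.
      messages: [
        { role: 'system', content: `Routes: ${JSON.stringify(routes)}` },
        { role: 'user', content: lastUserMessage },
      ],
    }),
    signal: AbortSignal.timeout(Number(process.env.LLM_ROUTER_ARCH_TIMEOUT_MS ?? 10000)),
  });
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content.trim(); // assumed to name a route
}

Chat UI then streams the reply from the selected model via `OPENAI_BASE_URL`; on errors, it tries the route's fallbacks.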
Building
To create a production version of your app:
npm run build
You can preview the production build with `npm run preview`.
To deploy your app, you may need to install an adapter for your target environment.