NextChat
✨ Local and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
Top Related Projects
Build AI-powered applications with React, Svelte, Vue, and Solid
AI chat for any model.
✨ Local and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
A ChatGPT demo web page built with Express and Vue 3
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
Minimal web UI for ChatGPT.
Quick Overview
ChatGPTNextWeb/NextChat is an open-source project that provides a web-based interface for interacting with ChatGPT. It offers a user-friendly, customizable chat experience with features like conversation management, prompt templates, and multi-language support. The project is built using Next.js and can be easily deployed on various platforms.
Pros
- Easy to deploy and customize
- Supports multiple languages and themes
- Offers conversation management and prompt templates
- Regularly updated and actively maintained
Cons
- Requires API key from OpenAI, which may have associated costs
- May have limitations based on OpenAI's usage policies
- Potential privacy concerns when hosting conversations on third-party platforms
- Learning curve for users unfamiliar with Next.js or React
Getting Started
To get started with ChatGPTNextWeb/NextChat:
- Clone the repository:
git clone https://github.com/ChatGPTNextWeb/NextChat.git
- Install dependencies:
cd NextChat
npm install
- Set up environment variables: create a .env.local file in the root directory and add your OpenAI API key:
OPENAI_API_KEY=your_api_key_here
- Run the development server:
npm run dev
- Open http://localhost:3000 in your browser to use the application.
For production deployment, follow the documentation for your preferred hosting platform (e.g., Vercel, Netlify, or self-hosted options).
Competitor Comparisons
Build AI-powered applications with React, Svelte, Vue, and Solid
Pros of ai
- More comprehensive AI development toolkit with support for multiple models and providers
- Better integration with Vercel's ecosystem and deployment platform
- Actively maintained with frequent updates and contributions
Cons of ai
- Steeper learning curve due to its broader scope and features
- Less focused on creating a standalone chat application out of the box
- May require more configuration and setup for specific use cases
Code Comparison
NextChat:
<div className="flex flex-col h-full">
<ChatHeader />
<ChatBody messages={messages} />
<ChatInput onSendMessage={handleSendMessage} />
</div>
ai:
import { useChat } from 'ai/react'
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat()
return (
// Chat UI implementation
)
}
Summary
NextChat is a more specialized solution for building ChatGPT-like interfaces, while ai offers a broader set of tools for AI integration in web applications. NextChat may be easier to set up for a quick chat application, but ai provides more flexibility and scalability for complex AI-powered projects. The choice between them depends on the specific requirements of your project and your familiarity with Vercel's ecosystem.
AI chat for any model.
Pros of chatbot-ui
- More customizable UI with a wider range of components and styling options
- Better support for multiple chat models and providers
- More active development and frequent updates
Cons of chatbot-ui
- Steeper learning curve due to more complex architecture
- Requires more setup and configuration for advanced features
- Potentially higher resource usage due to additional features
Code Comparison
chatbot-ui:
const ChatMessage = ({ message, ...props }: Props) => {
const { role, content } = message;
const parsedContent = parseContent(content);
return (
<div className={`flex ${role === 'assistant' ? 'justify-start' : 'justify-end'}`}>
{/* Message content rendering */}
</div>
);
};
NextChat:
const ChatMessage = ({ message }) => {
return (
<div className={`chat-message ${message.role}`}>
<div className="message-content">{message.content}</div>
</div>
);
};
The code comparison shows that chatbot-ui offers more flexibility in message rendering and content parsing, while NextChat provides a simpler, more straightforward implementation. This reflects the overall difference in complexity and customization options between the two projects.
✨ Local and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
Pros of NextChat
- More active development with frequent updates
- Larger community and contributor base
- Better documentation and user guides
Cons of NextChat
- Higher resource requirements
- Steeper learning curve for beginners
- More complex setup process
Code Comparison
NextChat:
import { NextChatProvider } from 'nextchat';
function App() {
return (
<NextChatProvider>
<ChatInterface />
</NextChatProvider>
);
}
NextChat:
import { ChatProvider } from 'nextchat';
function App() {
return (
<ChatProvider>
<ChatComponent />
</ChatProvider>
);
}
The code structures are similar, with NextChat using a more specific naming convention for its provider component. Both implementations wrap the main chat component with a provider, suggesting a similar approach to state management and context provision.
Note: since both sides of this comparison are the same repository (ChatGPTNextWeb/NextChat), the differences shown above are hypothetical and based on common differences between similar projects. In reality, comparing a repository with itself would yield no differences.
A ChatGPT demo web page built with Express and Vue 3
Pros of chatgpt-web
- More lightweight and focused on core chat functionality
- Simpler setup process, especially for users new to web development
- Supports multiple API endpoints, including Azure OpenAI
Cons of chatgpt-web
- Less feature-rich compared to NextChat
- Limited customization options for user interface
- Lacks some advanced features like plugins or extensions
Code Comparison
chatgpt-web:
const response = await fetch("/api/chat-process", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(params),
});
NextChat:
const response = await fetch("/api/chat", {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-Requested-With": "XMLHttpRequest",
},
body: JSON.stringify(requestOptions),
});
Both projects use similar fetch API calls for chat processing, but NextChat includes an additional header for AJAX requests. chatgpt-web's implementation is slightly simpler, aligning with its more lightweight approach.
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
Pros of ChuanhuChatGPT
- More extensive language support, including Chinese
- Offers a wider range of AI models, including Claude and PaLM
- Includes advanced features like PDF parsing and LaTeX rendering
Cons of ChuanhuChatGPT
- Less polished user interface compared to NextChat
- May be more complex to set up and configure
- Lacks some of the sleek design elements found in NextChat
Code Comparison
NextChat:
const ChatContent: FC<{ message: Message }> = ({ message }) => {
const [displayMode, setDisplayMode] = useState<DisplayMode>("markdown");
return (
<div className={styles["chat-message-item"]}>
{renderMessageContent(message.content, displayMode)}
</div>
);
};
ChuanhuChatGPT:
def predict(self, inputs, max_length=128, top_p=0.7, temperature=0.95):
input_ids = self.tokenizer(inputs, return_tensors="pt").input_ids
with torch.no_grad():
outputs = self.model.generate(input_ids=input_ids, max_length=max_length, top_p=top_p, temperature=temperature)
return self.tokenizer.decode(outputs[0], skip_special_tokens=True)
The code snippets show different approaches: NextChat uses TypeScript for frontend rendering, while ChuanhuChatGPT employs Python for backend processing and model interaction.
Minimal web UI for ChatGPT.
Pros of chatgpt-demo
- Lightweight and simple implementation, making it easier to understand and modify
- Supports multiple languages out of the box
- Includes a dark mode feature
Cons of chatgpt-demo
- Less feature-rich compared to NextChat
- Limited customization options for chat interface
- Lacks advanced conversation management features
Code Comparison
NextChat:
const ChatMessage: FC<{
message: Message;
showAvatar?: boolean;
}> = ({ message, showAvatar }) => {
return (
<div className={`flex flex-col ${message.role === "user" ? "items-end" : "items-start"}`}>
{/* Message content */}
</div>
);
};
chatgpt-demo:
<template>
<div class="flex flex-col w-full mb-6">
<div class="flex items-center">
<span class="text-sm text-neutral-400">{{ dateTime }}</span>
</div>
<div class="flex items-end">
<Avatar :role="role" />
<div class="flex-1 px-2 ml-2 overflow-hidden">
<Markdown :message="message" :stream="stream" />
</div>
</div>
</div>
</template>
The code snippets show different approaches to rendering chat messages. NextChat uses a React functional component with TypeScript, while chatgpt-demo uses a Vue.js template structure. NextChat's implementation appears more flexible, allowing for conditional avatar display and role-based styling. chatgpt-demo's code is more declarative and includes additional features like date/time display and Markdown rendering.
NextChat (ChatGPT Next Web)
English / 简体中文
One-Click to get a well-designed cross-platform ChatGPT web UI, with Claude, GPT4 & Gemini Pro support.
NextChatAI / Web App Demo / Desktop App / Discord / Enterprise Edition / Twitter
🫣 NextChat supports MCP!
Before building, set the environment variable ENABLE_MCP=true
Enterprise Edition
Meeting Your Company's Privatization and Customization Deployment Requirements:
- Brand Customization: Tailored VI/UI to seamlessly align with your corporate brand image.
- Resource Integration: Unified configuration and management of dozens of AI resources by company administrators, ready for use by team members.
- Permission Control: Clearly defined member permissions, resource permissions, and knowledge base permissions, all controlled via a corporate-grade Admin Panel.
- Knowledge Integration: Combining your internal knowledge base with AI capabilities, making it more relevant to your company's specific business needs compared to general AI.
- Security Auditing: Automatically intercept sensitive inquiries and trace all historical conversation records, ensuring AI adherence to corporate information security standards.
- Private Deployment: Enterprise-level private deployment supporting various mainstream private cloud solutions, ensuring data security and privacy protection.
- Continuous Updates: Ongoing updates and upgrades in cutting-edge capabilities like multimodal AI, ensuring consistent innovation and advancement.
For enterprise inquiries, please contact: business@nextchat.dev
Screenshots
Features
- Deploy for free with one click on Vercel in under 1 minute
- Compact client (~5MB) on Linux/Windows/MacOS, download it now
- Fully compatible with self-deployed LLMs, recommended for use with RWKV-Runner or LocalAI
- Privacy first, all data is stored locally in the browser
- Markdown support: LaTeX, mermaid, code highlighting, etc.
- Responsive design, dark mode and PWA
- Fast first-screen loading (~100 kB), with streaming response support
- New in v2: create, share and debug your chat tools with prompt templates (mask)
- Awesome prompts powered by awesome-chatgpt-prompts-zh and awesome-chatgpt-prompts
- Automatically compresses chat history to support long conversations while also saving your tokens
- I18n: English, 简体中文, 繁體中文, 日本語, Français, Español, Italiano, Türkçe, Deutsch, Tiếng Việt, Русский, Čeština, 한국어, Indonesia
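The history-compression feature in the list above could work roughly like the following sketch. This is an assumption about the technique, not NextChat's actual implementation; summarize is a hypothetical stand-in for an LLM summarization call:

```typescript
// Illustrative sketch of chat-history compression: once the transcript
// exceeds a rough token budget, older messages are folded into a single
// summary message so the context sent to the model stays small.
// `summarize` is a hypothetical stand-in for an LLM summarization call.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Crude token estimate: ~4 characters per token.
const roughTokens = (text: string): number => Math.ceil(text.length / 4);

function compressHistory(
  messages: Message[],
  summarize: (msgs: Message[]) => string,
  budget = 2000,
  keepRecent = 4,
): Message[] {
  const total = messages.reduce((n, m) => n + roughTokens(m.content), 0);
  if (total <= budget || messages.length <= keepRecent) return messages;
  const older = messages.slice(0, messages.length - keepRecent);
  const recent = messages.slice(messages.length - keepRecent);
  const summary: Message = {
    role: "system",
    content: `Summary of earlier conversation: ${summarize(older)}`,
  };
  return [summary, ...recent];
}
```

The design choice is the usual trade-off: the model loses verbatim access to old turns but keeps a condensed memory of them, which both extends effective conversation length and saves tokens.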
Roadmap
- System Prompt: pin a user defined prompt as system prompt #138
- User Prompt: user can edit and save custom prompts to prompt list
- Prompt Template: create a new chat with pre-defined in-context prompts #993
- Share as image, share to ShareGPT #1741
- Desktop App with tauri
- Self-host Model: Fully compatible with RWKV-Runner, as well as server deployment of LocalAI: llama/gpt4all/rwkv/vicuna/koala/gpt4all-j/cerebras/falcon/dolly etc.
- Artifacts: Easily preview, copy and share generated content/webpages through a separate window #5092
- Plugins: support network search, calculator, any other apis etc. #165 #5353
- Supports Realtime Chat #5672
- local knowledge base
What's New
- 🚀 v2.15.8 Now supports Realtime Chat #5672
- 🚀 v2.15.4 The app can now use Tauri to fetch LLM APIs, for better security #5379
- 🚀 v2.15.0 Now supports Plugins! Read this: NextChat-Awesome-Plugins
- 🚀 v2.14.0 Now supports Artifacts & SD
- 🚀 v2.10.1 Supports the Google Gemini Pro model.
- 🚀 v2.9.11 Azure endpoints are now supported.
- 🚀 v2.8 Now there is a client that runs across all platforms!
- 🚀 v2.7 Share conversations as images, or share to ShareGPT!
- 🚀 v2.0 Released: you can now create prompt templates and turn your ideas into reality! Read this: ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting.
Get Started
- Get your OpenAI API key;
- Click the deploy button; remember that CODE is your page password;
- Enjoy :)
FAQ
Keep Updated
If you have deployed your own project with just one click following the steps above, you may encounter the issue of "Updates Available" constantly showing up. This is because Vercel will create a new project for you by default instead of forking this project, resulting in the inability to detect updates correctly.
We recommend that you follow the steps below to re-deploy:
- Delete the original repository;
- Use the fork button in the upper right corner of the page to fork this project;
- Choose and deploy in Vercel again, please see the detailed tutorial.
Enable Automatic Updates
After forking the project, GitHub's limitations require you to manually enable Workflows and the Upstream Sync Action on the Actions page of the forked repository. Once enabled, automatic updates run every hour. If an Upstream Sync run fails, update the code manually.
Manually Updating Code
If you want to update instantly, you can check out the GitHub documentation to learn how to synchronize a forked project with upstream code.
You can star or watch this project, or follow the author, to get release notifications in time.
Access Password
This project provides limited access control. Add an environment variable named CODE on the Vercel environment variables page; its value should be one or more passwords separated by commas, like this:
code1,code2,code3
After adding or modifying this environment variable, please redeploy the project for the changes to take effect.
Environment Variables
CODE
(optional)
Access passwords, separated by commas.
OPENAI_API_KEY
(required)
Your OpenAI API key; join multiple API keys with commas.
BASE_URL
(optional)
Default:
https://api.openai.com
Examples:
http://your-openai-proxy.com
Overrides the OpenAI API request base URL.
OPENAI_ORG_ID
(optional)
Specify the OpenAI organization ID.
AZURE_URL
(optional)
Example: https://{azure-resource-url}/openai
Azure deployment URL.
AZURE_API_KEY
(optional)
Azure API key.
AZURE_API_VERSION
(optional)
Azure API version; find it in the Azure documentation.
GOOGLE_API_KEY
(optional)
Google Gemini Pro API key.
GOOGLE_URL
(optional)
Google Gemini Pro API URL.
ANTHROPIC_API_KEY
(optional)
Anthropic Claude API key.
ANTHROPIC_API_VERSION
(optional)
Anthropic Claude API version.
ANTHROPIC_URL
(optional)
Anthropic Claude API URL.
BAIDU_API_KEY
(optional)
Baidu API key.
BAIDU_SECRET_KEY
(optional)
Baidu secret key.
BAIDU_URL
(optional)
Baidu API URL.
BYTEDANCE_API_KEY
(optional)
ByteDance API key.
BYTEDANCE_URL
(optional)
ByteDance API URL.
ALIBABA_API_KEY
(optional)
Alibaba Cloud API key.
ALIBABA_URL
(optional)
Alibaba Cloud API URL.
IFLYTEK_URL
(optional)
iFlytek API URL.
IFLYTEK_API_KEY
(optional)
iFlytek API key.
IFLYTEK_API_SECRET
(optional)
iFlytek API secret.
CHATGLM_API_KEY
(optional)
ChatGLM API key.
CHATGLM_URL
(optional)
ChatGLM API URL.
DEEPSEEK_API_KEY
(optional)
DeepSeek API key.
DEEPSEEK_URL
(optional)
DeepSeek API URL.
HIDE_USER_API_KEY
(optional)
Default: Empty
If you do not want users to input their own API key, set this value to 1.
DISABLE_GPT4
(optional)
Default: Empty
If you do not want users to use GPT-4, set this value to 1.
ENABLE_BALANCE_QUERY
(optional)
Default: Empty
If you want users to be able to query their balance, set this value to 1.
DISABLE_FAST_LINK
(optional)
Default: Empty
If you want to disable parsing settings from the URL, set this to 1.
CUSTOM_MODELS
(optional)
Default: Empty
Example: +llama,+claude-2,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo means: add llama and claude-2 to the model list, remove gpt-3.5-turbo from the list, and display gpt-4-1106-preview as gpt-4-turbo.
To control custom models: use + to add a custom model, - to hide a model, and name=displayName to customize a model's display name, separated by commas. Use -all to disable all default models, and +all to enable all default models.
For Azure: use modelName@Azure=deploymentName to customize the model name and deployment name. Example: +gpt-3.5-turbo@Azure=gpt35 will show the option gpt35(Azure) in the model list. If you can only use Azure models, -all,+gpt-3.5-turbo@Azure=gpt35 makes gpt35(Azure) the only option in the model list.
For ByteDance: use modelName@bytedance=deploymentName to customize the model name and deployment name. Example: +Doubao-lite-4k@bytedance=ep-xxxxx-xxx will show the option Doubao-lite-4k(ByteDance) in the model list.
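A simplified parser for the basic CUSTOM_MODELS syntax described above might look like the following sketch. This is illustrative only, not the project's actual code, and it does not handle the @Azure=/@bytedance= deployment forms:

```typescript
// Simplified parser for the CUSTOM_MODELS syntax: "+name" adds a model,
// "-name" hides one, "name=display" renames one, and "-all"/"+all"
// toggle every default model. Plain names are treated as additions here.
// Illustrative sketch only; the @provider= forms are not handled.
type ModelRule =
  | { op: "add" | "remove"; name: string }
  | { op: "rename"; name: string; display: string };

function parseCustomModels(spec: string): ModelRule[] {
  return spec
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean)
    .map((item): ModelRule => {
      if (item.startsWith("+")) return { op: "add", name: item.slice(1) };
      if (item.startsWith("-")) return { op: "remove", name: item.slice(1) };
      const [name, display] = item.split("=");
      return display !== undefined
        ? { op: "rename", name, display }
        : { op: "add", name };
    });
}
```

For example, parseCustomModels("+llama,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo") yields an add rule, a remove rule, and a rename rule, in that order.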
DEFAULT_MODEL
(optional)
Changes the default model.
VISION_MODELS
(optional)
Default: Empty
Example: gpt-4-vision,claude-3-opus,my-custom-model
Adds vision capabilities to the listed models, in addition to the default pattern matches (which detect models containing keywords like "vision", "claude-3", "gemini-1.5", etc.). Separate multiple models with commas.
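The detection logic described above could be sketched as follows; the keyword list is an assumption based on the description, not the project's exact source:

```typescript
// Sketch of vision-model detection: default keyword patterns plus an
// extra comma-separated list like the VISION_MODELS variable.
// The keyword list is an assumption, not NextChat's actual code.
const VISION_KEYWORDS = ["vision", "claude-3", "gemini-1.5"];

function isVisionModel(model: string, extra = ""): boolean {
  const extras = extra.split(",").map((s) => s.trim()).filter(Boolean);
  return (
    extras.includes(model) ||
    VISION_KEYWORDS.some((kw) => model.toLowerCase().includes(kw))
  );
}
```

With VISION_MODELS set to gpt-4-vision,claude-3-opus,my-custom-model, a model named my-custom-model would be treated as vision-capable even though no default keyword matches it.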
WHITE_WEBDAV_ENDPOINTS
(optional)
Use this option to expand the list of WebDAV service addresses you are allowed to access. The format requires:
- Each address must be a complete endpoint: https://xxxx/yyy
- Multiple addresses are joined by ','
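A whitelist check matching that format could be sketched like this; it is an illustrative sketch, not the project's actual validation code:

```typescript
// Sketch of a whitelist check for WebDAV endpoints: the whitelist is a
// comma-separated list of complete endpoints, and a request URL is
// allowed only if it starts with one of them. Illustrative only.
function isAllowedWebDavEndpoint(url: string, whitelist: string): boolean {
  const allowed = whitelist.split(",").map((s) => s.trim()).filter(Boolean);
  return allowed.some((endpoint) => url.startsWith(endpoint));
}
```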
DEFAULT_INPUT_TEMPLATE
(optional)
Customize the default template used to initialize the User Input Preprocessing configuration item in Settings.
STABILITY_API_KEY
(optional)
Stability API key.
STABILITY_URL
(optional)
Customize the Stability API URL.
ENABLE_MCP
(optional)
Enables the MCP (Model Context Protocol) feature.
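Putting several of the variables above together, a minimal .env.local might look like this (all values are placeholders):

```shell
# Sample .env.local — all values are placeholders
OPENAI_API_KEY=sk-xxxx
CODE=your-password
BASE_URL=https://api.openai.com
CUSTOM_MODELS=-all,+gpt-4-1106-preview=gpt-4-turbo
HIDE_USER_API_KEY=1
```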
Requirements
Node.js >= 18, Docker >= 20
Development
Before starting development, create a new .env.local file at the project root and place your API key in it:
OPENAI_API_KEY=<your api key here>
# if you are not able to access openai service, use this BASE_URL
BASE_URL=https://chatgpt1.nextweb.fun/api/proxy
Local Development
# 1. install nodejs and yarn first
# 2. config local env vars in `.env.local`
# 3. run
yarn install
yarn dev
Deployment
Docker (Recommended)
docker pull yidadaa/chatgpt-next-web
docker run -d -p 3000:3000 \
-e OPENAI_API_KEY=sk-xxxx \
-e CODE=your-password \
yidadaa/chatgpt-next-web
You can start service behind a proxy:
docker run -d -p 3000:3000 \
-e OPENAI_API_KEY=sk-xxxx \
-e CODE=your-password \
-e PROXY_URL=http://localhost:7890 \
yidadaa/chatgpt-next-web
If your proxy requires a password, use:
-e PROXY_URL="http://127.0.0.1:7890 user pass"
If MCP is enabled, use:
docker run -d -p 3000:3000 \
-e OPENAI_API_KEY=sk-xxxx \
-e CODE=your-password \
-e ENABLE_MCP=true \
yidadaa/chatgpt-next-web
Shell
bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh)
Synchronizing Chat Records (Upstash)
简体中文 | English | Italiano | 日本語 | 한국어
Documentation
Please go to the docs directory for more documentation instructions.
- Deploy with cloudflare (Deprecated)
- Frequently Asked Questions
- How to add a new translation
- How to use Vercel (No English)
- User Manual (Only Chinese, WIP)
Translation
If you want to add a new translation, read this document.
Donation
Special Thanks
Contributors
LICENSE