ChatGPT-Next-Web
A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). Get your own cross-platform ChatGPT/Gemini app with one click.
Top Related Projects
Supercharged experience for multiple models such as ChatGPT, DALL-E and Stable Diffusion.
A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). Get your own cross-platform ChatGPT/Gemini app with one click.
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
A ChatGPT demo web page built with Express and Vue 3
User-friendly Desktop Client App for AI Models/LLMs (GPT, Claude, Gemini, Ollama...)
🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
Quick Overview
ChatGPT-Next-Web is an open-source project that provides a web-based interface for ChatGPT, allowing users to deploy their own ChatGPT web UI quickly and easily. It offers a sleek, responsive design and supports various features like conversation management, prompt templates, and multi-language support.
Pros
- Easy deployment: Can be quickly set up on platforms like Vercel or self-hosted
- Customizable: Supports custom themes, prompts, and API endpoints
- Privacy-focused: Allows users to use their own API keys and doesn't store conversation data on servers
- Multi-platform: Works on desktop and mobile devices with a responsive design
Cons
- Requires OpenAI API key: Users need to provide their own API key, which may involve costs
- Chat-model focused: built around ChatGPT-style and Gemini APIs; other AI services are not supported out of the box
- Potential for misuse: As with any AI interface, there's a risk of generating harmful or biased content
- Dependency on OpenAI's API: Changes to OpenAI's policies or API could affect the project's functionality
Getting Started
To deploy ChatGPT-Next-Web:
- Fork the repository on GitHub
- Sign up for a Vercel account if you don't have one
- Create a new project on Vercel and import your forked repository
- Set the following environment variables in your Vercel project settings:
- OPENAI_API_KEY: Your OpenAI API key
- CODE: A password to access the chat (optional)
- Deploy the project
Alternatively, you can run it locally:
git clone https://github.com/Yidadaa/ChatGPT-Next-Web.git
cd ChatGPT-Next-Web
npm install
npm run dev
Make sure to set the required environment variables in a .env.local
file before running the project locally.
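Once these variables are set, the server reads them from process.env at runtime. Below is a minimal TypeScript sketch of that pattern; the variable names come from this README, while the helper function itself is hypothetical and only illustrates where the values live:
// Hypothetical helper: read the deployment variables server-side (they never reach the browser bundle).
export function readServerConfig() {
  const apiKey = process.env.OPENAI_API_KEY ?? "";
  // CODE may hold one or more comma-separated access passwords (optional).
  const accessCodes = (process.env.CODE ?? "")
    .split(",")
    .map((code) => code.trim())
    .filter(Boolean);
  return { apiKey, accessCodes };
}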
Competitor Comparisons
Supercharged experience for multiple models such as ChatGPT, DALL-E and Stable Diffusion.
Pros of Anse
- Supports multiple AI models beyond just ChatGPT, including Claude, DALL-E, and Stable Diffusion
- Offers a more customizable user interface with themes and layout options
- Includes built-in image generation capabilities
Cons of Anse
- Less active development and community support compared to ChatGPT-Next-Web
- May have a steeper learning curve due to additional features and configuration options
- Lacks some of the advanced conversation management features found in ChatGPT-Next-Web
Code Comparison
ChatGPT-Next-Web (Next.js):
import { useState } from 'react'
import { useRouter } from 'next/router'
import { fetchChatCompletion } from '../lib/api'
export default function Chat() {
// Component logic
}
Anse (Svelte):
<script>
import { onMount } from 'svelte'
import { chatStore } from '../stores/chat'
import { fetchAIResponse } from '../lib/api'
// Component logic
</script>
Both projects use modern JavaScript frameworks, with ChatGPT-Next-Web utilizing Next.js (React-based) and Anse opting for Svelte. The code structures reflect their respective framework choices, but both implement similar patterns for state management and API interactions.
A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). Get your own cross-platform ChatGPT/Gemini app with one click.
Pros of ChatGPT-Next-Web
- More active development with frequent updates and bug fixes
- Larger community support and contributions
- Better documentation and user guides
Cons of ChatGPT-Next-Web
- Potentially less stable due to rapid changes
- May have more complex setup process for beginners
- Could have higher resource requirements
Code Comparison
ChatGPT-Next-Web:
import { useState, useEffect } from 'react';
import { fetchData } from './api';
function App() {
const [data, setData] = useState(null);
useEffect(() => {
fetchData().then(setData);
}, []);
// ... rest of the component
}
ChatGPT-Next-Web:
import React from 'react';
import { getData } from './utils';
class App extends React.Component {
state = { data: null };
componentDidMount() {
getData().then(data => this.setState({ data }));
}
// ... rest of the component
}
Note: The code comparison is hypothetical as both repositories refer to the same project. In a real comparison, differences in coding style, structure, or implementation might be highlighted.
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
Pros of ChuanhuChatGPT
- More extensive language support, including Chinese
- Offers a wider range of AI models, including Claude and PaLM
- Includes advanced features like PDF parsing and LaTeX rendering
Cons of ChuanhuChatGPT
- Less polished user interface compared to ChatGPT-Next-Web
- Requires more setup and configuration
- May be more challenging for non-technical users to deploy
Code Comparison
ChatGPT-Next-Web (React-based frontend):
const Chat: React.FC<Props> = ({ messages, onSend, onEdit, onRemove }) => {
return (
<div className="chat-container">
<MessageList messages={messages} onEdit={onEdit} onRemove={onRemove} />
<InputArea onSend={onSend} />
</div>
);
};
ChuanhuChatGPT (Gradio-based interface):
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("Clear")
    # `user` is the message-handling callback (defined elsewhere in the app)
    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False)
    clear.click(lambda: None, None, chatbot, queue=False)
The code snippets highlight the different approaches: ChatGPT-Next-Web uses React for a more customizable frontend, while ChuanhuChatGPT leverages Gradio for rapid prototyping and easier integration with machine learning models.
A ChatGPT demo web page built with Express and Vue 3
Pros of chatgpt-web
- Simpler setup process, especially for users less familiar with Next.js
- Lighter weight and potentially faster load times
- More straightforward customization options for basic users
Cons of chatgpt-web
- Less feature-rich compared to ChatGPT-Next-Web
- Limited internationalization support
- Fewer advanced configuration options for power users
Code Comparison
ChatGPT-Next-Web:
export function trimTopic(topic: string) {
return topic.replace(/[,。!?"""、,.!?]*$/, "").trim();
}
chatgpt-web:
export function formatDate(date) {
const year = date.getFullYear()
const month = date.getMonth() + 1
const day = date.getDate()
return `${year}-${month}-${day}`
}
The code snippets demonstrate different approaches:
- ChatGPT-Next-Web uses TypeScript, offering stronger typing
- chatgpt-web uses plain JavaScript, which may be more accessible for some developers
- The functions serve different purposes, with ChatGPT-Next-Web focusing on string manipulation and chatgpt-web on date formatting
Both projects aim to provide web interfaces for ChatGPT, but ChatGPT-Next-Web offers a more comprehensive feature set at the cost of increased complexity. chatgpt-web, on the other hand, provides a simpler solution that may be preferable for users seeking a more straightforward implementation.
User-friendly Desktop Client App for AI Models/LLMs (GPT, Claude, Gemini, Ollama...)
Pros of chatbox
- Cross-platform support (Windows, macOS, Linux)
- Offers a desktop application with a user-friendly interface
- Supports multiple AI models beyond just ChatGPT
Cons of chatbox
- Less active development and community engagement
- Fewer advanced features for customization and deployment
- Limited integration options with other services
Code Comparison
ChatGPT-Next-Web (React):
const Chat: FC<ChatProps> = ({ messages, onUserInput }) => {
return (
<div className="chat-container">
{messages.map((message) => (
<Message key={message.id} content={message.content} />
))}
<Input onSubmit={onUserInput} />
</div>
);
};
chatbox (Electron/React):
const ChatWindow = ({ messages, sendMessage }) => {
return (
<div className="chat-window">
{messages.map((msg) => (
<MessageBubble key={msg.id} text={msg.text} />
))}
<InputBox onSend={sendMessage} />
</div>
);
};
Both projects use React for their UI components, but ChatGPT-Next-Web is a web-based application, while chatbox is built using Electron for desktop deployment. The code structures are similar, with ChatGPT-Next-Web potentially offering more flexibility for web-based deployments and customizations.
🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
Pros of ChatGPT
- Desktop application with cross-platform support (Windows, macOS, Linux)
- Offers system tray integration for quick access
- Includes additional features like text-to-speech and export options
Cons of ChatGPT
- Less frequently updated compared to ChatGPT-Next-Web
- May have a steeper learning curve for users new to desktop applications
- Limited customization options for the user interface
Code Comparison
ChatGPT (Tauri-based desktop app):
#[tauri::command]
fn greet(name: &str) -> String {
format!("Hello, {}! You've been greeted from Rust!", name)
}
ChatGPT-Next-Web (Next.js-based web app):
export async function getServerSideProps(context) {
const session = await getSession(context);
return {
props: { session }
};
}
The code snippets highlight the different technologies used: ChatGPT uses Rust with Tauri for desktop development, while ChatGPT-Next-Web employs JavaScript with Next.js for web development. This difference in approach affects the deployment, user experience, and development process for each project.
README
NextChat (ChatGPT Next Web)
English / 简体中文
One click to get a well-designed cross-platform ChatGPT web UI, with GPT-3, GPT-4 & Gemini Pro support.
NextChatAI / Web App / Desktop App / Discord / Enterprise Edition / Twitter
Enterprise Edition
Meets your company's requirements for private deployment and customization:
- Brand Customization: Tailored VI/UI to seamlessly align with your corporate brand image.
- Resource Integration: Unified configuration and management of dozens of AI resources by company administrators, ready for use by team members.
- Permission Control: Clearly defined member permissions, resource permissions, and knowledge base permissions, all controlled via a corporate-grade Admin Panel.
- Knowledge Integration: Combining your internal knowledge base with AI capabilities, making it more relevant to your company's specific business needs compared to general AI.
- Security Auditing: Automatically intercept sensitive inquiries and trace all historical conversation records, ensuring AI adherence to corporate information security standards.
- Private Deployment: Enterprise-level private deployment supporting various mainstream private cloud solutions, ensuring data security and privacy protection.
- Continuous Updates: Ongoing updates and upgrades in cutting-edge capabilities like multimodal AI, ensuring consistent innovation and advancement.
For enterprise inquiries, please contact: business@nextchat.dev
Features
- Deploy for free with one click on Vercel in under 1 minute
- Compact client (~5MB) on Linux/Windows/MacOS, download it now
- Fully compatible with self-deployed LLMs, recommended for use with RWKV-Runner or LocalAI
- Privacy first, all data is stored locally in the browser
- Markdown support: LaTeX, Mermaid, code highlighting, etc.
- Responsive design, dark mode and PWA
- Fast first-screen loading (~100kb), with streaming responses
- New in v2: create, share and debug your chat tools with prompt templates (mask)
- Awesome prompts powered by awesome-chatgpt-prompts-zh and awesome-chatgpt-prompts
- Automatically compresses chat history to support long conversations while saving your tokens (see the sketch after this list)
- Bring your own domain: bind it and access the app quickly from anywhere
- I18n: English, 简体中文, 繁體中文, 日本語, Français, Español, Italiano, Türkçe, Deutsch, Tiếng Việt, Русский, Čeština, 한국어, Indonesia
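The chat-history compression mentioned above works by summarizing older turns instead of resending them verbatim. Here is a minimal TypeScript sketch of the idea, assuming a generic summarize(text) helper backed by the chat model; the names and thresholds are illustrative, not the project's actual implementation:
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough token estimate; a real implementation would use a proper tokenizer.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

async function compressHistory(
  messages: ChatMessage[],
  summarize: (text: string) => Promise<string>,
  maxHistoryTokens = 2000,
): Promise<ChatMessage[]> {
  const total = messages.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  if (total <= maxHistoryTokens) return messages;

  // Keep the most recent turns verbatim and fold everything older into one summary message.
  const recent = messages.slice(-6);
  const older = messages.slice(0, -6);
  const summary = await summarize(older.map((m) => `${m.role}: ${m.content}`).join("\n"));

  return [{ role: "system", content: `Summary of earlier conversation: ${summary}` }, ...recent];
}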
Roadmap
- System Prompt: pin a user-defined prompt as the system prompt #138
- User Prompt: user can edit and save custom prompts to prompt list
- Prompt Template: create a new chat with pre-defined in-context prompts #993
- Share as image, share to ShareGPT #1741
- Desktop App with tauri
- Self-host Model: Fully compatible with RWKV-Runner, as well as server deployments of LocalAI (llama/gpt4all/rwkv/vicuna/koala/gpt4all-j/cerebras/falcon/dolly, etc.), or api-for-open-llm
- Artifacts: Easily preview, copy and share generated content/webpages through a separate window #5092
- Plugins: support network search, calculator, other APIs, etc. #165 #5353
- Local knowledge base
What's New
- 🚀 v2.15.4 The application supports calling the LLM API through Tauri's local fetch for greater security. #5379
- 🚀 v2.15.0 Now supports Plugins! Read this: NextChat-Awesome-Plugins
- 🚀 v2.14.0 Now supports Artifacts & SD
- 🚀 v2.10.1 Supports the Google Gemini Pro model.
- 🚀 v2.9.11 You can use an Azure endpoint now.
- 🚀 v2.8 Now we have a client that runs across all platforms!
- 🚀 v2.7 Share conversations as images, or share to ShareGPT!
- 🚀 v2.0 Released; you can now create prompt templates and turn your ideas into reality! Read this: ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting.
- 💡 If you want to use the project more conveniently wherever you are, try this desktop plugin: https://github.com/mushan0x0/AI0x0.com
Get Started
- Get an OpenAI API key;
- Click the deploy button, and remember that CODE is your page password;
- Enjoy :)
FAQ
Keep Updated
If you have deployed your own project with just one click following the steps above, you may encounter the issue of "Updates Available" constantly showing up. This is because Vercel will create a new project for you by default instead of forking this project, resulting in the inability to detect updates correctly.
We recommend that you follow the steps below to re-deploy:
- Delete the original repository;
- Use the fork button in the upper right corner of the page to fork this project;
- Choose and deploy it on Vercel again; please see the detailed tutorial.
Enable Automatic Updates
If Upstream Sync fails, please update the code manually.
After forking the project, due to limitations imposed by GitHub, you need to manually enable Workflows and the Upstream Sync Action on the Actions page of the forked project. Once enabled, automatic updates will be scheduled every hour.
Manually Updating Code
If you want to update instantly, you can check out the GitHub documentation to learn how to synchronize a forked project with upstream code.
You can star or watch this project or follow the author to get release notifications in time.
Access Password
This project provides limited access control. Please add an environment variable named CODE on the Vercel environment variables page. The value should be comma-separated passwords, like this:
code1,code2,code3
After adding or modifying this environment variable, please redeploy the project for the changes to take effect.
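For illustration, a server-side check against this variable can be as simple as the following TypeScript sketch; the CODE name and comma-separated format come from this section, while the function itself is hypothetical rather than the project's actual middleware:
// Hypothetical helper: validate a client-supplied access code against the CODE variable.
export function isValidAccessCode(clientCode: string | undefined): boolean {
  const configured = (process.env.CODE ?? "")
    .split(",")
    .map((code) => code.trim())
    .filter(Boolean);

  // If no codes are configured, access control is effectively disabled.
  if (configured.length === 0) return true;
  return clientCode !== undefined && configured.includes(clientCode.trim());
}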
Environment Variables
Chinese documentation: how to configure the API key, access password, and API proxy.
CODE (optional)
Access password, separated by commas.

OPENAI_API_KEY (required)
Your OpenAI API key; join multiple API keys with commas.

BASE_URL (optional)
Default: https://api.openai.com. Example: http://your-openai-proxy.com. Overrides the OpenAI API request base URL.

OPENAI_ORG_ID (optional)
Specify the OpenAI organization ID.

AZURE_URL (optional)
Azure deploy URL. Example: https://{azure-resource-url}/openai

AZURE_API_KEY (optional)
Azure API key.

AZURE_API_VERSION (optional)
Azure API version; find it in the Azure documentation.

GOOGLE_API_KEY (optional)
Google Gemini Pro API key.

GOOGLE_URL (optional)
Google Gemini Pro API URL.

ANTHROPIC_API_KEY (optional)
Anthropic Claude API key.

ANTHROPIC_API_VERSION (optional)
Anthropic Claude API version.

ANTHROPIC_URL (optional)
Anthropic Claude API URL.

BAIDU_API_KEY (optional)
Baidu API key.

BAIDU_SECRET_KEY (optional)
Baidu secret key.

BAIDU_URL (optional)
Baidu API URL.

BYTEDANCE_API_KEY (optional)
ByteDance API key.

BYTEDANCE_URL (optional)
ByteDance API URL.

ALIBABA_API_KEY (optional)
Alibaba Cloud API key.

ALIBABA_URL (optional)
Alibaba Cloud API URL.

IFLYTEK_URL (optional)
iFlytek API URL.

IFLYTEK_API_KEY (optional)
iFlytek API key.

IFLYTEK_API_SECRET (optional)
iFlytek API secret.

HIDE_USER_API_KEY (optional)
Default: Empty. If you do not want users to input their own API key, set this value to 1.

DISABLE_GPT4 (optional)
Default: Empty. If you do not want users to use GPT-4, set this value to 1.

ENABLE_BALANCE_QUERY (optional)
Default: Empty. If you want users to be able to query their balance, set this value to 1.

DISABLE_FAST_LINK (optional)
Default: Empty. If you want to disable parsing settings from the URL, set this value to 1.

CUSTOM_MODELS (optional)
Default: Empty. Example: +llama,+claude-2,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo means add llama and claude-2 to the model list, remove gpt-3.5-turbo from the list, and display gpt-4-1106-preview as gpt-4-turbo.
To control custom models, use + to add a custom model, - to hide a model, and name=displayName to customize a model's display name, separated by commas. Use -all to disable all default models and +all to enable all default models.
For Azure: use modelName@Azure=deploymentName to customize the model name and deployment name. Example: +gpt-3.5-turbo@Azure=gpt35 will show the option gpt35(Azure) in the model list. If you can only use Azure models, -all,+gpt-3.5-turbo@Azure=gpt35 will make gpt35(Azure) the only option in the model list.
For ByteDance: use modelName@bytedance=deploymentName to customize the model name and deployment name. Example: +Doubao-lite-4k@bytedance=ep-xxxxx-xxx will show the option Doubao-lite-4k(ByteDance) in the model list. (A parsing sketch for this syntax follows at the end of this section.)

DEFAULT_MODEL (optional)
Change the default model.

WHITE_WEBDAV_ENDPOINTS (optional)
Use this option to increase the number of WebDAV service addresses you are allowed to access. Format requirements:
- Each address must be a complete endpoint: https://xxxx/yyy
- Multiple addresses are separated by commas

DEFAULT_INPUT_TEMPLATE (optional)
Customize the default template used to initialize the User Input Preprocessing configuration item in Settings.

STABILITY_API_KEY (optional)
Stability API key.

STABILITY_URL (optional)
Customize the Stability API URL.
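To make the CUSTOM_MODELS syntax concrete, here is a hedged TypeScript sketch of a parser for it; the +, -, name=displayName and name@provider rules follow this README, but the data structure and function are illustrative rather than the project's actual code:
interface CustomModelRule {
  name: string;          // model name, e.g. "llama" or "all"
  displayName: string;   // what the UI should show
  provider?: string;     // e.g. "Azure" or "bytedance"
  action: "add" | "hide";
}

// Parse a value like "+llama,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo,+gpt-3.5-turbo@Azure=gpt35"
export function parseCustomModels(value: string): CustomModelRule[] {
  return value
    .split(",")
    .map((entry) => entry.trim())
    .filter(Boolean)
    .map((entry) => {
      const action = entry.startsWith("-") ? "hide" : "add";
      const body = entry.replace(/^[+-]/, "");
      const [left, displayName] = body.split("=");
      const [name, provider] = left.split("@");
      return { name, displayName: displayName ?? name, provider, action };
    });
}
In this sketch the rules are applied in order, so a value like -all,+gpt-3.5-turbo@Azure=gpt35 would leave gpt35(Azure) as the only visible option, matching the example above.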
Requirements
Node.js >= 18, Docker >= 20
Development
Before starting development, you must create a new .env.local file at the project root and put your API key in it:
OPENAI_API_KEY=<your api key here>
# if you are not able to access openai service, use this BASE_URL
BASE_URL=https://chatgpt1.nextweb.fun/api/proxy
Local Development
# 1. install nodejs and yarn first
# 2. config local env vars in `.env.local`
# 3. run
yarn install
yarn dev
Deployment
Docker (Recommended)
docker pull yidadaa/chatgpt-next-web
docker run -d -p 3000:3000 \
-e OPENAI_API_KEY=sk-xxxx \
-e CODE=your-password \
yidadaa/chatgpt-next-web
You can start the service behind a proxy:
docker run -d -p 3000:3000 \
-e OPENAI_API_KEY=sk-xxxx \
-e CODE=your-password \
-e PROXY_URL=http://localhost:7890 \
yidadaa/chatgpt-next-web
If your proxy needs a password, use:
-e PROXY_URL="http://127.0.0.1:7890 user pass"
Shell
bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh)
Synchronizing Chat Records (UpStash)
| 简体中文 | English | Italiano | 日本語 | 한국어
Documentation
Please go to the docs directory for more documentation.
- Deploy with cloudflare (Deprecated)
- Frequently Asked Questions
- How to add a new translation
- How to use Vercel (No English)
- User Manual (Only Chinese, WIP)
Screenshots
Translation
If you want to add a new translation, read this document.
Donation
Special Thanks
Sponsor
Only users who donated 100 RMB or more are listed.
@mushan0x0 @ClarenceDan @zhangjia @hoochanlon @relativequantum @desenmeng @webees @chazzhou @hauy @Corwin006 @yankunsong @ypwhs @fxxxchao @hotic @WingCH @jtung4 @micozhu @jhansion @Sha1rholder @AnsonHyq @synwith @piksonGit @ouyangzhiping @wenjiavv @LeXwDeX @Licoy @shangmin2009
Contributors
LICENSE