
chatgpt-web-dev / chatgpt-web

A third-party ChatGPT web UI built with Express and Vue3, based on the official OpenAI completion API.


Top Related Projects

  • Minimal web UI for ChatGPT.
  • ChatGPT demo web page built with Express and Vue3
  • ✨ Local and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
  • GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
  • 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
  • The official gpt4free repository | various collection of powerful language models

Quick Overview

ChatGPT-Web is an open-source project that provides a web interface for interacting with OpenAI's ChatGPT model. It allows users to deploy their own ChatGPT-like web application with customizable features and a user-friendly interface.

Pros

  • Easy to deploy and self-host
  • Customizable UI and features
  • Supports multiple API endpoints (OpenAI, Azure, Claude, etc.)
  • Active community and frequent updates

Cons

  • Requires an API key from OpenAI or another supported provider
  • May have higher latency than the official ChatGPT interface
  • Fewer advanced features than paid ChatGPT Plus
  • Potential for misuse if not properly configured

Getting Started

  1. Clone the repository:

    git clone https://github.com/Chanzhaoyu/chatgpt-web.git
    
  2. Install dependencies:

    cd chatgpt-web
    pnpm install
    
  3. Configure environment variables: Create a .env file in the root directory and add your OpenAI API key:

    OPENAI_API_KEY=your_api_key_here
    
  4. Start the development server:

    pnpm dev
    
  5. Build for production:

    pnpm build
    

For more detailed instructions and configuration options, refer to the project's README.md file.
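As a rough illustration of where that OPENAI_API_KEY ends up, here is a minimal sketch of a Node/Express endpoint reading the key from .env via dotenv. The route name and port are hypothetical; the project's actual service code is more involved.

require('dotenv').config()
const express = require('express')

const app = express()
app.use(express.json())

// Hypothetical endpoint: verify the key from .env is present before calling OpenAI.
app.post('/api/chat', (req, res) => {
  const apiKey = process.env.OPENAI_API_KEY
  if (!apiKey)
    return res.status(500).json({ error: 'OPENAI_API_KEY is not configured' })
  // Forward req.body.message to the OpenAI API here using apiKey.
  res.json({ ok: true })
})

app.listen(3002, () => console.log('Server running on port 3002'))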

Competitor Comparisons

Minimal web UI for ChatGPT.

Pros of chatgpt-demo

  • Lightweight and easy to deploy
  • Supports multiple languages out of the box
  • Clean and modern user interface

Cons of chatgpt-demo

  • Limited customization options
  • Fewer advanced features compared to chatgpt-web
  • May require more setup for complex use cases

Code Comparison

chatgpt-demo:

import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import { VitePWA } from 'vite-plugin-pwa'

export default defineConfig({
  plugins: [vue(), VitePWA()],
})

chatgpt-web:

const express = require('express');
const path = require('path');
const app = express();

app.use(express.static(path.join(__dirname, 'public')));
app.listen(3000, () => console.log('Server running on port 3000'));

The code snippets show different approaches:

  • chatgpt-demo uses Vite for building and Vue.js for the frontend
  • chatgpt-web uses Express.js for serving static files and handling server-side logic

Both projects aim to provide a web interface for ChatGPT, but they differ in their implementation and feature sets. chatgpt-demo focuses on simplicity and ease of use, while chatgpt-web offers more advanced features and customization options.

ChatGPT demo web page built with Express and Vue3

Pros of chatgpt-web

  • More active development with frequent updates and bug fixes
  • Supports multiple API endpoints, including Azure OpenAI
  • Offers a wider range of customization options for appearance and functionality

Cons of chatgpt-web

  • Slightly more complex setup process due to additional features
  • May have higher resource requirements for hosting and deployment
  • Some users report occasional stability issues with certain features

Code Comparison

chatgpt-web:

const apiKey = import.meta.env.OPENAI_API_KEY
const model = import.meta.env.OPENAI_API_MODEL
const temperature = import.meta.env.OPENAI_API_TEMPERATURE

chatgpt-web-dev:

const API_KEY = process.env.OPENAI_API_KEY
const MODEL = 'gpt-3.5-turbo'
const TEMPERATURE = 0.7

The code snippets show differences in environment variable handling and default model settings. chatgpt-web uses Vite's import.meta.env for configuration, while chatgpt-web-dev uses Node.js process.env. chatgpt-web also allows for more flexible model and temperature settings through environment variables.


✨ Local and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows

Pros of NextChat

  • Built with Next.js, offering better performance and SEO optimization
  • More modern UI with a sleek, responsive design
  • Supports multiple API endpoints and models

Cons of NextChat

  • Potentially more complex setup due to Next.js framework
  • May require more resources to run compared to the simpler chatgpt-web

Code Comparison

NextChat (TypeScript):

import { NextApiRequest, NextApiResponse } from 'next'
import { ChatGPTAPI } from 'chatgpt'

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })
  // ... rest of the code
}

chatgpt-web (JavaScript):

const express = require('express')
const { Configuration, OpenAIApi } = require('openai')

const app = express()
const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY })
const openai = new OpenAIApi(configuration)

The code snippets show that NextChat uses Next.js API routes and TypeScript, while chatgpt-web uses Express.js and plain JavaScript. NextChat's approach may offer better type safety and integration with the Next.js ecosystem, but chatgpt-web's setup is simpler and more straightforward for developers familiar with Express.js.

Both projects aim to provide a web interface for ChatGPT, but NextChat offers a more feature-rich experience with its modern tech stack, while chatgpt-web focuses on simplicity and ease of use.

GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.

Pros of ChuanhuChatGPT

  • More extensive language support, including Chinese
  • Advanced features like API key management and model selection
  • Active development with frequent updates

Cons of ChuanhuChatGPT

  • More complex setup process
  • Heavier resource requirements due to additional features

Code Comparison

ChuanhuChatGPT:

def predict(self, input, chatbot, stream = False, use_websearch=False, files = []):
    if stream:
        return self.predict_stream(input, chatbot, use_websearch, files)
    else:
        return self.predict_no_stream(input, chatbot, use_websearch, files)

chatgpt-web:

async function sendMessage(message) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  return response.json();
}

The code snippets highlight the different approaches:

  • ChuanhuChatGPT uses Python and offers more flexibility with streaming and web search options
  • chatgpt-web uses JavaScript and focuses on a simpler API call structure

Overall, ChuanhuChatGPT offers more features and language support but requires more setup, while chatgpt-web provides a simpler, more lightweight solution.


🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

Pros of ChatGPT

  • Desktop application with cross-platform support (Windows, macOS, Linux)
  • Offers additional features like export/import of chat history and proxy settings
  • Regular updates and active development

Cons of ChatGPT

  • Requires installation and system resources
  • May have a steeper learning curve for non-technical users
  • Limited customization options compared to web-based alternatives

Code Comparison

ChatGPT (Tauri-based desktop app):

#[tauri::command]
fn greet(name: &str) -> String {
    format!("Hello, {}! You've been greeted from Rust!", name)
}

chatgpt-web (Vue.js-based web app):

<template>
  <div>
    <h1>{{ greeting }}</h1>
    <input v-model="name" @keyup.enter="greet" />
  </div>
</template>

<script>
export default {
  data() {
    return {
      name: '',
      greeting: 'Enter your name'
    }
  },
  methods: {
    greet() {
      this.greeting = `Hello, ${this.name}!`
    }
  }
}
</script>

The code comparison highlights the different approaches: ChatGPT uses Rust for backend logic, while chatgpt-web utilizes Vue.js for frontend interactivity. This reflects the fundamental difference between a desktop application and a web-based solution.


The official gpt4free repository | various collection of powerful language models

Pros of gpt4free

  • Offers free access to GPT-4 and other AI models without API keys
  • Supports multiple providers and model types
  • Includes examples and implementations in various programming languages

Cons of gpt4free

  • May have legal and ethical concerns due to unauthorized API usage
  • Less stable and reliable compared to official APIs
  • Requires more frequent updates to maintain functionality

Code Comparison

gpt4free:

from g4f import ChatCompletion

response = ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response)

chatgpt-web:

const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

The gpt4free code snippet demonstrates how to use the library to generate responses without an API key, while the chatgpt-web code shows the setup for using the official OpenAI API with proper authentication.
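To make the comparison more parallel, the chatgpt-web style setup above could then issue a request roughly like this; this is a sketch against the legacy openai v3 Node SDK shown in the snippet, not the project's exact code:

// Sketch: a chat request with the legacy openai v3 Node SDK configured above.
async function ask(question) {
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: question }],
  });
  return completion.data.choices[0].message.content;
}

ask("Hello, how are you?").then(console.log);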


README

ChatGPT Web

中文 | English

About

[!IMPORTANT] This project is a fork of Chanzhaoyu/chatgpt-web

Because the original author preferred not to introduce a database dependency, this permanent fork was created for independent development. See the linked discussion for details.

Thanks again to Chanzhaoyu for his contribution to open source 🙏

Some additional features have been added:

[✓] Registration & login & password reset & 2FA

[✓] Synchronized chat history

[✓] Setting the apikey from the front-end page

[✓] Custom sensitive words

[✓] A dedicated prompt per conversation

[✓] User management

[✓] Random selection among multiple keys

[✓] Conversation count limits & per-user limits & redeemable quota

[✓] SSO login via the auth proxy feature (combined with a third-party authentication reverse proxy, this enables login via LDAP/OIDC/SAML and similar protocols)

[!CAUTION] Disclaimer: this project is published only on GitHub, under the MIT license, free of charge and intended for open-source learning. There are no account sales, paid services, discussion groups, or chat groups of any kind associated with it. Beware of scams.

Screenshots

(Screenshots: cover pages, user limit, manual limit setting, and gift-card database views)

Introduction

Supports dual modes, providing two unofficial ChatGPT API methods

Method | Free? | Reliability | Quality
ChatGPTAPI (gpt-3.5-turbo-0301) | No | Reliable | Relatively dumb
ChatGPTUnofficialProxyAPI (web accessToken) | Yes | Relatively unreliable | Smart

Comparison:

  1. ChatGPTAPI uses gpt-3.5-turbo and calls ChatGPT through the official OpenAI API
  2. ChatGPTUnofficialProxyAPI accesses ChatGPT's backend API through an unofficial proxy server and bypasses Cloudflare (depends on third-party servers and is rate-limited); a brief code sketch follows below
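As a rough sketch of how these two modes are typically constructed, for example with the `chatgpt` npm package (the values are placeholders and the project's own wiring may differ):

// Sketch: the two access modes, using the `chatgpt` npm package (placeholder values).
import { ChatGPTAPI, ChatGPTUnofficialProxyAPI } from 'chatgpt'

// Mode 1: official OpenAI API, billed per token, reliable.
const officialApi = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
})

// Mode 2: unofficial web backend through a community reverse proxy, free but less reliable.
const proxyApi = new ChatGPTUnofficialProxyAPI({
  accessToken: process.env.OPENAI_ACCESS_TOKEN,
  apiReverseProxyUrl: 'https://ai.fakeopen.com/api/conversation',
})

const res = await officialApi.sendMessage('Hello')
console.log(res.text)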

Warnings:

  1. You should use the API method first
  2. When using the API, if the network is unreachable, it is most likely blocked in mainland China; you need to set up your own proxy. Never use someone else's public proxy, which is dangerous.
  3. When using the accessToken method, the reverse proxy exposes your access token to a third party. This should not cause any harm, but consider the risk before using this method.
  4. When using an accessToken, a proxy is used whether your machine is inside or outside mainland China. The default proxy is pengzhile's https://ai.fakeopen.com/api/conversation; this is not a backdoor or a wiretap, but be aware of it before use unless you are able to get past the Cloudflare check yourself. Community proxies (note: only these two are recommended; vet any other third-party sources yourself)
  5. When publishing the project to the public network, you should set the AUTH_SECRET_KEY variable to password-protect access, and you should also change the title in index.html to avoid being found through keyword searches.

Switching between methods:

  1. Open the service/.env.example file and copy its contents to a service/.env file
  2. To use the OpenAI API Key, fill in the OPENAI_API_KEY field (get an apiKey)
  3. To use the Web API, fill in the OPENAI_ACCESS_TOKEN field (get an accessToken)
  4. When both are present, the OpenAI API Key takes priority

Environment variables:

For the full list of configuration variables, see /service/.env.example or the Environment Variables section below.

Roadmap

[✓] Dual models

[✓] Multi-session storage and context logic

[✓] Formatting and beautifying code-like message types

[✓] User registration and login

[✓] Setting the apikey and other details from the front-end page

[✓] Data import and export

[✓] Saving messages as local images

[✓] Multi-language interface

[✓] Interface themes

[✗] More...

Prerequisites

Node

node version ^16 || ^18 || ^20 || ^22 is required; you can use nvm to manage multiple local node versions

node -v

PNPM

If you have not installed pnpm yet

npm install pnpm -g

Fill in the key

Get an OpenAI API Key or accessToken and fill in the local environment variables (see the switching instructions above)

# service/.env 文件

# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=

# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response
OPENAI_ACCESS_TOKEN=

Install dependencies

To reduce the learning burden for back-end developers, the front-end workspace mode is not used; the code is kept in separate folders instead. If you only need the front-end page for secondary development, simply delete the service folder.

Backend

Enter the /service folder and run the following command

pnpm install

Frontend

Run the following command in the root directory

pnpm bootstrap

Run in a test environment

Backend service

Enter the /service folder and run the following command

pnpm start

Frontend web page

Run the following command in the root directory

pnpm dev

Environment Variables

Available for API mode:

  • OPENAI_API_KEY and OPENAI_ACCESS_TOKEN: choose one of the two
  • OPENAI_API_BASE_URL: sets the API base URL, optional, default: https://api.openai.com
  • OPENAI_API_DISABLE_DEBUG: disables the API debug log, optional, default: empty (not disabled)

Available for ACCESS_TOKEN mode:

  • OPENAI_ACCESS_TOKEN and OPENAI_API_KEY: choose one of the two; when both are present, OPENAI_API_KEY takes priority
  • API_REVERSE_PROXY: sets the reverse proxy, optional, default: https://ai.fakeopen.com/api/conversation, community-provided (note: only these two are recommended; vet any other third-party sources yourself)

General:

  • AUTH_SECRET_KEY: access password, optional
  • MAX_REQUEST_PER_HOUR: maximum requests per hour, optional, default unlimited
  • TIMEOUT_MS: timeout in milliseconds, optional
  • SOCKS_PROXY_HOST: takes effect together with SOCKS_PROXY_PORT, optional
  • SOCKS_PROXY_PORT: takes effect together with SOCKS_PROXY_HOST, optional
  • HTTPS_PROXY: supports http, https, and socks5, optional
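Putting several of these together, a hypothetical service/.env might look like this (all values are placeholders):

# service/.env (example values only)
OPENAI_API_KEY=sk-xxxxxxxx
OPENAI_API_BASE_URL=https://api.openai.com
AUTH_SECRET_KEY=your-secret
MAX_REQUEST_PER_HOUR=100
TIMEOUT_MS=60000
SOCKS_PROXY_HOST=127.0.0.1
SOCKS_PROXY_PORT=1080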

Packaging

Using Docker

Docker parameter example

Docker build & Run

GIT_COMMIT_HASH=`git rev-parse HEAD`
RELEASE_VERSION=`git branch --show-current`
docker build --build-arg GIT_COMMIT_HASH=${GIT_COMMIT_HASH} --build-arg RELEASE_VERSION=${RELEASE_VERSION} -t chatgpt-web .

# Run in the foreground
# If mongodb runs on the host machine, use MONGODB_URL=mongodb://host.docker.internal:27017/chatgpt
docker run --name chatgpt-web --rm -it -p 3002:3002 --env OPENAI_API_KEY=your_api_key --env MONGODB_URL=your_mongodb_url chatgpt-web

# Run in the background
docker run --name chatgpt-web -d -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key --env MONGODB_URL=your_mongodb_url chatgpt-web

# Access URL
http://localhost:3002/

Docker compose

Docker Hub address

version: '3'

services:
  app:
    image: chatgptweb/chatgpt-web # always the latest tag; re-pull this image to update
    container_name: chatgptweb
    restart: unless-stopped
    ports:
      - 3002:3002
    depends_on:
      - database
    environment:
      TZ: Asia/Shanghai
      # Maximum requests per hour, optional, default unlimited
      MAX_REQUEST_PER_HOUR: 0
      # JWT secret for access control, optional; if non-empty, login is enabled and MONGODB_URL must also be set
      AUTH_SECRET_KEY: xxx
      # Site title
      SITE_TITLE: ChatGpt Web
      # MongoDB connection string
      MONGODB_URL: 'mongodb://chatgpt:xxxx@database:27017'
      # Password hashing salt, used once registration is enabled
      PASSWORD_MD5_SALT: xxx
      # Super administrator email, used once registration is enabled
      ROOT_USER: me@example.com
      # Whether registration is open; must be enabled at first (otherwise even the administrator cannot register) and can be disabled later
      REGISTER_ENABLED: true
      # More options can be configured from the admin page after registering the administrator account
    links:
      - database

  database:
    image: mongo
    container_name: chatgptweb-database
    restart: unless-stopped
    ports:
      - '27017:27017'
    expose:
      - '27017'
    volumes:
      - mongodb:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: chatgpt
      MONGO_INITDB_ROOT_PASSWORD: xxxx
      MONGO_INITDB_DATABASE: chatgpt

volumes:
  mongodb: {}

  • OPENAI_API_BASE_URL: optional, available when OPENAI_API_KEY is set

Preventing crawlers

nginx

Add the configuration below to your nginx configuration file. You can also refer to the docker-compose/nginx/nginx.conf file for an example of adding the anti-crawler rule.

    # Block crawlers
    if ($http_user_agent ~* "360Spider|JikeSpider|Spider|spider|bot|Bot|2345Explorer|curl|wget|webZIP|qihoobot|Baiduspider|Googlebot|Googlebot-Mobile|Googlebot-Image|Mediapartners-Google|Adsbot-Google|Feedfetcher-Google|Yahoo! Slurp|Yahoo! Slurp China|YoudaoBot|Sosospider|Sogou spider|Sogou web spider|MSNBot|ia_archiver|Tomato Bot|NSPlayer|bingbot"){
      return 403;
    }

Deploying with Railway

Deploy on Railway

See this issue for a detailed tutorial: https://github.com/Kerwin1202/chatgpt-web/issues/266

Note: changing environment variables on Railway triggers a redeploy

Manual packaging

Backend service

If you do not need the node backend of this project, you can skip the following steps

Copy the service folder to a server that has a node environment.

# Install
pnpm install

# Build
pnpm build

# Run
pnpm prod

PS: You can also skip the build step and run pnpm start directly on the server

Frontend web page

1. Change VITE_GLOB_API_URL in the root .env file to your actual backend API address

2. Run the following command in the root directory, then copy the files from the dist folder to the root directory of your web server (see reference information)

pnpm build
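For example, the root .env might contain something like the following (the URL is a placeholder; point it at the address where your backend service is actually reachable, and adjust the path to match your deployment):

# .env in the project root (placeholder value)
VITE_GLOB_API_URL=https://your-backend.example.com/api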

Auth Proxy Mode

[!WARNING] This feature is only intended for experienced operators deploying it while integrating an in-house account management system. Misconfiguration may lead to security risks.

Set the environment variable AUTH_PROXY_ENABLED=true to enable auth proxy mode.

After enabling this feature, make sure chatgpt-web can only be reached through the reverse proxy.

The reverse proxy performs the authentication and, when forwarding a request, attaches a header that identifies the user. The default header is X-Email, and it can be customized via the AUTH_PROXY_HEADER_NAME environment variable.

If your current IdP uses the LDAP protocol, consider authelia.

If your current IdP uses the OIDC protocol, consider oauth2-proxy.
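As an illustration only (not a vetted production configuration), an nginx fragment in the oauth2-proxy auth_request style could forward the authenticated user to chatgpt-web roughly like this:

    # Illustrative nginx fragment: authenticate via oauth2-proxy, then forward the
    # user's email to chatgpt-web in the X-Email header. Review carefully before use.
    location / {
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;

        # Requires oauth2-proxy to be started with --set-xauthrequest
        auth_request_set $email $upstream_http_x_auth_request_email;
        proxy_set_header X-Email $email;

        proxy_pass http://127.0.0.1:3002;
    }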

FAQ

Q: Why do Git commits keep failing?

A: Commit messages are validated; please follow the Commit Guidelines.

Q: If I only use the front-end page, where do I change the request endpoint?

A: In the VITE_GLOB_API_URL field of the .env file in the root directory.

Q: Why does everything turn red when saving files?

A: In VS Code, install the project's recommended extensions, or install the Eslint extension manually.

Q: Why is there no typewriter effect on the front end?

A: One possible cause is an Nginx reverse proxy with buffering enabled: Nginx then buffers a certain amount of data from the backend before sending it to the browser. Try adding proxy_buffering off; to the proxy parameters and then reload Nginx. Other web servers can be configured in the same way.
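For example, a minimal nginx location for the chatgpt-web upstream might look like this (adjust the upstream address to your deployment):

    location / {
        proxy_pass http://127.0.0.1:3002;
        # Disable response buffering so streamed replies reach the browser immediately
        proxy_buffering off;
    }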

Contributing

Please read the Contributing Guidelines before contributing

Thanks to everyone who has contributed!

Contributors Image

Star History

Star History Chart

Sponsorship

If you find this project helpful, please give it a Star. And if circumstances allow, a small donation is welcome. Thank you very much for your support~

WeChat

WeChat Pay

Alipay


Thanks to DigitalOcean for sponsoring the open-source credits used to run the infrastructure servers

digitalocean

License

MIT © github.com/chatgpt-web-dev Contributors