lobe-chat
🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Azure / DeepSeek), knowledge base features (file upload / knowledge management / RAG), multi-modal capabilities (vision / TTS), and a plugin system. One-click FREE deployment of your private ChatGPT/Claude application.
Top Related Projects
Robust Speech Recognition via Large-Scale Weak Supervision
Port of OpenAI's Whisper model in C/C++
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
A Gradio web UI for Large Language Models.
Quick Overview
Lobe Chat is an open-source chatbot framework designed to provide an out-of-the-box AI assistant experience. It offers a user-friendly interface for interacting with various AI models, supports plugins, and allows for easy deployment and customization.
Pros
- User-friendly interface with a modern design
- Supports multiple AI models and easy integration of new ones
- Extensible through a plugin system
- Easy deployment options, including one-click deployment to Vercel
Cons
- Limited documentation for advanced customization
- Requires some technical knowledge for self-hosting and configuration
- May have performance limitations with large-scale deployments
- Dependency on third-party AI services for core functionality
Getting Started
To get started with Lobe Chat, follow these steps:

1. Clone the repository:

```bash
git clone https://github.com/lobehub/lobe-chat.git
```

2. Install dependencies:

```bash
cd lobe-chat
pnpm install
```

3. Set up environment variables:
   - Copy `.env.example` to `.env.local`
   - Add your OpenAI API key to `OPENAI_API_KEY`

4. Run the development server:

```bash
pnpm dev
```

5. Open `http://localhost:3000` in your browser to use Lobe Chat.
For production deployment, you can use the one-click deploy button on the project's README to deploy to Vercel, or follow the self-hosting instructions provided in the documentation.
Competitor Comparisons
Robust Speech Recognition via Large-Scale Weak Supervision
Pros of Whisper
- Specialized in speech recognition and transcription
- Supports multiple languages and accents
- Backed by OpenAI's extensive research and resources
Cons of Whisper
- Limited to audio processing tasks
- Requires more computational resources for real-time transcription
- Less user-friendly interface for non-technical users
Code Comparison
Whisper:
```python
import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
```
Lobe Chat:
```typescript
import { LobeChat } from '@lobehub/chat';

const chat = new LobeChat();
const response = await chat.sendMessage('Hello, how are you?');
console.log(response);
```
Key Differences
- Whisper focuses on speech-to-text, while Lobe Chat is a general-purpose chatbot framework
- Lobe Chat offers a more user-friendly interface for creating conversational AI
- Whisper provides more accurate and multilingual audio transcription capabilities
- Lobe Chat is better suited for building interactive chat applications
Use Cases
Whisper is ideal for:
- Transcribing podcasts or interviews
- Generating subtitles for videos
- Voice command systems
Lobe Chat is better for:
- Creating custom chatbots
- Building conversational interfaces
- Prototyping AI-powered chat applications
Port of OpenAI's Whisper model in C/C++
Pros of whisper.cpp
- Lightweight and efficient C++ implementation of OpenAI's Whisper model
- Focuses specifically on speech recognition and transcription tasks
- Can run on various platforms, including mobile devices and low-power hardware
Cons of whisper.cpp
- Limited to speech-to-text functionality, lacking chat or conversational features
- Requires more technical knowledge to set up and use compared to Lobe Chat
- Less user-friendly interface for non-technical users
Code Comparison
whisper.cpp:
```cpp
#include "whisper.h"
#include <vector>

int main(int argc, char ** argv) {
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");

    // 16 kHz mono PCM samples (loading them from a WAV file is omitted here)
    std::vector<float> pcmf32;

    whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    whisper_full(ctx, wparams, pcmf32.data(), pcmf32.size());

    whisper_print_timings(ctx);
    whisper_free(ctx);
}
```
Lobe Chat:
```typescript
import { LobeChat } from '@lobehub/chat-sdk';

const chat = new LobeChat();
const response = await chat.sendMessage('Hello, how are you?');
console.log(response);
```
Summary
whisper.cpp is a specialized tool for speech recognition, offering high performance and efficiency. It's ideal for developers working on speech-to-text applications but requires more technical expertise. Lobe Chat, on the other hand, provides a more comprehensive chat interface with a focus on ease of use and conversational AI capabilities. The choice between the two depends on the specific requirements of the project and the user's technical proficiency.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Pros of AutoGPT
- More autonomous and capable of completing complex tasks with minimal human intervention
- Supports a wider range of functionalities, including web browsing and file operations
- Has a larger and more active community, leading to frequent updates and improvements
Cons of AutoGPT
- Steeper learning curve and more complex setup process
- Requires more computational resources and may be slower for simple tasks
- Less focus on user-friendly interface and chat-like interactions
Code Comparison
AutoGPT (Python):
```python
def browse_website(self, url, question):
    summary = summarize_text(url)
    answer = self.ask_chatgpt(f"{summary}\n\nQuestion: {question}")
    return answer
```
Lobe Chat (TypeScript):
```typescript
async function sendMessage(message: string): Promise<string> {
  const response = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({ message }),
  });
  return response.json();
}
```
AutoGPT focuses on autonomous task completion with more complex functions, while Lobe Chat emphasizes a simpler, chat-based interface for user interactions. AutoGPT's code demonstrates web browsing capabilities, whereas Lobe Chat's code shows a straightforward message-sending function.
JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
Pros of JARVIS
- More comprehensive AI system with multimodal capabilities (vision, language, robotics)
- Backed by Microsoft's extensive resources and research expertise
- Designed for broader applications beyond just chat interfaces
Cons of JARVIS
- More complex setup and integration due to its broader scope
- Less focused on user-friendly chat experiences compared to Lobe Chat
- May require more computational resources to run effectively
Code Comparison
JARVIS (Python):
```python
from jarvis import JARVIS

jarvis = JARVIS()
response = jarvis.process_input("What's the weather like today?")
print(response)
```
Lobe Chat (JavaScript):
```javascript
import { LobeChat } from 'lobe-chat';

const chat = new LobeChat();
const response = await chat.sendMessage("What's the weather like today?");
console.log(response);
```
Key Differences
- JARVIS is a more comprehensive AI system, while Lobe Chat focuses on chat interfaces
- Lobe Chat offers a more user-friendly experience for simple conversational AI
- JARVIS provides broader capabilities but may require more setup and resources
- Lobe Chat is likely easier to integrate for developers focusing on chat applications
- JARVIS may be better suited for complex, multimodal AI tasks beyond simple conversations
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Pros of Open-Assistant
- Larger community and more contributors, potentially leading to faster development and diverse features
- Focuses on creating an open-source alternative to ChatGPT, with a broader scope and ambition
- Supports multiple languages and has a more extensive dataset for training
Cons of Open-Assistant
- More complex setup and installation process compared to Lobe Chat
- May require more computational resources due to its larger scale and ambition
- Less focused on providing a simple, user-friendly chat interface
Code Comparison
Open-Assistant (Python):
```python
from oa.assistant import Assistant

assistant = Assistant()
response = assistant.generate_response("Hello, how are you?")
print(response)
```
Lobe Chat (JavaScript):
```javascript
import { LobeChat } from 'lobe-chat';

const chat = new LobeChat();
chat.sendMessage("Hello, how are you?")
  .then(response => console.log(response));
```
While both projects aim to provide conversational AI capabilities, Open-Assistant is more focused on creating a comprehensive, open-source alternative to large language models, whereas Lobe Chat prioritizes a user-friendly chat interface with easy integration. The code examples demonstrate the difference in implementation languages and complexity, with Open-Assistant using Python and Lobe Chat using JavaScript.
A Gradio web UI for Large Language Models.
Pros of text-generation-webui
- More flexible and customizable, supporting a wider range of language models
- Offers advanced features like fine-tuning, training, and model merging
- Provides a comprehensive web interface with various chat modes and extensions
Cons of text-generation-webui
- Steeper learning curve and more complex setup process
- Requires more computational resources to run effectively
- Less focus on a streamlined, user-friendly chat experience
Code Comparison
text-generation-webui:
```python
def generate_reply(
    question, state, stopping_strings=None, is_chat=False, for_ui=False
):
    # Complex generation logic with multiple parameters and options
    ...
```
lobe-chat:
```typescript
export const generateChatCompletion = async (
  messages: ChatMessage[],
  config: ChatConfig,
): Promise<string> => {
  // Simplified chat completion function
};
```
The code comparison highlights the difference in complexity and focus between the two projects. text-generation-webui offers more advanced options and flexibility, while lobe-chat provides a more streamlined approach to chat interactions.
README
Lobe Chat
An open-source, modern-design ChatGPT/LLMs UI/Framework.
Supports speech synthesis, multi-modal interaction, and an extensible (function call) plugin system.
One-click FREE deployment of your private OpenAI ChatGPT/Claude/Gemini/Groq/Ollama chat application.
English · 简体中文 · 日本語 · Official Site · Changelog · Documents · Blog · Feedback
Share LobeChat Repository
Pioneering the new age of thinking and creating. Built for you, the Super Individual.
Table of contents

- Getting Started & Join Our Community
- Features
  1. File Upload/Knowledge Base
  2. Multi-Model Service Provider Support
  3. Local Large Language Model (LLM) Support
  4. Model Visual Recognition
  5. TTS & STT Voice Conversation
  6. Text to Image Generation
  7. Plugin System (Function Calling)
  8. Agent Market (GPTs)
  9. Support Local / Remote Database
  10. Support Multi-User Management
  11. Progressive Web App (PWA)
  12. Mobile Device Adaptation
  13. Custom Themes
  - What's more
- Performance
- Self Hosting
- Ecosystem
- Plugins
- Local Development
- Contributing
- Sponsor
- More Products
Getting Started & Join Our Community
We are a group of e/acc design-engineers, hoping to provide modern design components and tools for AIGC. By adopting the Bootstrapping approach, we aim to provide developers and users with a more open, transparent, and user-friendly product ecosystem.
Whether for users or professional developers, LobeHub will be your AI Agent playground. Please be aware that LobeChat is currently under active development, and feedback is welcome for any issues encountered.
[!IMPORTANT]
Star us and you will receive all release notifications from GitHub without any delay!
Star History
⨠Features
1. File Upload/Knowledge Base
LobeChat supports file upload and knowledge base functionality. You can upload various types of files including documents, images, audio, and video, as well as create knowledge bases, making it convenient for users to manage and search for files. Additionally, you can utilize files and knowledge base features during conversations, enabling a richer dialogue experience.
https://github.com/user-attachments/assets/faa8cf67-e743-4590-8bf6-ebf6ccc34175
[!TIP]
Learn more in LobeChat Knowledge Base Launch - From Now On, Every Step Counts.
2. Multi-Model Service Provider Support
As LobeChat has developed, we have come to appreciate how important a diverse range of model service providers is for meeting the community's needs in AI conversation services. We have therefore expanded our support to multiple model service providers, rather than being limited to a single one, to offer users a richer selection of conversations.
In this way, LobeChat can more flexibly adapt to the needs of different users, while also providing developers with a wider range of choices.
Supported Model Service Providers
We have implemented support for the following model service providers:
- AWS Bedrock: Integrated with AWS Bedrock service, supporting models such as Claude / LLama2, providing powerful natural language processing capabilities. Learn more
- Anthropic (Claude): Accessed Anthropic's Claude series models, including Claude 3 and Claude 2, with breakthroughs in multi-modal capabilities and extended context, setting a new industry benchmark. Learn more
- Google AI (Gemini Pro, Gemini Vision): Access to Google's Gemini series models, including Gemini and Gemini Pro, to support advanced language understanding and generation. Learn more
- Groq: Accessed Groq's AI models, efficiently processing message sequences and generating responses, capable of multi-turn dialogues and single-interaction tasks. Learn more
- OpenRouter: Supports routing of models including Claude 3, Gemma, Mistral, Llama2 and Cohere, with intelligent routing optimization to improve usage efficiency, open and flexible. Learn more
- 01.AI (Yi Model): Integrated the 01.AI models, whose APIs offer fast inference speeds, shortening processing time while maintaining excellent model performance. Learn more
- Together.ai: Over 100 leading open-source Chat, Language, Image, Code, and Embedding models are available through the Together Inference API. For these models you pay just for what you use. Learn more
- ChatGLM: Added the ChatGLM series models from Zhipuai (GLM-4/GLM-4-vision/GLM-3-turbo), providing users with another efficient conversation model choice. Learn more
- Moonshot AI (Dark Side of the Moon): Integrated with the Moonshot series models, an innovative AI startup from China, aiming to provide deeper conversation understanding. Learn more
- Minimax: Integrated the Minimax models, including the MoE model abab6, offers a broader range of choices. Learn more
- DeepSeek: Integrated the DeepSeek series models, from an innovative AI startup in China, designed to balance performance with price. Learn more
- Qwen: Integrated the Qwen series models, including the latest qwen-turbo, qwen-plus, and qwen-max. Learn more
- Novita AI: Access Llama, Mistral, and other leading open-source models at low cost. Engage in uncensored role-play, spark creative discussions, and foster unrestricted innovation. Pay for what you use. Learn more
At the same time, we are also planning to support more model service providers, such as Replicate and Perplexity, to further enrich our service provider library. If you would like LobeChat to support your favorite service provider, feel free to join our community discussion.
3. Local Large Language Model (LLM) Support
To meet the specific needs of users, LobeChat also supports the use of local models based on Ollama, allowing users to flexibly use their own or third-party models.
[!TIP]
Learn more in Using Ollama in LobeChat.
4. Model Visual Recognition
LobeChat now supports OpenAI's latest gpt-4-vision model with visual recognition capabilities, a multimodal intelligence that can perceive visuals. Users can easily upload or drag and drop images into the dialogue box, and the agent will recognize the content of the images and engage in intelligent conversation based on it, creating smarter and more diversified chat scenarios.
This feature opens up new interactive methods, allowing communication to transcend text and include a wealth of visual elements. Whether it's sharing images in daily use or interpreting images within specific industries, the agent provides an outstanding conversational experience.
5. TTS & STT Voice Conversation
LobeChat supports Text-to-Speech (TTS) and Speech-to-Text (STT) technologies, enabling our application to convert text messages into clear voice outputs, allowing users to interact with our conversational agent as if they were talking to a real person. Users can choose from a variety of voices to pair with the agent.
Moreover, TTS offers an excellent solution for those who prefer auditory learning or desire to receive information while busy. In LobeChat, we have meticulously selected a range of high-quality voice options (OpenAI Audio, Microsoft Edge Speech) to meet the needs of users from different regions and cultural backgrounds. Users can choose the voice that suits their personal preferences or specific scenarios, resulting in a personalized communication experience.
6. Text to Image Generation
With support for the latest text-to-image generation technology, LobeChat now allows users to invoke image creation tools directly within conversations with the agent. By leveraging the capabilities of AI tools such as DALL-E 3, MidJourney, and Pollinations, the agents are now equipped to transform your ideas into images.
This enables a more private and immersive creative process, allowing for the seamless integration of visual storytelling into your personal dialogue with the agent.
7. Plugin System (Function Calling)
The plugin ecosystem of LobeChat is an important extension of its core functionality, greatly enhancing the practicality and flexibility of the LobeChat assistant.
By utilizing plugins, LobeChat assistants can obtain and process real-time information, such as searching for web information and providing users with instant and relevant news.
In addition, these plugins are not limited to news aggregation, but can also extend to other practical functions, such as quickly searching documents, generating images, obtaining data from various platforms like Bilibili, Steam, and interacting with various third-party services.
[!TIP]
Learn more in Plugin Usage.
| Recent Submits | Description |
| --- | --- |
| Tongyi wanxiang Image Generator (by YoungTx on 2024-08-09) | This plugin uses Alibaba's Tongyi Wanxiang model to generate images based on text prompts. `image` `tongyi` `wanxiang` |
| Shopping tools (by shoppingtools on 2024-07-19) | Search for products on eBay & AliExpress, find eBay events & coupons. Get prompt examples. `shopping` `e-bay` `ali-express` `coupons` |
| Savvy Trader AI (by savvytrader on 2024-06-27) | Realtime stock, crypto and other investment data. `stock` `analyze` |
| Search1API (by fatwang2 on 2024-05-06) | Search aggregation service, specifically designed for LLMs. `web` `search` |
Total plugins: 50
8. Agent Market (GPTs)
In LobeChat Agent Marketplace, creators can discover a vibrant and innovative community that brings together a multitude of well-designed agents, which not only play an important role in work scenarios but also offer great convenience in learning processes. Our marketplace is not just a showcase platform but also a collaborative space. Here, everyone can contribute their wisdom and share the agents they have developed.
[!TIP]
Via Submit Agents, you can easily submit your agent creations to our platform. Importantly, LobeChat has established a sophisticated automated internationalization (i18n) workflow, capable of seamlessly translating your agent into multiple language versions. This means that no matter what language your users speak, they can experience your agent without barriers.
[!IMPORTANT]
We welcome all users to join this growing ecosystem and participate in the iteration and optimization of agents. Together, we can create more interesting, practical, and innovative agents, further enriching the diversity and practicality of the agent offerings.
| Recent Submits | Description |
| --- | --- |
| AI Agent Generator (by xyftw on 2024-09-10) | Skilled at creating AI Agent character descriptions that meet the needs. `ai-agent` `character-creation` |
| HTML to React (by xingwang02 on 2024-09-10) | Input HTML snippets and convert them into React components. `react` `html` |
| FiveM & QBCore Framework Expert (by heartsiddharth1 on 2024-09-08) | Expertise in FiveM development, QBCore framework, Lua programming, JavaScript, database management, server administration, version control, full-stack web development, DevOps, and community engagement with a focus on performance, security, and best practices. `five-m` `qb-core` `lua` `java-script` `my-sql` `server-management` `git` `full-stack-web-development` `dev-ops` `community-engagement` |
| Nuxt 3/Vue.js Master Developer (by Kadreev on 2024-09-03) | Specialized in full-stack development with Nuxt 3 expertise. `nuxt-3` `vue-js` `full-stack-development` `java-script` `web-applications` |
Total agents: 325
9. Support Local / Remote Database
LobeChat supports the use of both server-side and local databases. Depending on your needs, you can choose the appropriate deployment solution:
- Local database: suitable for users who want more control over their data and privacy protection. LobeChat uses CRDT (Conflict-Free Replicated Data Type) technology to achieve multi-device synchronization. This is an experimental feature aimed at providing a seamless data synchronization experience.
- Server-side database: suitable for users who want a more convenient user experience. LobeChat supports PostgreSQL as a server-side database. For detailed documentation on how to configure the server-side database, please visit Configure Server-side Database.
Regardless of which database you choose, LobeChat can provide you with an excellent user experience.
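For the server-side option, the database connection is typically supplied through environment variables when the container starts. The sketch below is illustrative only: `DATABASE_URL` is an assumed variable name, and the exact configuration keys are listed in the Configure Server-side Database documentation.

```shell
# Illustrative sketch only: DATABASE_URL is an assumed variable name;
# consult the server-side database documentation for the exact keys.
docker run -d -p 3210:3210 \
  -e DATABASE_URL=postgres://user:password@db-host:5432/lobechat \
  -e OPENAI_API_KEY=sk-xxxx \
  --name lobe-chat \
  lobehub/lobe-chat
```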
10. Support Multi-User Management
LobeChat supports multi-user management and provides two main user authentication and management solutions to meet different needs:
- next-auth: LobeChat integrates `next-auth`, a flexible and powerful identity verification library that supports multiple authentication methods, including OAuth, email login, and credential login. With `next-auth`, you can easily implement user registration, login, session management, social login, and other functions to ensure the security and privacy of user data.
- Clerk: For users who need more advanced user management features, LobeChat also supports `Clerk`, a modern user management platform. `Clerk` provides richer functionality, such as multi-factor authentication (MFA), user profile management, and login activity monitoring. With `Clerk`, you get higher security and flexibility and can easily handle complex user management needs.
Regardless of which user management solution you choose, LobeChat can provide you with an excellent user experience and powerful functional support.
11. Progressive Web App (PWA)
We deeply understand the importance of providing a seamless experience for users in today's multi-device environment. Therefore, we have adopted Progressive Web Application (PWA) technology, a modern web technology that elevates web applications to an experience close to that of native apps.
Through PWA, LobeChat can offer a highly optimized user experience on both desktop and mobile devices while maintaining its lightweight and high-performance characteristics. Visually and in terms of feel, we have also meticulously designed the interface to ensure it is indistinguishable from native apps, providing smooth animations, responsive layouts, and adapting to different device screen resolutions.
[!NOTE]
If you are unfamiliar with the installation process of PWA, you can add LobeChat as your desktop application (also applicable to mobile devices) by following these steps:
- Launch the Chrome or Edge browser on your computer.
- Visit the LobeChat webpage.
- In the upper right corner of the address bar, click on the Install icon.
- Follow the instructions on the screen to complete the PWA Installation.
12. Mobile Device Adaptation
We have carried out a series of optimization designs for mobile devices to enhance the user's mobile experience. Currently, we are iterating on the mobile user experience to achieve smoother and more intuitive interactions. If you have any suggestions or ideas, we welcome you to provide feedback through GitHub Issues or Pull Requests.
13. Custom Themes
As a design-engineering-oriented application, LobeChat places great emphasis on users' personalized experiences, hence introducing flexible and diverse theme modes, including a light mode for daytime and a dark mode for nighttime. Beyond switching theme modes, a range of color customization options allow users to adjust the application's theme colors according to their preferences. Whether it's a desire for a sober dark blue, a lively peach pink, or a professional gray-white, users can find their style of color choices in LobeChat.
[!TIP]
The default configuration can intelligently recognize the user's system color mode and automatically switch themes to ensure a consistent visual experience with the operating system. For users who like to manually control details, LobeChat also offers intuitive setting options and a choice between chat bubble mode and document mode for conversation scenarios.
What's more
Beyond these features, LobeChat is also built on a solid technical foundation:
- Quick Deployment: Using the Vercel platform or Docker image, you can deploy with one click and complete the process within 1 minute without any complex configuration.
- Custom Domain: If users have their own domain, they can bind it to the platform for quick access to the dialogue agent from anywhere.
- Privacy Protection: All data is stored locally in the user's browser, ensuring user privacy.
- Exquisite UI Design: With a carefully designed interface, it offers an elegant appearance and smooth interaction. It supports light and dark themes and is mobile-friendly. PWA support provides a more native-like experience.
- Smooth Conversation Experience: Fluid responses ensure a smooth conversation experience. It fully supports Markdown rendering, including code highlighting, LaTeX formulas, Mermaid flowcharts, and more.
⨠more features will be added when LobeChat evolve.
[!NOTE]
You can find our upcoming Roadmap plans in the Projects section.
Performance
[!NOTE]
The complete list of reports can be found in the Lighthouse Reports.

| Desktop | Mobile |
| --- | --- |
| Lighthouse Report | Lighthouse Report |
Self Hosting
LobeChat provides a self-hosted version via Vercel and a Docker image. This allows you to deploy your own chatbot within a few minutes without any prior knowledge.
[!TIP]
Learn more in Build your own LobeChat.
A. Deploying with Vercel, Zeabur or Sealos
If you want to deploy this service yourself on either Vercel or Zeabur, you can follow these steps:
1. Prepare your OpenAI API Key.
2. Click the button below to start deployment: log in directly with your GitHub account, and remember to fill in `OPENAI_API_KEY` (required) and `ACCESS_CODE` (recommended) in the environment variables section.
3. After deployment, you can start using it.
4. Bind a custom domain (optional): the DNS of the domain assigned by Vercel is polluted in some regions; binding a custom domain allows a direct connection.
After Fork
After forking, retain only the upstream sync action in your repository on GitHub and disable all other actions.
Keep Updated
If you have deployed your own project following the one-click deployment steps in the README, you might encounter constant prompts indicating "updates available." This is because Vercel defaults to creating a new project instead of forking this one, resulting in an inability to detect updates accurately.
[!TIP]
We suggest you redeploy using the steps in Auto Sync With Latest.
B. Deploying with Docker
We provide a Docker image for deploying the LobeChat service on your own private device. Use the following command to start the LobeChat service:
```bash
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e ACCESS_CODE=lobe66 \
  --name lobe-chat \
  lobehub/lobe-chat
```
[!TIP]
If you need to use the OpenAI service through a proxy, you can configure the proxy address using the `OPENAI_PROXY_URL` environment variable:
```bash
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
  -e ACCESS_CODE=lobe66 \
  --name lobe-chat \
  lobehub/lobe-chat
```
[!NOTE]
For detailed instructions on deploying with Docker, please refer to the Docker Deployment Guide.
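After the container starts, a generic reachability check (ordinary `curl` usage, not LobeChat-specific tooling) can confirm that the service is listening on the published port:

```shell
# Succeeds once the web UI answers on the published port
curl -fsS http://localhost:3210 >/dev/null && echo "LobeChat is reachable"
```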
Environment Variable
This project provides some additional configuration items set with environment variables:
| Environment Variable | Required | Description | Example |
| --- | --- | --- | --- |
| `OPENAI_API_KEY` | Yes | The API key you obtain from the OpenAI account page | `sk-xxxxxx...xxxxxx` |
| `OPENAI_PROXY_URL` | No | If you manually configure an OpenAI interface proxy, this overrides the default OpenAI API request base URL. The default value is `https://api.openai.com/v1` | `https://api.chatanywhere.cn` or `https://aihubmix.com/v1` |
| `ACCESS_CODE` | No | Adds a password to access this service; use a long password to avoid leaking. A comma-separated value is treated as an array of passwords. | `awCTe)re_r74`, `rtrt_ewee3@09!`, or `code1,code2,code3` |
| `OPENAI_MODEL_LIST` | No | Controls the model list: use `+` to add a model, `-` to hide a model, and `model_name=display_name` to customize a model's display name, separated by commas. | `qwen-7b-chat,+glm-6b,-gpt-3.5-turbo` |
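The `OPENAI_MODEL_LIST` syntax can be illustrated with a small helper. This is a hypothetical sketch for explanation only, not code from LobeChat itself:

```shell
# Classify each comma-separated entry of an OPENAI_MODEL_LIST value.
classify_model_entry() {
  case "$1" in
    +*)  echo "show ${1#+}" ;;              # +model -> add the model
    -*)  echo "hide ${1#-}" ;;              # -model -> hide the model
    *=*) echo "show ${1%%=*} as ${1#*=}" ;; # a=b -> show model a under display name b
    *)   echo "show $1" ;;                  # bare name -> shown as-is
  esac
}

# Example value from the table above (names contain no spaces,
# so unquoted word splitting on the tr output is safe here):
for entry in $(printf '%s' 'qwen-7b-chat,+glm-6b,-gpt-3.5-turbo' | tr ',' ' '); do
  classify_model_entry "$entry"
done
```

Running the loop prints `show qwen-7b-chat`, `show glm-6b`, and `hide gpt-3.5-turbo`, matching the semantics described in the table.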
[!NOTE]
The complete list of environment variables can be found in the Environment Variables documentation.
Ecosystem
| NPM | Repository | Description | Version |
| --- | --- | --- | --- |
| @lobehub/ui | lobehub/lobe-ui | Open-source UI component library dedicated to building AIGC web applications. | |
| @lobehub/icons | lobehub/lobe-icons | Popular AI / LLM model brand SVG logo and icon collection. | |
| @lobehub/tts | lobehub/lobe-tts | High-quality & reliable TTS/STT React Hooks library. | |
| @lobehub/lint | lobehub/lobe-lint | Configurations for ESLint, Stylelint, Commitlint, Prettier, Remark, and Semantic Release for LobeHub. | |
𧩠Plugins
Plugins provide a means to extend the Function Calling capabilities of LobeChat. They can be used to introduce new function calls and even new ways to render message results. If you are interested in plugin development, please refer to our Plugin Development Guide in the Wiki.
- lobe-chat-plugins: This is the plugin index for LobeChat. It accesses index.json from this repository to display a list of available plugins for LobeChat to the user.
- chat-plugin-template: This is the plugin template for LobeChat plugin development.
- @lobehub/chat-plugin-sdk: The LobeChat Plugin SDK assists you in creating exceptional chat plugins for Lobe Chat.
- @lobehub/chat-plugins-gateway: The LobeChat Plugins Gateway is a backend service that provides a gateway for LobeChat plugins. We deploy this service using Vercel. The primary API POST /api/v1/runner is deployed as an Edge Function.
[!NOTE]
The plugin system is currently undergoing major development. You can learn more in the following issues:
- Plugin Phase 1: Separate plugins from the main body, split them into independent repositories for maintenance, and implement dynamic plugin loading.
- Plugin Phase 2: Improve the security and stability of plugin use, present abnormal states more accurately, and improve the maintainability and developer-friendliness of the plugin architecture.
- Plugin Phase 3: Higher-level and more comprehensive customization capabilities, support for plugin authentication, and examples.
Local Development
You can use GitHub Codespaces for online development:
Or clone it for local development:
```bash
$ git clone https://github.com/lobehub/lobe-chat.git
$ cd lobe-chat
$ pnpm install
$ pnpm dev
```
If you would like to learn more details, please feel free to look at our Development Guide.
Contributing
Contributions of all types are more than welcome; if you are interested in contributing code, feel free to check out our GitHub Issues and Projects and show us what you're made of.
[!TIP]
We are creating a technology-driven forum, fostering knowledge interaction and the exchange of ideas that may culminate in mutual inspiration and collaborative innovation.
Help us make LobeChat better. We welcome product design feedback and user experience discussions directly from you.
Principal Maintainers: @arvinxx @canisminor1990
Sponsor
Every bit counts and your one-time donation sparkles in our galaxy of support! You're a shooting star, making a swift and bright impact on our journey. Thank you for believing in us â your generosity guides us toward our mission, one brilliant flash at a time.
More Products
- Lobe SD Theme: Modern theme for Stable Diffusion WebUI, with an exquisite interface design, a highly customizable UI, and efficiency-boosting features.
- Lobe Midjourney WebUI: WebUI for Midjourney that leverages AI to quickly generate a wide array of rich and diverse images from text prompts, sparking creativity and enhancing conversations.
- Lobe i18n: An automation tool for the i18n (internationalization) translation process, powered by ChatGPT. It supports automatic splitting of large files, incremental updates, and customization options for the OpenAI model, API proxy, and temperature.
- Lobe Commit: A CLI tool that leverages Langchain/ChatGPT to generate Gitmoji-based commit messages.
Copyright © 2024 LobeHub.
This project is Apache 2.0 licensed.