langui
UI for your AI. Open Source Tailwind components tailored for your GPT, generative AI, and LLM projects.
Top Related Projects
- semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps
- langchain: 🦜🔗 Build context-aware reasoning applications
- openai-cookbook: Examples and guides for using the OpenAI API
- promptflow: Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
- haystack: 🔍 AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
Quick Overview
LangUI is an open-source project that provides a user interface for interacting with large language models (LLMs) like GPT-3 and GPT-4. It offers a simple and intuitive interface for users to input prompts, receive responses, and manage conversations with AI models.
Pros
- Easy-to-use interface for interacting with LLMs
- Supports multiple AI models and providers
- Open-source and customizable
- Includes features like conversation history and prompt templates
Cons
- Limited documentation and setup instructions
- May require technical knowledge to set up and configure
- Dependent on external AI services and their pricing models
- Still in early development stages, potentially lacking some advanced features
Code Examples
from langui import LangUI
# Initialize LangUI with OpenAI API key
ui = LangUI(api_key="your_openai_api_key")
# Send a prompt and get a response
response = ui.send_prompt("What is the capital of France?")
print(response)
# Using a specific model
ui.set_model("gpt-4")
response = ui.send_prompt("Explain quantum computing in simple terms.")
print(response)
# Managing conversation history
ui.start_conversation("Science Chat")
ui.send_prompt("What is photosynthesis?")
ui.send_prompt("How does it relate to climate change?")
history = ui.get_conversation_history("Science Chat")
print(history)
Getting Started
To get started with LangUI:
1. Install the package:
   pip install langui
2. Import and initialize LangUI:
   from langui import LangUI
   ui = LangUI(api_key="your_api_key")
3. Send a prompt:
   response = ui.send_prompt("Hello, how are you?")
   print(response)
For more advanced usage, refer to the project's documentation and examples in the GitHub repository.
Competitor Comparisons
Integrate cutting-edge LLM technology quickly and easily into your apps
Pros of semantic-kernel
- More comprehensive and feature-rich, offering a wide range of AI capabilities
- Better documentation and extensive examples for developers
- Stronger community support and regular updates
Cons of semantic-kernel
- Steeper learning curve due to its complexity
- Heavier resource requirements for implementation
- Less focused on UI components compared to langui
Code Comparison
semantic-kernel:
using Microsoft.SemanticKernel;
var kernel = Kernel.Builder.Build();
var result = await kernel.RunAsync("Hello, world!");
Console.WriteLine(result);
langui:
import { LangUI } from 'langui';
const langui = new LangUI();
const result = await langui.process("Hello, world!");
console.log(result);
Both examples show a basic setup and processing of a simple input string. semantic-kernel uses a more structured approach with the Kernel builder, while langui appears to have a more straightforward API. However, it's important to note that these snippets are simplified and may not fully represent the capabilities of each library.
🦜🔗 Build context-aware reasoning applications
Pros of langchain
- More comprehensive and feature-rich framework for building LLM applications
- Larger community and ecosystem, with extensive documentation and examples
- Supports a wide range of LLMs, embeddings, and integrations
Cons of langchain
- Steeper learning curve due to its extensive features and abstractions
- Can be overkill for simpler projects or prototypes
- Requires more setup and configuration for basic use cases
Code Comparison
langchain:
from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
langui:
from langui import LLM
llm = LLM()
response = llm.generate("What is a good name for a company that makes {product}?", product="cars")
Summary
langchain offers a more comprehensive solution for building complex LLM applications, with a rich ecosystem and extensive features. However, it may be more complex for beginners or simpler projects. langui, on the other hand, provides a more straightforward approach for basic LLM interactions, making it easier to get started but potentially limiting for more advanced use cases.
Examples and guides for using the OpenAI API
Pros of openai-cookbook
- Comprehensive collection of examples and best practices for using OpenAI's APIs
- Regularly updated with new features and techniques
- Backed by OpenAI, ensuring accuracy and relevance
Cons of openai-cookbook
- Focused solely on OpenAI's offerings, limiting its scope
- May be overwhelming for beginners due to its extensive content
- Lacks a user-friendly interface for easy navigation and implementation
Code Comparison
openai-cookbook:
import openai

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Translate the following English text to French: '{}'",
    max_tokens=60,
)
langui:
from langui import LangUI
ui = LangUI()
ui.add_text_input("Enter English text:")
ui.add_button("Translate to French")
ui.add_text_output("French translation:")
The openai-cookbook example demonstrates direct API usage, while langui focuses on creating a user interface for language tasks. openai-cookbook provides lower-level control, whereas langui abstracts the complexity for easier implementation of language-related applications.
Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
Pros of promptflow
- More comprehensive documentation and examples
- Larger community and support from Microsoft
- Integration with Azure AI services
Cons of promptflow
- Steeper learning curve for beginners
- Potentially higher resource requirements
- Less focus on UI components
Code Comparison
langui:
from langui import LangUI
ui = LangUI()
ui.add_text_input("Name")
ui.add_button("Submit")
ui.run()
promptflow:
from promptflow import tool, flow

@tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

with flow("greeting_flow"):
    name = flow.input("name")
    greeting = greet(name)
    flow.output("greeting", greeting)
Summary
langui focuses on simplifying UI creation for language models, while promptflow offers a more comprehensive toolkit for building and managing AI workflows. langui may be easier for beginners, but promptflow provides more advanced features and integration options. The choice between them depends on the specific project requirements and the developer's familiarity with AI tools.
:mag: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
Pros of Haystack
- More comprehensive and feature-rich framework for building end-to-end NLP applications
- Larger community and ecosystem, with extensive documentation and examples
- Supports a wide range of NLP tasks, including question answering, document retrieval, and summarization
Cons of Haystack
- Steeper learning curve due to its extensive features and components
- Heavier resource requirements, which may impact performance on smaller systems
- More complex setup and configuration process compared to simpler alternatives
Code Comparison
Haystack example:
from haystack import Pipeline
from haystack.nodes import TfidfRetriever, FARMReader

pipeline = Pipeline()
pipeline.add_node(
    component=TfidfRetriever(document_store=document_store),
    name="Retriever",
    inputs=["Query"],
)
pipeline.add_node(
    component=FARMReader(model_name_or_path="deepset/roberta-base-squad2"),
    name="Reader",
    inputs=["Retriever"],
)
Langui example:
from langui import LangUI
ui = LangUI()
ui.add_text_input("Query")
ui.add_text_output("Response")
ui.run()
Note: The code comparison is limited due to the different focus areas of the two projects. Haystack is more oriented towards building NLP pipelines, while Langui appears to be focused on creating user interfaces for language models.
README
The perfect UI for your AI — Build & Deploy your own ChatGPT
Trusted by some of the world's largest companies, developers, and investors, LangUI is a set of beautifully designed components that you can copy and paste to build your own ChatGPT. Free. Customizable. Open Source.
LangUI is powered by Tailwind CSS with free components tailored for your AI and GPT projects. It offers a collection of beautiful, ready-to-use components to enhance the user interface of your AI applications, allowing you to focus on building the next best project while leaving the UI to LangUI.
⭐️ LangUI is Open Source and its 60+ components are completely free. Please star it to show your support!
Documentation
For documentation and components, visit LangUI.dev.
Get Started
1. LangUI components are ready-to-use, meaning you don't need to install or configure anything.
2. Browse LangUI.dev and select a component.
3. Copy the desired component's code in HTML or JSX from the LangUI documentation.
4. Paste the code into your project's HTML or React/Vue/Angular components. Done.
5. Deploy: You can deploy your own ChatGPT built with LangUI on Langbase.com by creating a chat pipe.
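Because LangUI components are plain Tailwind-styled markup, "pasting into your project" really is the whole integration step. As a rough, hypothetical sketch of what such a snippet can look like inside your own code (the class names and the `chatBubble` helper below are illustrative, not taken from an actual LangUI component; browse LangUI.dev for the real ones):

```javascript
// Hypothetical example only: a chat bubble rendered as a template string with
// Tailwind utility classes, in the copy-paste style LangUI uses. The exact
// classes are placeholders; real components live on LangUI.dev.
function chatBubble(role, text) {
  // slate for surfaces, blue for the accent (LangUI's two-color palette),
  // with dark: variants so the bubble adapts to dark mode
  const roleClasses =
    role === 'user'
      ? 'bg-blue-600 text-white'
      : 'bg-slate-100 text-slate-900 dark:bg-slate-800 dark:text-slate-100';
  return `<div class="rounded-xl px-4 py-2 ${roleClasses}">${text}</div>`;
}

console.log(chatBubble('user', 'Hello!'));
```

In a React project the same markup goes straight into JSX with `className`; in plain HTML you paste the `<div>` as-is.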
Docker deploy
You can run it directly using the prebuilt image:
docker run -d -t -p 3000:3000 --name langui --restart=always docker.io/wenyang0/langui:latest
Or, you can build the image yourself if you prefer:
# clone the code
git clone https://github.com/ahmadbilaldev/langui.git

# build the Docker image
cd langui/
docker build -t langui:v1 .

# start the server
docker run -d -t -p 3000:3000 --name langui --restart=always langui:v1
Finally, open your browser and navigate to http://serverIP:3000
Features
- Copy & Paste Integration: Zero installations or dependencies! Simply choose your desired component, copy, and paste it into your project.
- Open Source & Free: LangUI is MIT licensed, making it suitable for both personal and commercial projects. Feel free to contribute and support us by starring LangUI on GitHub.
- Dark & Light Modes: All LangUI components support light and dark modes and are carefully designed to look their best in both.
- Fully Responsive: LangUI components are responsive, ensuring they look fantastic on any screen size or device.
- Easy Customization: LangUI uses a two-color palette. The two colors, slate and blue, allow for effortless customization to match your brand's colors.
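Because the components draw from just those two color scales, a rebrand is a central Tailwind configuration change rather than an edit to each pasted snippet. A minimal sketch, assuming a standard `tailwind.config.js` and placeholder brand hex values:

```javascript
// Hypothetical tailwind.config.js: remap the blue scale that LangUI components
// reference onto your brand's accent color. The hex values are placeholders.
const config = {
  content: ['./src/**/*.{html,js,jsx,tsx}'],
  theme: {
    extend: {
      colors: {
        // Any `bg-blue-600`, `hover:bg-blue-700`, etc. in pasted components
        // now render in the brand accent instead of Tailwind's default blue.
        blue: {
          500: '#8b5cf6',
          600: '#7c3aed',
          700: '#6d28d9',
        },
      },
    },
  },
};

// Export for Tailwind's build (guarded so the object is also inspectable
// when this file is loaded outside a CommonJS context).
if (typeof module !== 'undefined') module.exports = config;
```

The slate scale can be overridden the same way under `theme.extend.colors.slate` if your surfaces need a different neutral tone.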
Screenshots
Request a component
Have an idea for a new component? We'd love to hear from you! Simply head over to our GitHub repository and submit your component request. Let's collaborate and cook up something spicy together!
Contributing
Contributions to LangUI are highly welcome! Whether it's bug fixes, new components, or improvements, we appreciate your support in making this library better for the AI community. Please read our contribution guidelines to get started.
License
LangUI is licensed under the MIT License.
Uses
- Shades of Purple Theme by Ahmad Awais for syntax highlighting
Authored By
Originally authored by Ahmad Bilal — Founding Engineer at Langbase.
For questions, integrations, and other discussions, feel free to reach out.
Enjoy using LangUI to build stunning UIs for your AI and GPT projects.
If you find it helpful, don't forget to give it a star on GitHub! Stars are like little virtual hugs that keep us going! We appreciate every single one we receive.
For any queries or issues, feel free to open an issue on the repository.
Happy coding!