dify
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
Top Related Projects
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Integrate cutting-edge LLM technology quickly and easily into your apps
🦜🔗 Build context-aware reasoning applications
Examples and guides for using the OpenAI API
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
Quick Overview
Dify is an open-source LLM (Large Language Model) application development platform. It provides a web UI and REST API for developers to create, deploy, and manage AI-native applications with ease. Dify aims to simplify the process of building AI applications by offering a user-friendly interface and powerful features.
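As a concrete illustration, here is a minimal sketch of calling a Dify app through its REST API from Python. It assumes you have already created an app in Dify and copied its API key; the endpoint path and payload fields follow Dify's documented chat API, but verify them against the documentation for your version.

```python
# Minimal sketch: querying a Dify chat app over its REST API.
# Assumes an app already exists in Dify and you have its API key;
# endpoint and payload follow Dify's documented chat API, but check
# the docs for your Dify version.
import requests

API_KEY = "your-app-api-key"          # from the app's API access page
BASE_URL = "https://api.dify.ai/v1"   # or your self-hosted instance's URL

response = requests.post(
    f"{BASE_URL}/chat-messages",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": {},
        "query": "Hello, how are you?",
        "response_mode": "blocking",  # "streaming" is also supported
        "user": "example-user",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["answer"])
```

The same pattern applies to self-hosted deployments: point BASE_URL at your own instance instead of Dify Cloud.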
Pros
- User-friendly interface for creating and managing AI applications
- Supports multiple LLM providers, including OpenAI, Anthropic, and Hugging Face
- Offers both prompt engineering and dataset management capabilities
- Provides REST API for easy integration with existing applications
Cons
- Limited documentation for advanced use cases
- Relatively new project, which may lead to potential stability issues
- Requires technical knowledge to fully utilize its capabilities
- May have a learning curve for users new to LLM application development
Getting Started
To get started with Dify, follow these steps:
1. Clone the repository:

```bash
git clone https://github.com/langgenius/dify.git
```

2. Install dependencies:

```bash
cd dify
pip install -r requirements.txt
```

3. Set up environment variables:

```bash
cp .env.example .env
# Edit the .env file with your configuration
```

4. Run the application:

```bash
python manage.py runserver
```

5. Access the web UI at http://localhost:8000 and start creating your AI applications.
For more detailed instructions and API documentation, refer to the project's README and documentation in the repository.
Competitor Comparisons
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Pros of Open-Assistant
- Larger community and more contributors, potentially leading to faster development and diverse features
- Focuses on creating an open-source alternative to ChatGPT, which may appeal to users seeking a free, customizable AI assistant
- Supports multiple languages, making it more accessible to a global audience
Cons of Open-Assistant
- More complex setup and configuration compared to Dify's user-friendly interface
- Less emphasis on low-code/no-code solutions, which may be challenging for non-technical users
- Currently in earlier stages of development, potentially less stable than Dify
Code Comparison
Open-Assistant (Python):

```python
from oa.client import Client

client = Client("api_key")
response = client.chat("Hello, how are you?")
print(response.text)
```
Dify (JavaScript):

```javascript
import { DifyClient } from '@dify/client';

const client = new DifyClient({ apiKey: 'your-api-key' });
const response = await client.chat('Hello, how are you?');
console.log(response.text);
```
Both projects aim to provide AI-powered conversational interfaces, but they differ in their approach and target audience. Open-Assistant focuses on creating an open-source alternative to ChatGPT, while Dify emphasizes ease of use and low-code solutions for building AI applications.
Pros of TaskMatrix
- Focuses on multi-modal AI agents capable of handling diverse tasks
- Implements a modular architecture for easy extension and customization
- Provides a visual interface for task planning and execution
Cons of TaskMatrix
- Less emphasis on end-user application development
- May require more technical expertise to set up and use effectively
- Smaller community and fewer resources compared to Dify
Code Comparison
TaskMatrix:

```python
class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities

    def execute_task(self, task):
        # Task execution logic
        pass
```
Dify:

```python
class Application:
    def __init__(self, name, config):
        self.name = name
        self.config = config

    def run(self, input_data):
        # Application execution logic
        pass
```
TaskMatrix focuses on agent-based task execution, while Dify emphasizes application development and deployment. TaskMatrix's code structure is centered around individual agents and their capabilities, whereas Dify's code is oriented towards creating and running AI-powered applications.
Both projects aim to simplify AI integration, but they approach it from different angles. TaskMatrix provides a framework for building complex, multi-modal AI systems, while Dify offers a more streamlined platform for creating AI applications with less technical overhead.
Integrate cutting-edge LLM technology quickly and easily into your apps
Pros of Semantic Kernel
- More extensive documentation and examples
- Broader language support (C#, Python, Java)
- Stronger integration with Azure AI services
Cons of Semantic Kernel
- Steeper learning curve for beginners
- Less focus on no-code/low-code solutions
- Tighter coupling with Microsoft ecosystem
Code Comparison
Semantic Kernel (C#):

```csharp
var kernel = Kernel.Builder.Build();
var promptTemplate = "{{$input}}";
var function = kernel.CreateSemanticFunction(promptTemplate);
var result = await kernel.RunAsync("Hello, world!", function);
```
Dify (Python):

```python
from dify_client import DifyClient

client = DifyClient("YOUR_API_KEY")
response = client.completion(prompt="Hello, world!")
print(response.content)
```
Summary
Semantic Kernel offers a more comprehensive toolkit for AI development, particularly for those working within the Microsoft ecosystem. It provides extensive documentation and supports multiple programming languages. However, it may have a steeper learning curve and is less focused on no-code solutions.
Dify, on the other hand, aims to simplify AI application development with a more user-friendly approach. It offers a web-based interface for creating AI apps without extensive coding knowledge. While it may have fewer features compared to Semantic Kernel, Dify's simplicity makes it more accessible for rapid prototyping and smaller-scale projects.
🦜🔗 Build context-aware reasoning applications
Pros of LangChain
- More extensive and mature ecosystem with a wider range of integrations
- Highly flexible and customizable for complex AI applications
- Strong community support and active development
Cons of LangChain
- Steeper learning curve, especially for beginners
- Requires more code and configuration for basic use cases
- Less focus on out-of-the-box UI components
Code Comparison
LangChain:

```python
from langchain import OpenAI, LLMChain, PromptTemplate

template = "What is a good name for a company that makes {product}?"
prompt = PromptTemplate(template=template, input_variables=["product"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0.9))
print(llm_chain.run("colorful socks"))
```
Dify:

```python
from dify_client import DifyClient

client = DifyClient("YOUR_API_KEY")
response = client.completion(
    "What is a good name for a company that makes colorful socks?",
    conversation_id="optional_conversation_id"
)
print(response.content)
```
Summary
LangChain offers more flexibility and a broader ecosystem, making it suitable for complex AI applications. However, it has a steeper learning curve and requires more setup. Dify provides a simpler interface and focuses on rapid development with built-in UI components, but may be less customizable for advanced use cases.
Examples and guides for using the OpenAI API
Pros of openai-cookbook
- Comprehensive guide with diverse examples and best practices for OpenAI API usage
- Regularly updated with new techniques and features from OpenAI
- Maintained by OpenAI, ensuring accuracy and alignment with official API guidelines
Cons of openai-cookbook
- Focused solely on OpenAI's offerings, limiting its applicability to other AI platforms
- Primarily educational, lacking ready-to-use application frameworks
- Requires more technical expertise to implement examples in production environments
Code Comparison
openai-cookbook:

```python
import openai

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Translate the following English text to French: '{}'",
    max_tokens=60
)
```
dify:

```python
from core.model.model_factory import ModelFactory

model = ModelFactory.get_model_instance(
    tenant_id, model_name, model_type
)
response = model.generate(prompt)
```
Summary
While openai-cookbook offers comprehensive guidance for OpenAI API usage, dify provides a more versatile framework for building AI-powered applications. openai-cookbook excels in educational content and best practices, whereas dify focuses on practical implementation and supports multiple AI providers. The choice between them depends on whether you need in-depth OpenAI-specific knowledge or a flexible application development platform.
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
Pros of chatgpt-retrieval-plugin
- Focused on document retrieval and integration with ChatGPT
- Supports multiple vector database options (Pinecone, Weaviate, Zilliz, Milvus, Qdrant)
- Provides a RESTful API for easy integration
Cons of chatgpt-retrieval-plugin
- Limited to retrieval tasks, not a full-fledged AI application framework
- Requires more setup and configuration for custom use cases
- Less emphasis on user interface and application management
Code Comparison
chatgpt-retrieval-plugin:

```python
from fastapi import BackgroundTasks, Body, Depends, HTTPException

@app.post("/upsert")
async def upsert(
    # background_tasks has no default, so it must precede defaulted parameters
    background_tasks: BackgroundTasks,
    request: UpsertRequest = Body(...),
    token: str = Depends(get_bearer_token),
):
    try:
        ids = await datastore.upsert(request.documents)
        background_tasks.add_task(embed_and_upsert, request.documents)
        return UpsertResponse(ids=ids)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
dify:

```python
from flask import jsonify, request
from flask_jwt_extended import get_jwt_identity, jwt_required

@bp.route('/completion', methods=['POST'])
@jwt_required()
def completion():
    from core.model_runtime.model_providers import model_provider_factory
    from core.model_runtime.errors.invoke import InvokeError

    tenant_id = get_jwt_identity()
    model_provider = model_provider_factory.get_model_provider('openai')
    try:
        response = model_provider.invoke_completion(tenant_id, request.json)
        return jsonify(response)
    except InvokeError as e:
        return jsonify({'error': str(e)}), 400
```
Both repositories showcase different approaches to handling API requests and integrating with AI models. chatgpt-retrieval-plugin focuses on document upsert and retrieval, while dify provides a more general-purpose AI application framework with built-in support for various model providers.
README
Dify Cloud · Self-hosting · Documentation · Enterprise inquiry
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production. Here's a list of the core features:
1. Workflow: Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
2. Comprehensive model support: Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found here.
3. Prompt IDE: Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
4. RAG Pipeline: Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
5. Agent capabilities: You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
6. LLMOps: Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
7. Backend-as-a-Service: All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic (see the sketch after this list).
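As an illustration of the Backend-as-a-Service point, here is a hedged sketch of triggering a published workflow over the REST API. The /workflows/run endpoint and payload mirror Dify's documented workflow API, but the input variable name used here (topic) is purely illustrative; confirm the details against the documentation for your version.

```python
# Sketch: running a published Dify workflow via the REST API.
# The endpoint path and payload follow Dify's documented workflow API;
# the "topic" input variable is illustrative and must match the input
# variables defined in your own workflow.
import requests

API_KEY = "your-workflow-app-api-key"  # from the workflow app's API page
BASE_URL = "https://api.dify.ai/v1"    # or your self-hosted instance's URL

response = requests.post(
    f"{BASE_URL}/workflows/run",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": {"topic": "colorful socks"},
        "response_mode": "blocking",
        "user": "example-user",
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["data"]["outputs"])
```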
Feature comparison
| Feature | Dify.AI | LangChain | Flowise | OpenAI Assistants API |
|---|---|---|---|---|
| Programming Approach | API + App-oriented | Python Code | App-oriented | API-oriented |
| Supported LLMs | Rich Variety | Rich Variety | Rich Variety | OpenAI-only |
| RAG Engine | ✅ | ✅ | ✅ | ✅ |
| Agent | ✅ | ✅ | ❌ | ✅ |
| Workflow | ✅ | ❌ | ✅ | ❌ |
| Observability | ✅ | ✅ | ❌ | ❌ |
| Enterprise Features (SSO/Access control) | ✅ | ❌ | ❌ | ❌ |
| Local Deployment | ✅ | ✅ | ✅ | ❌ |
Using Dify
- Cloud: We host a Dify Cloud service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- Self-hosting Dify Community Edition: Quickly get Dify running in your environment with this starter guide. Use our documentation for further references and more in-depth instructions.
- Dify for enterprise / organizations: We provide additional enterprise-centric features. Log your questions for us through this chatbot or send us an email to discuss enterprise needs. For startups and small businesses using AWS, check out Dify Premium on AWS Marketplace and deploy it to your own AWS VPC in one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
Quick start
Before installing Dify, make sure your machine meets the following minimum system requirements:
- CPU >= 2 cores
- RAM >= 4GB
The easiest way to start the Dify server is to run our docker-compose.yml file. Before running the installation command, make sure that Docker and Docker Compose are installed on your machine:
```bash
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at http://localhost/install and start the initialization process.
If you'd like to contribute to Dify or do additional development, refer to our guide to deploying from source code.
Next steps
If you need to customize the configuration, please refer to the comments in our .env.example file and update the corresponding values in your .env file. Additionally, you might need to make adjustments to the docker-compose.yaml file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run docker-compose up -d. You can find the full list of available environment variables here.
If you'd like to configure a highly available setup, there are community-contributed Helm Charts and YAML files which allow Dify to be deployed on Kubernetes.
Using Terraform for Deployment
Azure Global
Deploy Dify to Azure with a single click using Terraform.
Contributing
For those who'd like to contribute code, see our Contribution Guide. At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the i18n README for more information, and leave us a comment in the global-users channel of our Discord Community Server.
Contributors
Community & contact
- GitHub Discussions. Best for: sharing feedback and asking questions.
- GitHub Issues. Best for: bugs you encounter using Dify.AI, and feature proposals. See our Contribution Guide.
- Discord. Best for: sharing your applications and hanging out with the community.
- Twitter. Best for: sharing your applications and hanging out with the community.
Star history
Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
License
This repository is available under the Dify Open Source License, which is essentially Apache 2.0 with a few additional restrictions.