promptflow
Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
Top Related Projects
- LangChain: 🦜🔗 Build context-aware reasoning applications
- openai-cookbook: Examples and guides for using the OpenAI API
- NeMo: A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
- Haystack: :mag: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
- MLflow: Open source platform for the machine learning lifecycle
Quick Overview
PromptFlow is a Python library that provides a framework for building and deploying large language model (LLM) applications. It simplifies the process of creating, testing, and deploying LLM-powered applications by providing a set of abstractions and utilities.
Pros
- Modular and Extensible: PromptFlow's architecture is designed to be modular and extensible, allowing developers to easily integrate new components and customize the workflow to fit their specific needs.
- Streamlined Deployment: The library includes tools for packaging and deploying LLM applications, making it easier to get your models into production.
- Comprehensive Tooling: PromptFlow provides a range of utilities for tasks like prompt engineering, model evaluation, and data management, reducing the overhead of building LLM applications.
- Community Support: The project is actively maintained by Microsoft and has a growing community of contributors, ensuring ongoing development and support.
Cons
- Learning Curve: The library's flexibility and comprehensive feature set may come with a steeper learning curve for developers new to LLM development.
- Dependency on Microsoft Technologies: PromptFlow is closely tied to Microsoft's ecosystem, which may be a drawback for developers who prefer to work with other cloud providers or open-source tools.
- Limited Documentation: While the project has good documentation, some areas may be less thoroughly covered, especially for more advanced use cases.
- Performance Overhead: The abstraction layers and additional tooling provided by PromptFlow may introduce some performance overhead compared to a more bare-bones LLM integration.
Code Examples
Here are a few examples of how to use PromptFlow:
- Creating a Simple Prompt-based Application:
from promptflow.app import PromptApp
from promptflow.prompts import Prompt

app = PromptApp()

@app.prompt
def greet(name: str) -> str:
    return f"Hello, {name}!"

if __name__ == "__main__":
    app.run()
This code creates a simple PromptApp that defines a single prompt function, greet, which takes a name as input and returns a greeting.
- Integrating a Large Language Model:
from promptflow.app import PromptApp
from promptflow.models import HuggingFaceModel

app = PromptApp()
model = HuggingFaceModel("gpt2")

@app.prompt
def generate_text(prompt: str, max_length: int = 100) -> str:
    return model.generate(prompt, max_length=max_length)

if __name__ == "__main__":
    app.run()
This example shows how to integrate a pre-trained Hugging Face model (in this case, GPT-2) into a PromptApp, and define a prompt function that generates text based on a given prompt.
- Deploying a PromptApp:
from promptflow.app import PromptApp
from promptflow.deployment import AzureWebAppDeployer

app = PromptApp()
# Define your prompts here...

if __name__ == "__main__":
    deployer = AzureWebAppDeployer(app)
    deployer.deploy()
This code demonstrates how to use the AzureWebAppDeployer class to package and deploy a PromptApp to an Azure Web App, making it accessible as a web service.
Getting Started
To get started with PromptFlow, follow these steps:
- Install the library using pip:
pip install promptflow
- Create a new Python file (e.g., app.py) and import the necessary modules:
from promptflow.app import PromptApp
from promptflow.models import HuggingFaceModel
- Define your prompts and integrate a language model:
app = PromptApp()
model = HuggingFaceModel("gpt2")

@app.prompt
def generate_text(prompt: str, max_length: int = 100) -> str:
    return model.generate(prompt, max_length=max_length)
- Run your PromptApp locally:
python app.py
Competitor Comparisons
🦜🔗 Build context-aware reasoning applications
Pros of LangChain
- More extensive ecosystem with a wider range of integrations and tools
- Larger community and more extensive documentation
- Supports multiple programming languages (Python, JavaScript)
Cons of LangChain
- Steeper learning curve due to its extensive features
- Can be overwhelming for simple projects or beginners
- Less focus on visual workflow design
Code Comparison
LangChain:
from langchain import OpenAI, LLMChain, PromptTemplate
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)
PromptFlow:
from promptflow import tool
@tool
def name_generator(product: str):
    return f"What is a good name for a company that makes {product}?"
Both frameworks aim to simplify working with language models, but they approach it differently. LangChain provides a more comprehensive set of tools and abstractions, while PromptFlow focuses on a simpler, more streamlined approach with visual workflow design capabilities. LangChain's code tends to be more verbose but offers more flexibility, while PromptFlow's code is often more concise and easier to read, especially for simpler tasks.
Examples and guides for using the OpenAI API
Pros of openai-cookbook
- Extensive collection of practical examples and tutorials for using OpenAI's APIs
- Covers a wide range of use cases and applications, from basic to advanced
- Regularly updated with new examples and best practices
Cons of openai-cookbook
- Focused solely on OpenAI's offerings, limiting its scope compared to promptflow
- Less structured approach to workflow management and integration
- Lacks built-in tools for prompt engineering and flow visualization
Code Comparison
openai-cookbook:
import openai

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Translate the following English text to French: 'Hello, world!'",
    max_tokens=60
)
promptflow:
from promptflow import tool

@tool
def translate_to_french(text: str) -> str:
    # Implementation using a language model or translation service
    ...
Summary
While openai-cookbook provides a wealth of examples for working with OpenAI's APIs, promptflow offers a more structured approach to building and managing AI workflows. openai-cookbook is ideal for those specifically working with OpenAI's services, while promptflow provides a more versatile framework for integrating various AI tools and services into cohesive workflows.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
Pros of NeMo
- Comprehensive support for speech AI tasks (ASR, TTS, NLP)
- Built-in support for distributed training and mixed precision
- Extensive collection of pre-trained models and examples
Cons of NeMo
- Steeper learning curve due to its focus on advanced AI tasks
- Requires more computational resources for training and inference
- Less flexibility for general-purpose prompt engineering workflows
Code Comparison
NeMo example (ASR model):
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")
transcription = asr_model.transcribe(["audio_file.wav"])
PromptFlow example (text generation):
from promptflow import tool

@tool
def generate_text(prompt: str) -> str:
    # Your text generation logic here
    ...
NeMo is more specialized for speech and language AI tasks, offering powerful pre-trained models and distributed training capabilities. PromptFlow, on the other hand, provides a more flexible framework for building and managing prompt-based workflows, with a focus on ease of use and integration with various LLMs and tools. While NeMo excels in specific AI domains, PromptFlow is better suited for general prompt engineering and pipeline creation across different AI applications.
:mag: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
Pros of Haystack
- More focused on natural language processing (NLP) tasks and information retrieval
- Offers a wider range of pre-built NLP components and pipelines
- Provides more extensive documentation and tutorials for NLP-specific use cases
Cons of Haystack
- Less integrated with cloud services and MLOps tools
- More complex setup and configuration for non-NLP tasks
- Limited support for visual AI and multimodal workflows
Code Comparison
Haystack example:
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import TextConverter, PreProcessor, BM25Retriever, FARMReader

document_store = InMemoryDocumentStore(use_bm25=True)  # assumes documents are already indexed

pipeline = Pipeline()
pipeline.add_node(component=TextConverter(), name="TextConverter", inputs=["File"])
pipeline.add_node(component=PreProcessor(), name="PreProcessor", inputs=["TextConverter"])
pipeline.add_node(component=BM25Retriever(document_store), name="Retriever", inputs=["PreProcessor"])
pipeline.add_node(component=FARMReader(model_name_or_path="deepset/roberta-base-squad2"), name="Reader", inputs=["Retriever"])
Promptflow example:
from promptflow import tool, flow
@tool
def my_python_tool(input1: str, input2: int) -> str:
    return f"Processed {input1} with {input2}"

flow.add_node("my_python_tool", inputs={"input1": "${upstream.output}", "input2": 42})
flow.add_node("llm", inputs={"prompt": "${my_python_tool.output}"})
Open source platform for the machine learning lifecycle
Pros of MLflow
- More mature and established project with a larger community and ecosystem
- Broader scope, covering experiment tracking, model management, and deployment
- Language-agnostic, supporting multiple programming languages and frameworks
Cons of MLflow
- Steeper learning curve due to its comprehensive feature set
- Can be overkill for simpler projects or those focused solely on prompt engineering
- Less specialized for LLM and prompt-based workflows
Code Comparison
MLflow:
import mlflow

with mlflow.start_run():
    mlflow.log_param("param1", 0.01)
    mlflow.log_metric("metric1", 0.93)
Promptflow:
from promptflow import PFClient
flow = PFClient().run(flow="my_flow", inputs={"prompt": "Hello, world!"})
print(flow.outputs["response"])
Summary
MLflow is a comprehensive ML lifecycle management tool, while Promptflow focuses on LLM prompt engineering and workflows. MLflow offers broader functionality but may be more complex, whereas Promptflow provides a streamlined experience for prompt-based projects. Choose based on your specific needs and project scope.
README
Prompt flow
Welcome! Join us in making prompt flow better by participating in discussions, opening issues, and submitting PRs.
Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
With prompt flow, you will be able to:
- Create and iteratively develop flows
  - Create executable flows that link LLMs, prompts, Python code and other tools together.
  - Debug and iterate on your flows, especially tracing interactions with LLMs, with ease.
- Evaluate flow quality and performance
  - Evaluate your flow's quality and performance with larger datasets.
  - Integrate the testing and evaluation into your CI/CD system to ensure the quality of your flow.
- Streamline the development cycle for production
  - Deploy your flow to the serving platform you choose or integrate it into your app's code base easily.
  - (Optional but highly recommended) Collaborate with your team by leveraging the cloud version of Prompt flow in Azure AI.
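To make the "link Python code into a flow" idea concrete, here is a minimal sketch of a Python node. The @tool decorator is promptflow's; the fallback no-op decorator exists only so this sketch runs standalone, and the function name and prompt wording are illustrative assumptions, not part of any template.

```python
# Minimal sketch of a prompt flow Python node.
# The no-op fallback lets the snippet run even without promptflow installed.
try:
    from promptflow import tool
except ImportError:
    def tool(func):  # stand-in so the sketch stays self-contained
        return func

@tool
def build_prompt(question: str, context: str) -> str:
    """Combine a user question with retrieved context into an LLM prompt."""
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

print(build_prompt("What is prompt flow?", "Prompt flow is a suite of dev tools."))
```

In a real flow, a node like this would be wired to upstream inputs and an LLM node in flow.dag.yaml rather than called directly.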
Installation
To get started quickly, you can use a pre-built development environment. Click the button below to open the repo in GitHub Codespaces, and then continue the readme!
If you want to get started in your local environment, first install the packages:
Ensure you have a Python environment; python>=3.9, <=3.11 is recommended.
pip install promptflow promptflow-tools
Quick Start ⚡
Create a chatbot with prompt flow
Run the following command to initiate a prompt flow from a chat template; it creates a folder named my_chatbot and generates the required files within it:
pf flow init --flow ./my_chatbot --type chat
Set up a connection for your API key
For an OpenAI key, establish a connection by running the following command, using the openai.yaml file in the my_chatbot folder, which stores your OpenAI key (override keys and name with --set to avoid changing the yaml file):
pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection
For an Azure OpenAI key, establish the connection by running the following command, using the azure_openai.yaml file:
pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
Chat with your flow
In the my_chatbot folder, there's a flow.dag.yaml file that outlines the flow, including inputs/outputs, nodes, the connection, and the LLM model.
Note that in the chat node, we're using a connection named open_ai_connection (specified in the connection field) and the gpt-35-turbo model (specified in the deployment_name field). The deployment_name field specifies the OpenAI model or the Azure OpenAI deployment resource.
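For orientation, an abridged sketch of what the chat template's flow.dag.yaml roughly looks like follows; exact field names and defaults vary with the template version, so treat this as illustrative rather than canonical:

```yaml
inputs:
  question:
    type: string
    is_chat_input: true
outputs:
  answer:
    type: string
    reference: ${chat.output}
nodes:
- name: chat
  type: llm
  source:
    type: code
    path: chat.jinja2
  inputs:
    deployment_name: gpt-35-turbo
    question: ${inputs.question}
  connection: open_ai_connection
  api: chat
```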
Interact with your chatbot by running the following (press Ctrl + C to end the session):
pf flow test --flow ./my_chatbot --interactive
Core value: ensuring "High Quality" from prototype to production
Explore our 15-minute tutorial that guides you through prompt tuning ➡ batch testing ➡ evaluation, all designed to ensure high quality ready for production.
Next step! Continue with the Tutorial section to delve deeper into prompt flow.
Tutorial
Prompt flow is a tool designed to build high-quality LLM apps. The development process in prompt flow follows these steps: develop a flow, improve the flow quality, and deploy the flow to production.
Develop your own LLM apps
VS Code Extension
We also offer a VS Code extension (a flow designer) for an interactive flow development experience with a UI. You can install it from the Visual Studio Marketplace.
Deep dive into flow development
Getting started with prompt flow: a step-by-step guide to invoking your first flow run.
Learn from use cases
Tutorial: Chat with PDF: an end-to-end tutorial on how to build a high-quality chat application with prompt flow, including flow development and evaluation with metrics.
More examples can be found here. We welcome contributions of new use cases!
Setup for contributors
If you're interested in contributing, please start with our dev setup guide: dev_setup.md.
Next step! Continue with the Contributing section to contribute to prompt flow.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.
Data Collection
The software may collect information about you and your use of the software and send it to Microsoft if configured to enable telemetry. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the repository. There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at https://go.microsoft.com/fwlink/?LinkID=824704. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
Telemetry Configuration
Telemetry collection is on by default. To opt out, please run pf config set telemetry.enabled=false to turn it off.
License
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT license.