Top Related Projects
- openai-python: The official Python library for the OpenAI API
- transformers: 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
- LangChain: 🦜🔗 Build context-aware reasoning applications
- Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps
Quick Overview
The guidance-ai/guidance repository is a Python library for steering large language models (LLMs). Rather than a toolkit for building and deploying models, it provides a programming interface for controlling model output: constraining generation (e.g. with regular expressions and context-free grammars), interleaving control flow with generation, and capturing structured results, with the aim of reducing latency and cost compared to conventional prompting or fine-tuning.
Pros
- Flexible and Extensible: The library is designed to be highly modular and extensible, allowing developers to easily integrate it into their own projects and customize it to their specific needs.
- Supports Multiple LLM Backends: The library supports a variety of LLM backends, including OpenAI models, llama.cpp, and Hugging Face Transformers, making it easy to experiment with different models (see the sketch after this list).
- Comprehensive Documentation: The project has detailed documentation that covers a wide range of topics, from installation and setup to advanced usage and deployment.
- Active Development and Community: The project is actively maintained and has a growing community of contributors, ensuring that it continues to evolve and improve over time.
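As a minimal sketch of that backend flexibility (using the Transformers backend and model name shown in the README below; other backends such as LlamaCpp follow the same pattern):
```python
# Swapping backends means swapping the model class; the prompting code
# itself stays the same.
from guidance import user, assistant, gen
from guidance.models import Transformers
# from guidance.models import LlamaCpp  # alternative local backend

lm = Transformers("microsoft/Phi-4-mini-instruct")
with user():
    lm += "Say hello."
with assistant():
    lm += gen(max_tokens=10)
print(lm)
```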
Cons
- Steep Learning Curve: The library's flexibility and feature-richness can make it challenging for beginners to get started, especially if they're new to working with LLMs.
- Dependency on External LLM Providers: The library relies on external LLM providers, which can introduce additional costs and potential availability issues.
- Limited Support for Specialized Hardware: Guidance itself does not manage accelerators; GPU or TPU performance depends entirely on the chosen backend (such as Transformers or llama.cpp), so hardware support varies by backend.
- Potential Performance Overhead: The abstraction layer provided by the library may introduce some performance overhead compared to directly using the underlying LLM APIs.
Code Examples
Here are a few examples of how to use the guidance-ai/guidance library, written against the API shown in the project README below:
- Text Generation:
```python
from guidance import user, assistant, gen
from guidance.models import Transformers

# Load a backend; LlamaCpp, OpenAI, and others work similarly
base_lm = Transformers("microsoft/Phi-4-mini-instruct")

lm = base_lm  # model objects are immutable, so this is a copy
with user():
    lm += "Continue this sentence: The quick brown fox"
with assistant():
    lm += gen(name="text", max_tokens=30)
print(lm["text"])
```
- Question Answering:
```python
# Answer a question based on a given context
lm = base_lm
context = "The Eiffel Tower is a wrought-iron lattice tower built in 1889 in Paris, France."
with user():
    lm += f"Context: {context}\nQuestion: What is the Eiffel Tower?"
with assistant():
    lm += gen(name="answer", max_tokens=40)
print(lm["answer"])
```
- Summarization:
```python
# Summarize a given text
lm = base_lm
text = (
    "This is a long and detailed text that needs to be summarized. It covers a "
    "wide range of topics, including history, science, and current events. The "
    "goal is to extract the key points and present them in a concise and "
    "easy-to-understand format."
)
with user():
    lm += f"Summarize the following text in one sentence:\n{text}"
with assistant():
    lm += gen(name="summary", max_tokens=60)
print(lm["summary"])
```
- Sentiment Analysis:
```python
from guidance import select

# Constrain the output to a fixed set of labels
lm = base_lm
review = "I really enjoyed the movie. It was well-written and the acting was superb."
with user():
    lm += f"Classify the sentiment of this review: {review}"
with assistant():
    lm += select(["positive", "negative", "neutral"], name="sentiment")
print(lm["sentiment"])
```
Getting Started
To get started with the guidance-ai/guidance library, follow these steps:
- Install the library using pip:
```
pip install guidance
```
- Import a model backend and create a model object (the model name here follows the README example below):
```python
from guidance.models import Transformers

lm = Transformers("microsoft/Phi-4-mini-instruct")
```
- Build prompts with the role context managers and generate text with gen(). For example:
```python
from guidance import user, assistant, gen

with user():
    lm += "Continue this sentence: The quick brown fox"
with assistant():
    lm += gen(name="text", max_tokens=30)
print(lm["text"])
```
- Refer to the project's documentation for more detailed information on the available features, configuration options, and advanced usage.
Competitor Comparisons
The official Python library for the OpenAI API
Pros of openai-python
- Official OpenAI library, ensuring direct compatibility and up-to-date features
- Comprehensive support for all OpenAI API endpoints and models
- Well-documented with extensive examples and community support
Cons of openai-python
- Limited to OpenAI's services, lacking flexibility for other AI providers
- Requires more boilerplate code for complex prompts and chained operations
- Less focus on prompt engineering and advanced text generation techniques
Code Comparison
openai-python:
```python
import openai

# Note: this is the legacy (pre-1.0) openai-python API
openai.api_key = "your-api-key"
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Translate the following English text to French: 'Hello, world!'",
    max_tokens=60,
)
```
guidance:
```python
import guidance

# Note: this is guidance's legacy template syntax; the current API
# appears in the README below
prompt = guidance('''
Human: Translate the following English text to French: 'Hello, world!'
AI: Here's the translation:
{{gen 'translation' max_tokens=20}}
''')
executed = prompt()
print(executed['translation'])
```
The guidance library offers a more intuitive and flexible approach to prompt engineering, allowing for easier creation of complex, multi-step prompts. However, openai-python provides direct access to OpenAI's models and services, making it the go-to choice for straightforward OpenAI API interactions.
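As a rough sketch of such a multi-step prompt, using the same legacy template syntax as above (the capture names 'french' and 'roundtrip' are illustrative):
```python
import guidance

# Each {{gen}} captures an intermediate result that later template text builds on.
prompt = guidance('''
Human: Translate 'Hello, world!' to French.
AI: {{gen 'french' max_tokens=20}}
Human: Now translate that French back into English.
AI: {{gen 'roundtrip' max_tokens=20}}
''')
executed = prompt()
print(executed['french'], executed['roundtrip'])
```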
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Pros of transformers
- Extensive model support: Covers a wide range of transformer-based models
- Rich documentation and community support
- Seamless integration with PyTorch and TensorFlow
Cons of transformers
- Steeper learning curve for beginners
- Can be resource-intensive for large models
- Less focused on prompt engineering and controlled generation
Code Comparison
transformers:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='gpt2')
result = generator("Hello, I'm a language model,", max_length=30)
print(result[0]['generated_text'])
```
guidance:
```python
import guidance

prompt = guidance('''
Human: Hello, I'm a language model,
AI: {{gen 'response' max_tokens=20}}
''')
result = prompt()
print(result['response'])
```
The transformers library provides a more traditional approach to model usage, while guidance focuses on prompt engineering and controlled text generation. guidance offers a more intuitive interface for prompt design and fine-grained control over the generation process, making it easier to create complex prompts and manage model outputs.
🦜🔗 Build context-aware reasoning applications
Pros of LangChain
- More extensive ecosystem with a wider range of integrations and tools
- Stronger community support and more frequent updates
- Better documentation and learning resources
Cons of LangChain
- Can be more complex and overwhelming for beginners
- Potentially slower execution due to its comprehensive nature
Code Comparison
LangChain:
```python
from langchain import OpenAI, LLMChain, PromptTemplate

template = "What is a good name for a company that makes {product}?"
prompt = PromptTemplate(template=template, input_variables=["product"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0.9))
print(llm_chain.run("colorful socks"))
```
Guidance:
```python
import guidance

prompt = guidance('''
Human: What is a good name for a company that makes {{product}}?
AI: Here's a suggestion: {{gen 'name' max_tokens=10}}
''')
print(prompt(product="colorful socks")['name'])
```
Both repositories aim to simplify working with language models, but they have different approaches. LangChain offers a more comprehensive toolkit with various components and integrations, while Guidance focuses on a simpler, more direct approach to prompt engineering. The choice between them depends on the specific needs of your project and your familiarity with language model development.
Integrate cutting-edge LLM technology quickly and easily into your apps
Pros of Semantic Kernel
- More comprehensive framework with built-in memory, planning, and skills management
- Better integration with Azure services and Microsoft ecosystem
- Stronger community support and regular updates
Cons of Semantic Kernel
- Steeper learning curve due to more complex architecture
- Primarily focused on C# and .NET, limiting language options
- Heavier dependency on Microsoft technologies
Code Comparison
Guidance:
```python
import guidance

# Legacy template syntax, parameterized to mirror the Semantic Kernel
# example below
prompt = guidance('''
Human: What is the capital of {{country}}?
AI: {{gen 'answer' max_tokens=10}}
''')
result = prompt(country="France")
print(result['answer'])
```
Semantic Kernel:
```csharp
using Microsoft.SemanticKernel;

// Note: early (pre-1.0) Semantic Kernel API
var kernel = Kernel.Builder.Build();
var promptTemplate = "What is the capital of {{$country}}?";
var prompt = kernel.CreateSemanticFunction(promptTemplate);
var result = await prompt.InvokeAsync(new ContextVariables { ["country"] = "France" });
Console.WriteLine(result);
```
README
Guidance is an efficient programming paradigm for steering language models. With Guidance, you can control how output is structured and get high-quality output for your use case, while reducing latency and cost vs. conventional prompting or fine-tuning. It allows users to constrain generation (e.g. with regex and CFGs) as well as to interleave control (conditionals, loops, tool use) and generation seamlessly.
Install
Guidance is available through PyPI and supports a variety of backends (Transformers, llama.cpp, OpenAI, etc.). If you already have the backend required for your model, you can simply run
```
pip install guidance
```
Features
A Pythonic interface for language models
When using Guidance, you can work with large language models using common Python idioms:
```python
from guidance import system, user, assistant, gen
from guidance.models import Transformers

# Could also do LlamaCpp or many other models
phi_lm = Transformers("microsoft/Phi-4-mini-instruct")

# Model objects are immutable, so this is a copy
lm = phi_lm

with system():
    lm += "You are a helpful assistant"

with user():
    lm += "Hello. What is your name?"

with assistant():
    lm += gen(max_tokens=20)

print(lm)
```
If run at the command line, this will produce output like:
```
<|system|>You are a helpful assistant<|end|><|user|>Hello. What is your name?<|end|><|assistant|>I am Phi, an AI developed by Microsoft. How can I help you today?
```
However, if running in a Jupyter notebook, then Guidance provides a widget for a richer user experience:
(Screenshot: the Guidance notebook widget rendering the conversation.)
With Guidance, it's really easy to capture generated text:
```python
# Get a new copy of the Model
lm = phi_lm

with system():
    lm += "You are a helpful assistant"

with user():
    lm += "Hello. What is your name?"

with assistant():
    lm += gen(name="lm_response", max_tokens=20)

print(f"{lm['lm_response']=}")
```
```
lm['lm_response']='I am Phi, an AI developed by Microsoft. How can I help you today?'
```
Guarantee output syntax with constrained generation
Guidance provides an easy-to-use yet immensely powerful syntax for constraining the output of a language model. For example, a gen() call can be constrained to match a regular expression:
```python
lm = phi_lm

with system():
    lm += "You are a teenager"

with user():
    lm += "How old are you?"

with assistant():
    lm += gen("lm_age", regex=r"\d+", temperature=0.8)

print(f"The language model is {lm['lm_age']} years old")
```
```
The language model is 13 years old
```
Often, we know that the output has to be an item from a list we know in advance. Guidance provides a select() function for this scenario:
```python
from guidance import select

lm = phi_lm

with system():
    lm += "You are a geography expert"

with user():
    lm += """What is the capital of Sweden? Answer with the correct letter.
A) Helsinki
B) Reykjavík
C) Stockholm
D) Oslo
"""

with assistant():
    lm += select(["A", "B", "C", "D"], name="model_selection")

print(f"The model selected {lm['model_selection']}")
```
```
The model selected C
```
The constraint system offered by Guidance is extremely powerful. It can ensure that the output conforms to any context free grammar (so long as the backend LLM has full support for Guidance). More on this below.
Create your own Guidance functions
With Guidance, you can create your own Guidance functions which can interact with language models.
These are marked using the @guidance decorator.
Suppose we wanted to answer lots of multiple choice questions.
We could do something like the following:
```python
import guidance
from guidance.models import Model

ASCII_OFFSET = ord("a")

@guidance
def zero_shot_multiple_choice(
    language_model: Model,
    question: str,
    choices: list[str],
):
    with user():
        language_model += question + "\n"
        for i, choice in enumerate(choices):
            language_model += f"{chr(i+ASCII_OFFSET)} : {choice}\n"

    with assistant():
        language_model += select(
            [chr(i + ASCII_OFFSET) for i in range(len(choices))], name="string_choice"
        )

    return language_model
```
Now, define some questions:
```python
questions = [
    {
        "question" : "Which state has the northernmost capital?",
        "choices" : [
            "New South Wales",
            "Northern Territory",
            "Queensland",
            "South Australia",
            "Tasmania",
            "Victoria",
            "Western Australia",
        ],
        "answer" : 1,
    },
    {
        "question" : "Which of the following is venomous?",
        "choices" : [
            "Kangaroo",
            "Koala Bear",
            "Platypus",
        ],
        "answer" : 2,
    }
]
```
We can use our decorated function like gen() or select(). The language_model argument will be filled in for us automatically:
```python
lm = phi_lm

with system():
    lm += "You are a student taking a multiple choice test."

for mcq in questions:
    lm_temp = lm + zero_shot_multiple_choice(question=mcq["question"], choices=mcq["choices"])
    converted_answer = ord(lm_temp["string_choice"]) - ASCII_OFFSET
    print(lm_temp)
    print(f"LM Answer: {converted_answer}, Correct Answer: {mcq['answer']}")
```
```
<|system|>You are a student taking a multiple choice test.<|end|><|user|>Which state has the northernmost capital?
a : New South Wales
b : Northern Territory
c : Queensland
d : South Australia
e : Tasmania
f : Victoria
g : Western Australia
<|end|><|assistant|>b
LM Answer: 1, Correct Answer: 1
<|system|>You are a student taking a multiple choice test.<|end|><|user|>Which of the following is venomous?
a : Kangaroo
b : Koala Bear
c : Platypus
<|end|><|assistant|>c
LM Answer: 2, Correct Answer: 2
```
Guidance functions can be composed to construct a full context-free grammar. For example, we can create Guidance functions to build a simple HTML webpage (note that this is not a full implementation of HTML). We start with a simple function which generates text that does not contain any HTML tags. It is marked as stateless to indicate that we intend to use it for composing a grammar:
```python
@guidance(stateless=True)
def _gen_text(lm: Model):
    return lm + gen(regex="[^<>]+")
```
We can then use this function to generate text within an arbitrary HTML tag:
```python
@guidance(stateless=True)
def _gen_text_in_tag(lm: Model, tag: str):
    lm += f"<{tag}>"
    lm += _gen_text()
    lm += f"</{tag}>"
    return lm
```
Now, let us create the page header. As part of this, we need to generate a page title:
```python
@guidance(stateless=True)
def _gen_header(lm: Model):
    lm += "<head>\n"
    lm += _gen_text_in_tag("title") + "\n"
    lm += "</head>\n"
    return lm
```
The body of the HTML page is going to be filled with headings and paragraphs. We can define a function to do each:
```python
from guidance.library import one_or_more

@guidance(stateless=True)
def _gen_heading(lm: Model):
    lm += select(
        options=[_gen_text_in_tag("h1"), _gen_text_in_tag("h2"), _gen_text_in_tag("h3")]
    )
    lm += "\n"
    return lm

@guidance(stateless=True)
def _gen_para(lm: Model):
    lm += "<p>"
    lm += one_or_more(
        select(
            options=[
                _gen_text(),
                _gen_text_in_tag("em"),
                _gen_text_in_tag("strong"),
                "<br />",
            ],
        )
    )
    lm += "</p>\n"
    return lm
```
Now, the function to define the body of the HTML itself:
```python
@guidance(stateless=True)
def _gen_body(lm: Model):
    lm += "<body>\n"
    lm += one_or_more(select(options=[_gen_heading(), one_or_more(_gen_para())]))
    lm += "</body>\n"
    return lm
```
Next, we come to the function which generates the complete HTML page. We add the HTML start tag, then generate the header, then body, and then append the ending HTML tag:
```python
@guidance(stateless=True)
def _gen_html(lm: Model):
    lm += "<html>\n"
    lm += _gen_header()
    lm += _gen_body()
    lm += "</html>\n"
    return lm
```
Finally, we provide a user-friendly wrapper, which will allow us to:
- Set the temperature of the generation
- Capture the generated page from the Model object
```python
from guidance.library import capture, with_temperature

@guidance(stateless=True)
def make_html(
    lm,
    name: str | None = None,
    *,
    temperature: float = 0.0,
):
    return lm + capture(
        with_temperature(_gen_html(), temperature=temperature),
        name=name,
    )
```
Now, use this to generate a simple webpage:
```python
lm = phi_lm

with system():
    lm += "You are an expert in HTML"

with user():
    lm += "Create a simple and short web page about your life story."

with assistant():
    lm += make_html(name="html_text", temperature=0.7)
```
When running in a Jupyter Notebook so that the widget is active, we get the following output:
(Screenshot: the Guidance widget showing the generated HTML page, with varying highlighting of the generation.)
Note the varying highlighting of the generation.
This is showing another of Guidance's capabilities: fast-forwarding of tokens.
The constraints imposed by a grammar often mean that some tokens are known in advance.
Guidance doesn't need the model to generate these; instead it can insert them into the generation.
This saves forward passes through the model, and hence reduces GPU usage.
For example, in the above HTML generation, Guidance always knows which tag was opened last. If the last opened tag was <h1>, then as soon as the model generates </, Guidance can fill in h1> without needing the model to perform a forward pass.
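As a rough illustration of this (reusing phi_lm from above; the <answer> tag and the capture name are illustrative, not part of Guidance's API):
```python
# The literal <answer> and </answer> strings are fixed by the grammar, so
# Guidance can insert their tokens without model forward passes; only the
# text between them costs generation steps.
lm = phi_lm

with user():
    lm += "Briefly, what is Guidance? Put your answer inside <answer> tags."

with assistant():
    lm += "<answer>"
    lm += gen(regex="[^<>]+", name="ans", max_tokens=50)
    lm += "</answer>"

print(lm["ans"])
```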
Generating JSON
A JSON schema is actually a context free grammar, and hence it can be used to constrain an LLM using Guidance. This is a common enough case that Guidance provides special support for it. A quick sample, based on a Pydantic model:
```python
import json

from pydantic import BaseModel, Field

from guidance import json as gen_json

class BloodPressure(BaseModel):
    systolic: int = Field(gt=300, le=400)
    diastolic: int = Field(gt=0, le=20)
    location: str = Field(max_length=50)
    model_config = dict(extra="forbid")

lm = phi_lm

with system():
    lm += "You are a doctor taking a patient's blood pressure taken from their arm"

with user():
    lm += "Report the blood pressure"

with assistant():
    lm += gen_json(name="bp", schema=BloodPressure)

print(f"{lm['bp']=}")

# Use Python's JSON library
loaded_json = json.loads(lm["bp"])
print(json.dumps(loaded_json, indent=4))

# Use Pydantic
result = BloodPressure.model_validate_json(lm["bp"])
print(result.model_dump_json(indent=8))
```
```
lm['bp']='{"systolic": 301, "diastolic": 15, "location": "arm"}'
{
    "systolic": 301,
    "diastolic": 15,
    "location": "arm"
}
{
        "systolic": 301,
        "diastolic": 15,
        "location": "arm"
}
```
Note that the generated blood pressure is not one the model will have seen for a human. When generating JSON, a substantial number of tokens can often be fast-forwarded, due to the structural constraints imposed by the schema.