evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
Top Related Projects
- BIG-bench: Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
- lm-evaluation-harness: A framework for few-shot evaluation of language models.
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
- PromptFlow: Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
Quick Overview
OpenAI's Evals is an open-source framework for evaluating AI models, particularly large language models (LLMs). It provides a standardized way to create, run, and share evaluations, helping researchers and developers assess model performance across various tasks and domains.
Pros
- Flexible and extensible framework for creating custom evaluations
- Supports a wide range of evaluation types, including multiple-choice, free-response, and programmatic evaluations
- Enables easy comparison of different models and versions
- Promotes transparency and reproducibility in AI research
Cons
- Requires some programming knowledge to create custom evaluations
- Documentation could be more comprehensive for advanced use cases
- Limited built-in visualizations for evaluation results
- May require significant computational resources for large-scale evaluations
Code Examples
- Running a basic evaluation:
from evals.api import CompletionFn, CompletionResult
from evals.elsuite.basic.match import Match
from evals.registry import Registry
registry = Registry()
eval = registry.get_eval("test-match")
completion_fn = CompletionFn.from_openai("text-davinci-003")
results = eval.run(completion_fn)
print(results.stats())
- Creating a custom evaluation:
from evals.api import CompletionFn
from evals.eval import Eval
class CustomEval(Eval):
    def __init__(self, samples):
        self.samples = samples

    def eval_sample(self, sample, completion_fn):
        prompt = f"Question: {sample['question']}\nAnswer:"
        result = completion_fn(prompt)
        return result.strip().lower() == sample['answer'].lower()

    def run(self, completion_fn):
        results = [self.eval_sample(sample, completion_fn) for sample in self.samples]
        return sum(results) / len(results)
- Registering and running a custom evaluation:
from evals.registry import Registry
registry = Registry()
registry.register_eval("custom-eval", CustomEval)
eval = registry.get_eval("custom-eval", {"samples": [...]})
completion_fn = CompletionFn.from_openai("text-davinci-003")
result = eval.run(completion_fn)
print(f"Accuracy: {result:.2f}")
Getting Started
To get started with OpenAI Evals:
- Install the library:
pip install evals
- Set up your OpenAI API key:
import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
- Run a basic evaluation:
from evals.api import CompletionFn
from evals.registry import Registry

registry = Registry()
eval = registry.get_eval("test-match")
completion_fn = CompletionFn.from_openai("text-davinci-003")
results = eval.run(completion_fn)
print(results.stats())
Competitor Comparisons
BIG-bench: Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
Pros of BIG-bench
- Larger and more diverse set of tasks, covering a wider range of capabilities
- More collaborative, with contributions from multiple organizations and researchers
- Includes detailed task descriptions and metadata for better understanding and analysis
Cons of BIG-bench
- Less frequently updated compared to evals
- More complex setup and usage due to its larger scope
- May require more computational resources to run all tasks
Code Comparison
evals:
from evals.api import CompletionFn, CompletionResult
from evals.elsuite.basic.match import Match
def eval_model(model: CompletionFn):
    eval = Match()
    result = eval.eval(model)
    return result
BIG-bench:
import bigbench.api.task as task
import bigbench.api.model as model
def run_task(task_name: str, model: model.Model):
    # Local name differs from the imported module to avoid shadowing it.
    bench_task = task.TaskManager.get_task(task_name)
    score = bench_task.evaluate_model(model)
    return score
Both repositories provide frameworks for evaluating language models, but BIG-bench offers a more extensive set of tasks and a collaborative approach. evals focuses on a more streamlined evaluation process with easier setup and frequent updates. The code examples show that evals uses a completion-based approach, while BIG-bench employs a task-specific evaluation method.
lm-evaluation-harness: A framework for few-shot evaluation of language models.
Pros of lm-evaluation-harness
- More extensive set of benchmarks and evaluation tasks
- Better support for distributed evaluation across multiple GPUs
- More flexible and customizable evaluation pipeline
Cons of lm-evaluation-harness
- Steeper learning curve and more complex setup process
- Less focus on safety evaluations and alignment-specific tasks
- Potentially slower execution for simpler evaluation scenarios
Code Comparison
lm-evaluation-harness:
from lm_eval import evaluator, tasks
results = evaluator.simple_evaluate(
    model="gpt-3.5-turbo",
    tasks=["hellaswag", "mmlu"],
    num_fewshot=5,
    batch_size=32
)
evals:
from evals.api import CompletionFn, CompletionResult
from evals.elsuite import basic_evals
result = basic_evals.classify(
    completion_fn=CompletionFn,
    samples=[("What color is the sky?", "blue")],
    max_tokens=5
)
The lm-evaluation-harness code showcases its ability to evaluate multiple tasks with few-shot learning and batching, while the evals code demonstrates a simpler, more focused approach to basic classification tasks.
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
Pros of evaluate
- More extensive and diverse set of evaluation metrics and tasks
- Better integration with the Hugging Face ecosystem and datasets
- More active community and frequent updates
Cons of evaluate
- Steeper learning curve due to more complex API
- Less focus on specific AI safety and alignment evaluations
- Potentially slower execution for large-scale evaluations
Code comparison
evaluate:
from evaluate import load
metric = load("accuracy")
results = metric.compute(predictions=preds, references=refs)
evals:
from evals import get_eval
eval = get_eval("accuracy")
result = eval.run(samples)
Both repositories provide tools for evaluating machine learning models, but they have different focuses and strengths. evaluate offers a wider range of metrics and better integration with the Hugging Face ecosystem, while evals is more tailored to OpenAI's specific needs and AI safety considerations. The code comparison shows that evaluate uses a slightly more verbose API, while evals has a more streamlined approach for running evaluations.
PromptFlow: Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
Pros of PromptFlow
- More comprehensive workflow management for prompt engineering
- Integrated with Azure AI services for seamless deployment
- Supports multiple LLM providers and custom models
Cons of PromptFlow
- Less focused on evaluation metrics compared to Evals
- Steeper learning curve due to more complex features
- Primarily designed for Azure ecosystem, potentially limiting flexibility
Code Comparison
PromptFlow example:
from promptflow import tool
@tool
def my_python_tool(input1: str, input2: int) -> str:
    return f"Input 1 is {input1}, Input 2 is {input2}"
Evals example:
from evals.api import CompletionFn
from evals.elsuite import basic_eval
def eval_fn(sample, completion: CompletionFn):
    prompt = f"Q: {sample['question']}\nA:"
    result = completion(prompt=prompt)
    return result.strip() == sample["answer"]
Summary
PromptFlow offers a more comprehensive solution for managing prompt engineering workflows, particularly within the Azure ecosystem. It provides integration with various AI services and supports multiple LLM providers. However, it may have a steeper learning curve and be less flexible for non-Azure users.
Evals, on the other hand, focuses more on evaluation metrics and benchmarking for language models. It's simpler to use for specific evaluation tasks but lacks the broader workflow management features of PromptFlow.
README
OpenAI Evals
Evals provide a framework for evaluating large language models (LLMs) or systems built using LLMs. We offer an existing registry of evals to test different dimensions of OpenAI models and the ability to write your own custom evals for use cases you care about. You can also use your data to build private evals which represent the common LLM patterns in your workflow without exposing any of that data publicly.
If you are building with LLMs, creating high quality evals is one of the most impactful things you can do. Without evals, it can be very difficult and time intensive to understand how different model versions might affect your use case. In the words of OpenAI's President Greg Brockman: "evals are surprisingly often all you need."
Setup
To run evals, you will need to set up and specify your OpenAI API key. After you obtain an API key, specify it using the OPENAI_API_KEY environment variable. Please be aware of the costs associated with using the API when running evals. You can also run and create evals using Weights & Biases.
Minimum Required Version: Python 3.9
Downloading evals
Our evals registry is stored using Git-LFS. Once you have downloaded and installed LFS, you can fetch the evals (from within your local copy of the evals repo) with:
cd evals
git lfs fetch --all
git lfs pull
This will populate all the pointer files under evals/registry/data.
You may just want to fetch data for a select eval. You can achieve this via:
git lfs fetch --include=evals/registry/data/${your eval}
git lfs pull
Making evals
If you are going to be creating evals, we suggest cloning this repo directly from GitHub and installing the requirements using the following command:
pip install -e .
Using -e, changes you make to your eval will be reflected immediately without having to reinstall.
Optionally, you can install the formatters for pre-committing with:
pip install -e .[formatters]
Then run pre-commit install to install pre-commit into your git hooks. pre-commit will now run on every commit.
If you want to manually run all pre-commit hooks on a repository, run pre-commit run --all-files. To run individual hooks, use pre-commit run <hook_id>.
Running evals
If you don't want to contribute new evals, but simply want to run them locally, you can install the evals package via pip:
pip install evals
You can find the full instructions to run existing evals in run-evals.md and our existing eval templates in eval-templates.md. For example, once the package and your API key are set up, an existing eval can be run from the command line with the oaieval tool (e.g. oaieval gpt-3.5-turbo test-match). For more advanced use cases like prompt chains or tool-using agents, you can use our Completion Function Protocol.
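As a rough sketch of what the protocol involves, a completion function is a callable that accepts a prompt and returns an object exposing get_completions(). The class names below are hypothetical and not part of the evals package; consult completion-fns.md for the authoritative interface.

class EchoCompletionResult:
    """Hypothetical result object: wraps raw text and exposes get_completions()."""

    def __init__(self, text: str):
        self.text = text

    def get_completions(self) -> list:
        # The eval reads model output from this list of completion strings.
        return [self.text]


class EchoCompletionFn:
    """Hypothetical completion function: called with a prompt, returns a result object."""

    def __call__(self, prompt, **kwargs) -> EchoCompletionResult:
        # A real implementation would call a model, a prompt chain, or an agent here;
        # this toy version simply echoes the prompt back for illustration.
        return EchoCompletionResult(str(prompt))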
We provide the option for you to log your eval results to a Snowflake database, if you have one or wish to set one up. For this option, you will further have to specify the SNOWFLAKE_ACCOUNT, SNOWFLAKE_DATABASE, SNOWFLAKE_USERNAME, and SNOWFLAKE_PASSWORD environment variables.
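As one way to set these before launching an eval, the snippet below assigns placeholder values from Python; the values shown are illustrative only and should be replaced with your own Snowflake credentials.

import os

# Placeholder Snowflake credentials for illustration only; replace with your own.
os.environ["SNOWFLAKE_ACCOUNT"] = "your-account"
os.environ["SNOWFLAKE_DATABASE"] = "your-database"
os.environ["SNOWFLAKE_USERNAME"] = "your-username"
os.environ["SNOWFLAKE_PASSWORD"] = "your-password"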
Writing evals
We suggest getting started by:
- Walking through the process for building an eval: build-eval.md
- Exploring an example of implementing custom eval logic: custom-eval.md
- Writing your own completion functions: completion-fns.md
- Reviewing our starter guide for writing evals: Getting Started with OpenAI Evals
Please note that we are currently not accepting evals with custom code! While we ask you to not submit such evals at the moment, you can still submit model-graded evals with custom model-graded YAML files.
If you think you have an interesting eval, please open a pull request with your contribution. OpenAI staff actively review these evals when considering improvements to upcoming models.
FAQ
Do you have any examples of how to build an eval from start to finish?
- Yes! These are in the examples folder. We recommend that you also read through build-eval.md in order to gain a deeper understanding of what is happening in these examples.
Do you have any examples of evals implemented in multiple different ways?
- Yes! In particular, see evals/registry/evals/coqa.yaml. We have implemented small subsets of the CoQA dataset for various eval templates to help illustrate the differences.
When I run an eval, it sometimes hangs at the very end (after the final report). What's going on?
- This is a known issue, but you should be able to interrupt it safely and the eval should finish immediately after.
There's a lot of code, and I just want to spin up a quick eval. Help? OR,
I am a world-class prompt engineer. I choose not to code. How can I contribute my wisdom?
- If you follow an existing eval template to build a basic or model-graded eval, you don't need to write any evaluation code at all! Just provide your data in JSON format and specify your eval parameters in YAML; a sketch of one possible samples file follows below. build-eval.md walks you through these steps, and you can supplement these instructions with the Jupyter notebooks in the examples folder to help you get started quickly. Keep in mind, though, that a good eval will inevitably require careful thought and rigorous experimentation!
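As a minimal sketch, assuming the JSONL sample layout used by the basic eval templates (each line pairing an "input" chat conversation with an "ideal" answer), the snippet below writes a tiny samples file from Python. Check build-eval.md for the exact schema your chosen template expects.

import json

# Assumed sample layout for a basic match-style eval: an "input" conversation
# plus an "ideal" answer per line. Verify against build-eval.md before use.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What color is the sky on a clear day?"},
        ],
        "ideal": "blue",
    },
]

with open("samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")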
Disclaimer
By contributing to evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.