TechEmpower/FrameworkBenchmarks

Source for the TechEmpower Framework Benchmarks project

Top Related Projects

  • wrk (37,830 stars): Modern HTTP benchmarking tool
  • Locust (24,772 stars): Write scalable load tests in plain Python 🚗💨
  • Bombardier: Fast cross-platform HTTP benchmarking tool written in Go
  • hey (18,056 stars): HTTP load generator, ApacheBench (ab) replacement
  • drill (2,077 stars): Drill is an HTTP load testing application written in Rust
  • Vegeta (23,459 stars): HTTP load testing tool and library. It's over 9000!

Quick Overview

TechEmpower/FrameworkBenchmarks is a project that conducts comprehensive performance comparisons of web application frameworks and platforms. It provides a standardized set of tests to measure and compare the performance of various web frameworks across different programming languages and technologies.

Pros

  • Offers an extensive and unbiased comparison of numerous web frameworks
  • Regularly updated with new frameworks and test results
  • Provides detailed performance metrics and analysis
  • Helps developers make informed decisions when choosing a framework

Cons

  • The complexity of the benchmark setup can make it challenging for newcomers to contribute
  • Results may not always reflect real-world performance scenarios
  • Some frameworks may be optimized specifically for these benchmarks, potentially skewing results
  • The sheer number of frameworks tested can make it overwhelming to interpret the results

Getting Started

To run the benchmarks locally:

  1. Clone the repository:

    git clone https://github.com/TechEmpower/FrameworkBenchmarks.git
    
  2. Install Docker and Docker Compose

  3. Run the benchmarks:

    ./tfb --test <framework-name>
    

Replace <framework-name> with the specific framework you want to test. For a full list of available frameworks, check the frameworks directory in the repository.

Note: Running all benchmarks can be resource-intensive and time-consuming. It's recommended to start with a single framework for testing purposes.
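
To see which frameworks can be passed to --test, the toolset can enumerate them; a minimal sketch, assuming the toolset's --list-tests flag:

    ./tfb --list-tests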

Competitor Comparisons

wrk (37,830 stars)

Modern HTTP benchmarking tool

Pros of wrk

  • Lightweight and focused on HTTP benchmarking
  • Easy to use with a simple command-line interface
  • Supports Lua scripting for custom request generation

Cons of wrk

  • Limited to HTTP benchmarking only
  • Lacks comprehensive framework comparisons
  • Doesn't provide detailed analysis of server-side performance

Code Comparison

wrk:

wrk.method = "POST"
wrk.body   = '{"key": "value"}'
wrk.headers["Content-Type"] = "application/json"
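
Assuming the script above is saved as post.lua (the filename is illustrative), wrk is typically invoked with the -s flag:

wrk -t4 -c100 -d30s -s post.lua http://localhost:8080/json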

FrameworkBenchmarks (benchmark_config.json):

{
  "framework": "express",
  "tests": [
    {
      "default": {
        "json_url": "/json"
      }
    }
  ]
}

Summary

wrk is a lightweight HTTP benchmarking tool, while FrameworkBenchmarks is a comprehensive suite for comparing web application frameworks. wrk excels in simplicity and ease of use for quick HTTP performance testing, but lacks the breadth and depth of framework comparisons offered by FrameworkBenchmarks. The latter provides a standardized environment for testing multiple frameworks across various scenarios, making it more suitable for in-depth performance analysis and framework selection. However, wrk's simplicity and Lua scripting capabilities make it a valuable tool for focused HTTP benchmarking tasks.

Locust (24,772 stars)

Write scalable load tests in plain Python 🚗💨

Pros of Locust

  • User-friendly, Python-based scripting for defining test scenarios
  • Real-time web interface for monitoring and adjusting tests
  • Distributed testing capabilities for simulating large numbers of users

Cons of Locust

  • Limited to HTTP/HTTPS protocols, less versatile than FrameworkBenchmarks
  • Primarily focused on load testing, not comprehensive framework comparisons
  • May require more setup for complex scenarios compared to FrameworkBenchmarks

Code Comparison

Locust example:

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 5)

    @task
    def index_page(self):
        self.client.get("/")
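
To drive this scenario (a minimal sketch; the locustfile.py name and host are assumptions), Locust is started from the command line and serves its monitoring web UI on port 8089 by default:

locust -f locustfile.py --host http://localhost:8080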

FrameworkBenchmarks example (benchmark_config.json):

{
  "framework": "flask",
  "tests": [
    {
      "default": {
        "json_url": "/json",
        "db_url": "/db",
        "query_url": "/queries?queries=",
        "fortune_url": "/fortunes",
        "update_url": "/updates?queries="
      }
    }
  ]
}

The Locust code defines a user behavior for load testing, while FrameworkBenchmarks uses a configuration file to specify endpoints for benchmarking different frameworks. FrameworkBenchmarks offers a more standardized approach to comparing multiple frameworks, whereas Locust provides flexibility for custom load testing scenarios.

Bombardier

Fast cross-platform HTTP benchmarking tool written in Go

Pros of Bombardier

  • Lightweight and focused on HTTP(S) benchmarking
  • Easy to install and use as a single binary
  • Supports various protocols and customizable request parameters

Cons of Bombardier

  • Limited to HTTP(S) benchmarking, not a comprehensive framework comparison tool
  • Lacks built-in support for testing multiple frameworks or languages
  • Does not provide detailed analysis or comparison reports

Code Comparison

FrameworkBenchmarks (config.toml):

[framework]
name = "aspcore"

[main]
urls.plaintext = "/plaintext"
urls.json = "/json"
approach = "Realistic"
classification = "Fullstack"
database = "None"

Bombardier (command-line usage):

bombardier -c 125 -n 100000 http://localhost:8080/plaintext
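
For a duration-based run that also prints latency statistics (a hedged variant; the URL is illustrative), bombardier accepts -d and -l:

bombardier -c 125 -d 30s -l http://localhost:8080/plaintext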

Summary

FrameworkBenchmarks is a comprehensive suite for comparing web application frameworks across multiple languages and platforms. It provides a standardized testing environment and detailed reports. Bombardier, on the other hand, is a focused HTTP(S) benchmarking tool that's easy to use for quick performance tests. While FrameworkBenchmarks offers a broader scope and more detailed analysis, Bombardier excels in simplicity and ease of use for specific HTTP(S) benchmarking tasks.

hey (18,056 stars)

HTTP load generator, ApacheBench (ab) replacement

Pros of hey

  • Lightweight and easy to use for quick HTTP load testing
  • Single binary with no dependencies, making it portable across systems
  • Supports custom headers and request methods

Cons of hey

  • Limited to HTTP/HTTPS benchmarking only
  • Lacks the comprehensive framework comparisons offered by FrameworkBenchmarks
  • Does not provide detailed analysis or visualization of results

Code Comparison

hey:

// Excerpt from hey's command-line flag parsing:
func main() {
    n := flag.Int("n", 200, "Number of requests to run")
    c := flag.Int("c", 50, "Number of workers to run concurrently")
    q := flag.Float64("q", 0, "Rate limit, in queries per second (QPS)")
    flag.Parse()
    // hey then uses *n, *c, and *q to configure the load run
}
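
A typical invocation using those flags might look like this (the URL is illustrative):

hey -n 1000 -c 50 -q 10 http://localhost:8080/json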

FrameworkBenchmarks:

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Run the benchmark toolset")
    parser.add_argument('-s', '--server-host', help='Run server on HOST')
    parser.add_argument('-c', '--client-host', help='Run client on HOST')
    parser.add_argument('-i', '--identity-file', help='SSH identity file')
    return parser.parse_args()

hey is a focused tool for HTTP load testing, while FrameworkBenchmarks provides a comprehensive suite for comparing web application frameworks across multiple languages and platforms. hey is simpler to use for quick tests, but FrameworkBenchmarks offers more in-depth analysis and a wider range of benchmarking scenarios.

drill (2,077 stars)

Drill is an HTTP load testing application written in Rust

Pros of drill

  • Lightweight and focused on HTTP/HTTPS benchmarking
  • Easy to use with a simple YAML configuration file
  • Supports concurrent requests and custom headers

Cons of drill

  • Limited to HTTP/HTTPS benchmarking, unlike FrameworkBenchmarks' broader scope
  • Fewer built-in testing scenarios compared to FrameworkBenchmarks
  • Less comprehensive in terms of framework and language coverage

Code comparison

drill configuration example:

concurrency: 4
base: 'http://localhost:9000'
iterations: 10

plan:
  - name: Fetch users
    request:
      url: /api/users
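
Assuming the plan above is saved as benchmark.yml (the filename is an assumption), drill is typically run with:

drill --benchmark benchmark.yml --stats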

FrameworkBenchmarks test implementation example:

# Excerpt from a Flask test implementation; World is the benchmark's ORM model
from flask import jsonify
from sqlalchemy import func

def json():
    return jsonify(message='Hello, World!')

def db():
    worlds = World.query.order_by(func.random()).limit(1).all()
    return jsonify(worlds[0].serialize())

While drill focuses on HTTP request benchmarking with a simple configuration, FrameworkBenchmarks provides a more comprehensive testing environment for various web frameworks and languages. drill is easier to set up and use for quick HTTP benchmarks, but FrameworkBenchmarks offers a wider range of tests and comparisons across different technologies.

Vegeta (23,459 stars)

HTTP load testing tool and library. It's over 9000!

Pros of Vegeta

  • Lightweight and focused on HTTP load testing
  • Easy to use with a simple CLI interface
  • Supports various output formats for result analysis

Cons of Vegeta

  • Limited to HTTP/HTTPS protocols
  • Lacks the comprehensive framework comparisons offered by FrameworkBenchmarks
  • May not provide as detailed insights into server-side performance

Code Comparison

Vegeta (running a simple load test):

echo "GET http://example.com" | vegeta attack -duration=30s | vegeta report

FrameworkBenchmarks (running a test for a specific framework):

./tfb --test framework_name

Key Differences

  • Purpose: Vegeta is a standalone HTTP load testing tool, while FrameworkBenchmarks is a comprehensive suite for comparing web application frameworks
  • Scope: Vegeta focuses on client-side load generation, whereas FrameworkBenchmarks provides end-to-end benchmarking including server-side metrics
  • Flexibility: Vegeta is more flexible for quick, custom HTTP load tests, while FrameworkBenchmarks offers standardized tests across multiple frameworks

Use Cases

  • Vegeta: Ideal for developers and DevOps engineers needing quick, targeted HTTP load tests
  • FrameworkBenchmarks: Better suited for framework authors, architects, and teams evaluating different web technologies for large-scale projects

README

Welcome to TechEmpower Framework Benchmarks (TFB)

If you're new to the project, welcome! Please feel free to ask questions here. We encourage new frameworks and contributors to ask questions. We're here to help!

This project provides representative performance measures across a wide field of web application frameworks. With much help from the community, coverage is quite broad and we are happy to broaden it further with contributions. The project presently includes frameworks in many languages, including Go, Python, Java, Ruby, PHP, C#, F#, Clojure, Groovy, Dart, JavaScript, Erlang, Haskell, Scala, Perl, Lua, C, and others. The current tests exercise plaintext responses, JSON serialization, database reads and writes via the object-relational mapper (ORM), collections, sorting, server-side templates, and XSS counter-measures. Future tests will exercise other components and greater computation.

Read more and see the results of our tests on cloud and physical hardware. For descriptions of the test types that we run, see the test requirements section.

If you find yourself in a directory or file whose purpose you're unsure of, check out the file structure section of our documentation, which briefly explains the relevant directories and files.

Quick Start Guide

To get started developing, you'll need to install Docker, or see our Quick Start Guide (Vagrant) below.

  1. Clone TFB.

     $ git clone https://github.com/TechEmpower/FrameworkBenchmarks.git
    
  2. Change directories

     $ cd FrameworkBenchmarks
    
  3. Run a test.

     $ ./tfb --mode verify --test gemini
    
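Once a test verifies, the same entry point performs a full benchmark run; for example, with the gemini test from step 3:

     $ ./tfb --mode benchmark --test gemini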

Explanation of the ./tfb script

The run script is pretty wordy, but each and every flag is required. If you are using Windows, either adapt the docker command at the end of the ./tfb shell script (replacing ${SCRIPT_ROOT} with /c/path/to/FrameworkBenchmarks), or use Vagrant.

The command looks like this:

    docker run -it --rm --network tfb -v /var/run/docker.sock:/var/run/docker.sock -v [FWROOT]:/FrameworkBenchmarks techempower/tfb [ARGS]

  • -it tells docker to run this in 'interactive' mode and simulate a TTY, so that ctrl+c is propagated.
  • --rm tells docker to remove the container as soon as the toolset finishes running, meaning there aren't hundreds of stopped containers lying around.
  • --network tfb tells the container to join the 'tfb' Docker virtual network.
  • The first -v specifies which Docker socket path to mount as a volume in the running container. This allows docker commands run inside this container to use the host's Docker daemon to create/run/stop/remove containers.
  • The second -v mounts the FrameworkBenchmarks source directory as a volume to share with the container, so that rebuilding the toolset image is unnecessary and any changes you make on the host system are available in the running toolset container.
  • techempower/tfb is the name of the toolset image to run.

A note on Windows

  • Docker expects Linux-style paths. If you cloned on your C:\ drive, then [FWROOT] would be /c/FrameworkBenchmarks.
  • Docker for Windows understands /var/run/docker.sock even though that is not a valid path on Windows, but only when using Linux containers (it doesn't work with Windows containers and LCOW). Docker Toolbox may not understand /var/run/docker.sock, even when using Linux containers - use at your own risk.

Quick Start Guide (Vagrant)

Get started developing quickly by utilizing Vagrant with TFB. Git, VirtualBox, and Vagrant are required.

  1. Clone TFB.

     $ git clone https://github.com/TechEmpower/FrameworkBenchmarks.git
    
  2. Change directories

     $ cd FrameworkBenchmarks/deployment/vagrant
    
  3. Build the vagrant virtual machine

     $ vagrant up
    
  4. Run a test

     $ vagrant ssh
     $ tfb --mode verify --test gemini
    

Add a New Test

Either on your computer, or once you open an SSH connection to your Vagrant box, start the new test initialization wizard.

    vagrant@TFB-all:~/FrameworkBenchmarks$ ./tfb --new

This will walk you through the entire process of creating a new test to include in the suite.

Resources

Official Documentation

Our official documentation can be found in the wiki. If you find any errors or areas for improvement within the docs, feel free to open an issue in this repo.

Live Results

Results of continuous benchmarking runs are available in real time here.

Data Visualization

If you have a results.json file that you would like to visualize, you can do that here. You can also attach a runid parameter to that URL, where runid is a run listed on tfb-status, like so: https://www.techempower.com/benchmarks/#section=test&runid=fd07b64e-47ce-411e-8b9b-b13368e988c6. If you want to visualize or compare different results files from the command line, there is an unofficial plaintext results parser.

Contributing

The community has consistently helped in making these tests better, and we welcome any and all changes. Reviewing our contribution practices and guidelines will help to keep us all on the same page. The contribution guide can be found in the TFB documentation.

Join in the conversation in the Discussions tab, on Twitter, or chat with us on Freenode at #techempower-fwbm.