Amanieu / asyncplusplus

Async++ concurrency framework for C++11

Top Related Projects

  • CTPL: Modern and efficient C++ Thread Pool Library
  • ThreadPool: A simple C++11 Thread Pool implementation
  • concurrentqueue: A fast multi-producer, multi-consumer lock-free concurrent queue for C++11
  • taskflow: A General-purpose Task-parallel Programming System using Modern C++
  • oneTBB: oneAPI Threading Building Blocks

Quick Overview

Async++ is a lightweight C++ library for asynchronous programming. It provides task objects similar to C++11 futures, extended with continuations, task composition, cancellation, and parallel algorithms, in a lean, low-overhead implementation.

Pros

  • Lightweight and efficient implementation
  • Supports both task-based and event-based asynchronous programming
  • Compatible with C++11 and later versions
  • Provides cancellation support for tasks

Cons

  • Limited documentation compared to some larger concurrency libraries
  • May require additional learning for developers not familiar with asynchronous programming concepts
  • Less mature compared to standard library concurrency features

Code Examples

Example 1: Creating and running a simple task

#include <async++.h>
#include <iostream>

int main() {
    auto task = async::spawn([] {
        return 42;
    });
    std::cout << "Result: " << task.get() << std::endl;
    return 0;
}

Example 2: Chaining tasks

#include <async++.h>
#include <iostream>

int main() {
    auto task = async::spawn([] {
        return 10;
    }).then([](int x) {
        return x * 2;
    }).then([](int x) {
        return std::to_string(x);
    });
    std::cout << "Result: " << task.get() << std::endl;
    return 0;
}

Example 3: Parallel execution of multiple tasks

#include <async++.h>
#include <iostream>
#include <vector>

int main() {
    std::vector<async::task<int>> tasks;
    for (int i = 0; i < 5; i++) {
        tasks.push_back(async::spawn([i] {
            return i * i;
        }));
    }
    // when_all yields the completed tasks; each value is still read with get()
    auto results = async::when_all(tasks.begin(), tasks.end()).get();
    for (auto& result : results) {
        std::cout << result.get() << " ";
    }
    std::cout << std::endl;
    return 0;
}
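
Example 4: Cancelling a task (a sketch)

The Pros list above mentions cancellation support. The following is a minimal sketch of how that might look, assuming the async::cancellation_token, async::interruption_point, and async::task_canceled names from the Async++ documentation; verify them against the Tasks documentation page before relying on them.

#include <async++.h>
#include <iostream>

int main() {
    async::cancellation_token token;
    auto task = async::spawn([&token] {
        for (int i = 0; i < 1000000; i++) {
            // Assumed API: throws async::task_canceled once the token is canceled
            async::interruption_point(token);
        }
    });
    token.cancel();
    try {
        task.get();
        std::cout << "Task finished before cancellation" << std::endl;
    } catch (const async::task_canceled&) {
        std::cout << "Task was canceled" << std::endl;
    }
    return 0;
}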

Getting Started

  1. Clone the repository:

    git clone https://github.com/Amanieu/asyncplusplus.git
    
  2. Include the library in your project:

    • Add the include directory to your include path
    • Add the source files in the src directory to your project
  3. Include the main header in your C++ file:

    #include <async++.h>
    
  4. Compile with C++11 or later:

    g++ -std=c++11 your_file.cpp -I/path/to/asyncplusplus/include /path/to/asyncplusplus/src/*.cpp -pthread
    

Now you can start using Async++ in your C++ projects for asynchronous programming.

Competitor Comparisons

CTPL: Modern and efficient C++ Thread Pool Library

Pros of CTPL

  • Simpler implementation with a focus on thread pool functionality
  • Lightweight and easy to integrate into existing projects
  • Supports both C++11 and C++14 standards

Cons of CTPL

  • Less feature-rich compared to asyncplusplus
  • Limited to thread pool operations without advanced task management
  • May require additional implementation for complex asynchronous workflows

Code Comparison

CTPL:

#include "ctpl_stl.h"
ctpl::thread_pool p(4);
auto future = p.push([](int id){ return id; });

asyncplusplus:

#include <async++.h>
async::task<int> t = async::spawn([]{ return 42; });
int result = t.get();

CTPL focuses on a straightforward thread pool implementation, while asyncplusplus provides a more comprehensive asynchronous programming framework. CTPL is suitable for projects requiring simple thread pooling, whereas asyncplusplus offers advanced features for complex asynchronous tasks and workflows. The choice between the two depends on the specific requirements of the project and the desired level of abstraction for handling concurrent operations.
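
For workloads where a fixed-size pool is the main requirement, Async++ can come close to CTPL's model by spawning tasks on an explicitly sized scheduler. The sketch below assumes the async::threadpool_scheduler type described in the Async++ scheduler documentation; treat the exact name as something to verify.

#include <async++.h>
#include <iostream>
#include <vector>

int main() {
    // Explicitly sized pool, comparable to ctpl::thread_pool p(4)
    async::threadpool_scheduler pool(4);

    std::vector<async::task<int>> tasks;
    for (int i = 0; i < 8; i++) {
        tasks.push_back(async::spawn(pool, [i] { return i * i; }));
    }

    // Collect the finished tasks and read each result
    auto done = async::when_all(tasks.begin(), tasks.end()).get();
    for (auto& t : done) {
        std::cout << t.get() << " ";
    }
    std::cout << std::endl;
    return 0;
}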

ThreadPool: A simple C++11 Thread Pool implementation

Pros of ThreadPool

  • Simpler and more lightweight implementation
  • Easy to understand and integrate into existing projects
  • Supports both C++11 and C++14 standards

Cons of ThreadPool

  • Limited functionality compared to asyncplusplus
  • Lacks advanced features like task continuations and cancellation
  • Error handling is limited to what the returned std::future provides; there is no continuation-aware exception handling

Code Comparison

ThreadPool:

ThreadPool pool(4);
auto result = pool.enqueue([](int answer) { return answer; }, 42);

asyncplusplus:

async::threadpool_scheduler pool(4);
auto task = async::spawn(pool, []() { return 42; });
auto result = task.get();

Key Differences

  • ThreadPool focuses on simplicity and ease of use, while asyncplusplus offers more advanced features and flexibility.
  • asyncplusplus provides a more comprehensive set of tools for asynchronous programming, including task continuations, cancellation, and exception handling.
  • ThreadPool is better suited for simpler use cases, while asyncplusplus is more appropriate for complex asynchronous workflows.

Conclusion

Choose ThreadPool for straightforward thread pooling needs in smaller projects. Opt for asyncplusplus when working on larger, more complex applications that require advanced asynchronous programming features and better error handling capabilities.
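
On the error-handling point specifically: assuming Async++ captures exceptions thrown inside a task and rethrows them from get(), in the same spirit as std::future, a failing task can be handled like this (a sketch, not taken from the project's documentation):

#include <async++.h>
#include <iostream>
#include <stdexcept>

int main() {
    auto task = async::spawn([]() -> int {
        throw std::runtime_error("something went wrong");
    });

    try {
        // The exception thrown inside the task is rethrown here
        std::cout << task.get() << std::endl;
    } catch (const std::runtime_error& e) {
        std::cout << "Caught: " << e.what() << std::endl;
    }
    return 0;
}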

concurrentqueue: A fast multi-producer, multi-consumer lock-free concurrent queue for C++11

Pros of concurrentqueue

  • Focused on high-performance lock-free queue implementation
  • Extensively benchmarked and optimized for various use cases
  • Provides both single-producer/single-consumer and multi-producer/multi-consumer variants

Cons of concurrentqueue

  • Limited to queue data structure, while asyncplusplus offers broader async programming utilities
  • May have a steeper learning curve for users unfamiliar with lock-free programming concepts
  • Less suitable for general-purpose asynchronous programming tasks

Code Comparison

concurrentqueue:

moodycamel::ConcurrentQueue<int> q;
q.enqueue(25);
int item;
bool success = q.try_dequeue(item);

asyncplusplus:

async::task<int> t = async::spawn([]() {
    return 25;
});
int result = t.get();

Summary

concurrentqueue excels at providing a highly optimized concurrent queue implementation, ideal for scenarios requiring efficient producer-consumer patterns. asyncplusplus, on the other hand, offers a more comprehensive toolkit for asynchronous programming, including tasks, continuations, and parallel algorithms. The choice depends on the project: concurrentqueue is more suitable for specialized queue-based concurrency needs, while asyncplusplus is better suited for general-purpose asynchronous programming.
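
To make the producer-consumer point concrete, here is a small sketch that uses only the enqueue/try_dequeue calls shown above, with several producer threads feeding one consumer (the single-header include name is assumed to be concurrentqueue.h):

#include "concurrentqueue.h"
#include <iostream>
#include <thread>
#include <vector>

int main() {
    moodycamel::ConcurrentQueue<int> q;

    // Several producers enqueue concurrently without locks
    std::vector<std::thread> producers;
    for (int t = 0; t < 4; t++) {
        producers.emplace_back([&q, t] {
            for (int i = 0; i < 100; i++)
                q.enqueue(t * 100 + i);
        });
    }
    for (auto& p : producers)
        p.join();

    // A single consumer drains the queue
    int item, count = 0;
    while (q.try_dequeue(item))
        count++;
    std::cout << "Dequeued " << count << " items" << std::endl;
    return 0;
}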

taskflow: A General-purpose Task-parallel Programming System using Modern C++

Pros of taskflow

  • More comprehensive task management system with support for complex workflows and dependencies
  • Better performance for large-scale parallel and heterogeneous computing
  • Active development and regular updates

Cons of taskflow

  • Steeper learning curve due to more complex API
  • Potentially overkill for simpler asynchronous programming needs

Code Comparison

taskflow:

tf::Taskflow taskflow;
tf::Executor executor;

taskflow.emplace([](){ std::cout << "Task A\n"; })
        .name("A");

taskflow.emplace([](){ std::cout << "Task B\n"; })
        .name("B");

executor.run(taskflow).wait();

asyncplusplus:

auto taskA = async::spawn([] { std::cout << "Task A\n"; });
auto taskB = taskA.then([] { std::cout << "Task B\n"; });

taskB.get();  // waits for task A, then task B

taskflow offers a more structured approach to defining task dependencies, while asyncplusplus provides a simpler, continuation-based API for basic asynchronous operations. taskflow is better suited for complex parallel computing scenarios, whereas asyncplusplus is more appropriate for straightforward asynchronous programming tasks.
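
The structured-dependency point is easiest to see with taskflow's documented emplace/precede pattern; a minimal sketch:

#include <taskflow/taskflow.hpp>
#include <iostream>

int main() {
    tf::Taskflow taskflow;
    tf::Executor executor;

    auto [A, B, C] = taskflow.emplace(
        [] { std::cout << "Task A\n"; },
        [] { std::cout << "Task B\n"; },
        [] { std::cout << "Task C\n"; });

    // B and C both wait for A; they may then run in parallel
    A.precede(B, C);

    executor.run(taskflow).wait();
    return 0;
}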

oneTBB: oneAPI Threading Building Blocks

Pros of oneTBB

  • More comprehensive parallel programming library with a wider range of features
  • Better performance optimization for Intel processors
  • Actively maintained by Intel with regular updates and improvements

Cons of oneTBB

  • Steeper learning curve due to its extensive feature set
  • Larger codebase and potential overhead for simpler parallel tasks
  • May have less portability across different platforms compared to asyncplusplus

Code Comparison

asyncplusplus:

#include <async++.h>

async::parallel_for(async::irange(0, 100), [](int i) {
    // Parallel task
});

oneTBB:

#include <tbb/parallel_for.h>

tbb::parallel_for(0, 100, [](int i) {
    // Parallel task
});

Both libraries provide similar syntax for parallel_for loops, but oneTBB offers more advanced features and optimizations for complex parallel programming scenarios. asyncplusplus focuses on simplicity and ease of use for basic parallel tasks, while oneTBB provides a more comprehensive toolkit for parallel programming across various domains.
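
Beyond parallel_for, both libraries offer parallel reductions, and the difference in surface area shows up there too: Async++'s parallel_reduce (shown in the README below) takes a range, an initial value, and a combiner, while oneTBB's functional parallel_reduce works over a blocked_range with an explicit join step. A sketch of the oneTBB form:

#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data(100);
    std::iota(data.begin(), data.end(), 1);  // 1..100

    // Functional form: (range, identity, partial reduction, join)
    int sum = tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, data.size()), 0,
        [&](const tbb::blocked_range<std::size_t>& r, int init) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                init += data[i];
            return init;
        },
        std::plus<int>());

    std::cout << "Sum: " << sum << std::endl;
    return 0;
}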

README

Async++

Async++ is a lightweight concurrency framework for C++11. The concept was inspired by the Microsoft PPL library and the N3428 C++ standard proposal.

Example

Here is a short example which shows some features of Async++:

#include <iostream>
#include <async++.h>

int main()
{
    auto task1 = async::spawn([] {
        std::cout << "Task 1 executes asynchronously" << std::endl;
    });
    auto task2 = async::spawn([]() -> int {
        std::cout << "Task 2 executes in parallel with task 1" << std::endl;
        return 42;
    });
    auto task3 = task2.then([](int value) -> int {
        std::cout << "Task 3 executes after task 2, which returned "
                  << value << std::endl;
        return value * 3;
    });
    auto task4 = async::when_all(task1, task3);
    auto task5 = task4.then([](std::tuple<async::task<void>,
                                          async::task<int>> results) {
        std::cout << "Task 5 executes after tasks 1 and 3. Task 3 returned "
                  << std::get<1>(results).get() << std::endl;
    });

    task5.get();
    std::cout << "Task 5 has completed" << std::endl;

    async::parallel_invoke([] {
        std::cout << "This is executed in parallel..." << std::endl;
    }, [] {
        std::cout << "with this" << std::endl;
    });

    async::parallel_for(async::irange(0, 5), [](int x) {
        std::cout << x;
    });
    std::cout << std::endl;

    int r = async::parallel_reduce({1, 2, 3, 4}, 0, [](int x, int y) {
        return x + y;
    });
    std::cout << "The sum of {1, 2, 3, 4} is " << r << std::endl;
}

// Output (order may vary in some places):
// Task 1 executes asynchronously
// Task 2 executes in parallel with task 1
// Task 3 executes after task 2, which returned 42
// Task 5 executes after tasks 1 and 3. Task 3 returned 126
// Task 5 has completed
// This is executed in parallel...
// with this
// 01234
// The sum of {1, 2, 3, 4} is 10

Supported Platforms

The only requirement to use Async++ is a C++11 compiler and standard library. Unfortunately C++11 is not yet fully implemented on most platforms. Here is the list of OS and compiler combinations which are known to work.

  • Linux: Works with GCC 4.7+, Clang 3.2+ and Intel compiler 15+.
  • Mac: Works with Apple Clang (using libc++). GCC also works but you must get a recent version (4.7+).
  • iOS: Works with Apple Clang (using libc++). Note: because iOS has no thread local support, the library uses a workaround based on pthreads.
  • Windows: Works with GCC 4.8+ (with pthread-win32) and Visual Studio 2013+.

Building and Installing

Instructions for compiling Async++ and using it in your code are available on the Building and Installing page.

Documentation

The Async++ documentation is split into four parts:

  • Tasks: This describes task objects, which are the core of Async++. Reading this first is strongly recommended.
  • Parallel algorithms: This describes functions to run work on ranges in parallel.
  • Schedulers: This describes the low-level details of Async++ and how to customize it (a brief example sketch follows this list).
  • API Reference: This gives detailed descriptions of all the classes and functions available in Async++.
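
As a taste of what the Schedulers page covers, the sketch below runs one task inline in the calling thread and another on an explicitly sized pool. The names inline_scheduler and threadpool_scheduler are assumed from that page and should be checked against it.

#include <async++.h>
#include <iostream>

int main() {
    // Assumed API: inline_scheduler() runs the task immediately in the calling thread
    auto t1 = async::spawn(async::inline_scheduler(), [] {
        std::cout << "Runs inline" << std::endl;
    });

    // Assumed API: a thread pool scheduler with a fixed number of worker threads
    async::threadpool_scheduler pool(2);
    auto t2 = async::spawn(pool, [] {
        std::cout << "Runs on a 2-thread pool" << std::endl;
    });

    t1.get();
    t2.get();
    return 0;
}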

Contact

You can contact me by email at amanieu@gmail.com.