lzpong/threadpool

A mini thread pool based on C++11; simple, and it accepts any number of parameters.


Top Related Projects

  • CTPL: Modern and efficient C++ Thread Pool Library
  • thread-pool: Thread pool implementation using C++11 threads

Quick Overview

The lzpong/threadpool repository is a C++11 thread pool implementation. It provides a simple and efficient way to manage and execute tasks concurrently using a fixed number of worker threads, making it easier to parallelize workloads in C++ applications.

Pros

  • Easy to use and integrate into existing C++ projects
  • Supports both function pointers and lambda expressions for task submission
  • Allows retrieval of task results using std::future
  • Lightweight implementation with minimal dependencies

Cons

  • Limited documentation and examples
  • No built-in support for task prioritization
  • Lacks advanced features like thread affinity or work-stealing
  • Not actively maintained (last commit was in 2018)

Code Examples

  1. Creating a thread pool and submitting a simple task:
#include "threadpool.h"

ThreadPool pool(4); // Create a thread pool with 4 worker threads
auto result = pool.enqueue([](int a, int b) { return a + b; }, 3, 4);
std::cout << "Result: " << result.get() << std::endl; // Output: Result: 7
  2. Submitting multiple tasks and retrieving results:
std::vector<std::future<int>> results;
for (int i = 0; i < 8; ++i) {
    results.emplace_back(
        pool.enqueue([i] {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            return i * i;
        })
    );
}

for (auto& result : results) {
    std::cout << result.get() << ' ';
}
// Output: 0 1 4 9 16 25 36 49
  3. Using the thread pool with a custom function:
int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

auto fib_result = pool.enqueue(fibonacci, 20);
std::cout << "Fibonacci(20) = " << fib_result.get() << std::endl;
// Output: Fibonacci(20) = 6765

Getting Started

To use the thread pool in your C++ project:

  1. Download the threadpool.h file from the repository.
  2. Include the header in your C++ file:
    #include "threadpool.h"
    
  3. Create a ThreadPool object with the desired number of worker threads:
    ThreadPool pool(std::thread::hardware_concurrency());
    
  4. Submit tasks to the pool using the enqueue method:
    auto result = pool.enqueue([]() { return "Hello, Thread Pool!"; });
    std::cout << result.get() << std::endl;
    

That's it! You can now use the thread pool to parallelize your workloads efficiently.

Competitor Comparisons

CTPL: Modern and efficient C++ Thread Pool Library

Pros of CTPL

  • More feature-rich with support for future/promise-like functionality
  • Better exception handling and propagation
  • More flexible task queueing with priority support

Cons of CTPL

  • Slightly more complex API compared to threadpool
  • Potentially higher memory overhead due to additional features
  • May have a steeper learning curve for beginners

Code Comparison

CTPL:

ctpl::thread_pool p(4);
auto future = p.push([](int id){ return id; });
std::cout << future.get() << std::endl;

threadpool:

threadpool pool(4);
pool.commit([](){ /* task */ });

Key Differences

  • CTPL offers more advanced features like futures and priorities
  • threadpool has a simpler, more straightforward API
  • CTPL provides better control over task execution and results
  • threadpool is lighter-weight and easier to integrate for basic use cases

Both libraries aim to provide thread pool functionality for C++, but CTPL offers a more comprehensive set of features at the cost of increased complexity. The choice between the two depends on the specific requirements of the project and the desired level of control over thread management and task execution.

thread-pool: Thread pool implementation using C++11 threads

Pros of thread-pool

  • More comprehensive documentation and examples
  • Supports both C++11 and C++14 standards
  • Includes unit tests for better reliability

Cons of thread-pool

  • Larger codebase, potentially more complex to integrate
  • Lacks some advanced features like thread affinity

Code Comparison

thread-pool:

void ThreadPool::Init() {
    const uint32_t num_threads = std::thread::hardware_concurrency();
    for (uint32_t i = 0; i < num_threads; i++) {
        m_threads.emplace_back(&ThreadPool::WorkerThread, this);
    }
}

threadpool:

void ThreadPool::start(int threads) {
    for (int i = 0; i < threads; ++i)
        workers.emplace_back([this] {
            while (true) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(this->queue_mutex);
                    this->condition.wait(lock, [this] { return this->stop || !this->tasks.empty(); });
                    if (this->stop && this->tasks.empty()) return;
                    task = std::move(this->tasks.front());
                    this->tasks.pop();
                }
                task();
            }
        });
}

The thread-pool implementation uses a separate initialization method, while threadpool combines thread creation and task execution logic in a single method. threadpool's approach may be more compact but potentially less flexible for customization.


README

threadpool

A mini thread pool based on C++11; simple, and it accepts any number of parameters.

It manages a task queue and a pool of threads: each cycle, one task is taken from the queue and handed to a thread, over and over. Interestingly, if you restrict the pool to a single thread, you get a pure serial task queue.

The pool accepts variadic functions or anonymous lambda expressions for execution, and the return value of a submitted task can be retrieved.

There is not much code: **the thread pool is complete in about a hundred lines**, and, as the commits show, the arguments are not fixed: there is no limit on the number of parameters! This is thanks to variadic templates.

It supports automatically releasing surplus idle threads, so that many idle threads are not left over after a load peak, and threads shut down more gracefully.

*To give credit where it is due: I "wrote" the code, but the ideas come from the Internet, especially one particular thread pool implementation (I basically copied that implementation, together with another author's implementation and explanation; good things are worth copying!), then combined and simplified them.*

C++11 Language Details

Even understanding the principle does not mean you can write the program: the code above uses quite a few C++11 tricks, briefly described below.

  1. using Task = function<void()> is a type alias, a simpler form of typedef. function<void()> can be thought of as a function type that accepts any function, function object, or lambda whose signature is void(), i.e. no parameters and no return value.
  2. pool.emplace_back([this]{...}) does the same thing as pool.push_back([this]{...}), but performs better because it constructs in place;
  3. pool.emplace_back([this]{...}) constructs a thread object whose entry point is the anonymous lambda;
  4. All objects are initialized with {} rather than (), since the () style is less consistent and more error-prone;
  5. Lambdas: [this]{...}. [] is the capture list; this captures the enclosing this pointer so members outside the lambda can be used. The body runs an infinite loop, and cv_task.wait(lock, [this]{...}) blocks the thread;
  6. decltype(expr) deduces the type of expr; like auto, it acts as a type placeholder. auto f(A a, B b) -> decltype(a+b) is one valid form, but it cannot be written as decltype(a+b) f(A a, B b). Why? Because a and b are not yet declared at that point; that is simply how C++ defines it!
  7. The commit method looks a bit unusual: it can take any number of arguments, the first being the function f and the rest being f's arguments in order (note: if a parameter is a struct/class, prefer passing a pointer, and watch the variable's lifetime). Variadic templates are one of C++11's highlights, and a bright one! As for why it is Args... and args..., that is simply the required syntax!
  8. commit can directly invoke ordinary free functions. To call a class member function there are two options: use bind: .commit(std::bind(&Dog::sayHello, &dog)); or use mem_fn: .commit(std::mem_fn(&Dog::sayHello), &dog);
  9. make_shared constructs a shared_ptr smart pointer, roughly auto p = make_shared<int>(4), after which *p == 4. The benefit of smart pointers is the automatic delete!
  10. bind takes a function f and some of its arguments and returns a curried callable; for example, bind(add, 4) yields something like an add4 function!
  11. forward() is similar to move(): move() turns its argument into an rvalue, while forward() preserves the value category of the original argument (an lvalue stays an lvalue, an rvalue stays an rvalue); this is perfect forwarding;
  12. packaged_task wraps a task function; get_future yields a future, through which the function's return value can be obtained (future.get()). A packaged_task itself can be invoked like a function with ();
  13. queue is the queue class: front() reads the head element, pop() removes it; back() reads the tail element, push() appends to the tail;
  14. lock_guard is a stack-based wrapper around a mutex: it calls lock() in its constructor and unlock() in its destructor, the C++ RAII idea;
  15. condition_variable cv; is a condition variable and must be used with unique_lock. The advantage of unique_lock over lock_guard is that it can unlock() and lock() at any time. The mutex must be held before cv.wait(); wait itself releases the mutex and reacquires it once the condition is satisfied.
  16. Finally, when the thread pool is destructed, join() waits for all tasks to finish before exiting, which is very safe!