oneapi-src/oneTBB

oneAPI Threading Building Blocks (oneTBB)

Top Related Projects

  • Thrust: [ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
  • ncnn: a high-performance neural network inference framework optimized for the mobile platform
  • DirectXMath: an all inline SIMD C++ linear algebra library for use in games and graphics apps

Quick Overview

oneTBB (Threading Building Blocks) is an open-source C++ library developed by Intel for parallel programming on multi-core processors. It provides a rich set of components for efficient development of parallel applications, including parallel algorithms, concurrent containers, and low-level synchronization primitives.

Pros

  • High-level abstractions for parallel programming, reducing complexity
  • Efficient task scheduling and load balancing
  • Cross-platform support (Windows, Linux, macOS)
  • Seamless integration with other parallel programming models (e.g., OpenMP)

Cons

  • Steeper learning curve compared to simpler threading libraries
  • May introduce overhead for very small tasks
  • Limited support for distributed memory systems
  • Requires careful design to avoid race conditions and deadlocks

Code Examples

  1. Parallel for loop:
#include <oneapi/tbb.h>
#include <vector>

void parallel_increment(std::vector<int>& vec) {
    tbb::parallel_for(tbb::blocked_range<size_t>(0, vec.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i) {
                vec[i]++;
            }
        });
}
  2. Parallel reduction:
#include <oneapi/tbb.h>
#include <vector>

int parallel_sum(const std::vector<int>& vec) {
    return tbb::parallel_reduce(tbb::blocked_range<size_t>(0, vec.size()), 0,
        [&](const tbb::blocked_range<size_t>& r, int local_sum) {
            for (size_t i = r.begin(); i != r.end(); ++i) {
                local_sum += vec[i];
            }
            return local_sum;
        },
        std::plus<int>());
}
  3. Concurrent container usage:
#include <oneapi/tbb.h>
#include <iostream>

void concurrent_queue_example() {
    tbb::concurrent_queue<int> queue;
    
    tbb::parallel_for(0, 10, [&](int i) {
        queue.push(i);
    });

    int value;
    while (queue.try_pop(value)) {
        std::cout << value << " ";
    }
}
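
The Quick Overview above also mentions low-level synchronization primitives, which none of the examples use. As an additional sketch (not taken from the upstream samples), a tbb::spin_mutex can guard shared state that is updated from inside a parallel loop:

#include <oneapi/tbb.h>
#include <vector>

// Count how many elements are negative. Each range accumulates locally and
// a spin_mutex serializes the update of the shared counter. For a simple
// counter like this, tbb::parallel_reduce or std::atomic would usually be
// preferable; the mutex is shown purely for illustration.
int count_negatives(const std::vector<int>& vec) {
    int count = 0;
    tbb::spin_mutex mutex;
    tbb::parallel_for(tbb::blocked_range<size_t>(0, vec.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            int local = 0;
            for (size_t i = r.begin(); i != r.end(); ++i) {
                if (vec[i] < 0) {
                    ++local;
                }
            }
            tbb::spin_mutex::scoped_lock lock(mutex);
            count += local;
        });
    return count;
}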

Getting Started

  1. Install oneTBB:

    • On Ubuntu: sudo apt-get install libtbb-dev
    • On macOS with Homebrew: brew install tbb
    • On Windows: download a release from the oneTBB GitHub releases page, or install it as part of the Intel oneAPI toolkit
  2. Include oneTBB in your C++ project:

    • Add #include <oneapi/tbb.h> to your source files
    • Link against the TBB library (e.g., -ltbb on Linux)
  3. Compile and run your program:

    g++ -std=c++17 your_program.cpp -ltbb -o your_program
    ./your_program
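
To verify the setup, a minimal your_program.cpp (a sketch; the file name simply matches the compile command above) could look like this:

#include <oneapi/tbb.h>
#include <iostream>
#include <vector>

int main() {
    // Report how many worker threads the default arena can use.
    std::cout << "Max concurrency: "
              << tbb::this_task_arena::max_concurrency() << "\n";

    // Square 1..100 in parallel and print the last result (should be 10000).
    std::vector<int> data(100);
    tbb::parallel_for(0, 100, [&](int i) { data[i] = (i + 1) * (i + 1); });
    std::cout << "Last element: " << data.back() << "\n";
    return 0;
}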
    

Competitor Comparisons

Thrust

[ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

Pros of Thrust

  • Specifically designed for GPU acceleration, offering excellent performance on NVIDIA hardware
  • Provides a high-level, STL-like interface for parallel algorithms
  • Supports multiple backends (CUDA, OpenMP, TBB) for flexibility

Cons of Thrust

  • Primarily focused on NVIDIA GPUs, limiting portability to other hardware
  • Less comprehensive than oneTBB for general-purpose parallel computing tasks
  • May require more expertise in GPU programming for optimal performance

Code Comparison

Thrust:

#include <thrust/device_vector.h>
#include <thrust/sort.h>

thrust::device_vector<int> d_vec(input.begin(), input.end());
thrust::sort(d_vec.begin(), d_vec.end());

oneTBB:

#include <tbb/parallel_sort.h>

std::vector<int> vec(input.begin(), input.end());
tbb::parallel_sort(vec.begin(), vec.end());

Both libraries offer parallel sorting, but Thrust focuses on GPU execution while oneTBB targets CPU parallelism. Thrust requires explicit device memory management, whereas oneTBB works with standard containers. oneTBB provides a more general-purpose parallel computing solution, while Thrust excels in GPU-accelerated tasks, particularly on NVIDIA hardware.
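
To make the memory-management difference concrete, here is a hedged sketch (assuming the CUDA backend and a std::vector<int> named input, as in the snippets above) of the host/device copies a Thrust version typically performs, which the in-place oneTBB sort avoids:

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <vector>

std::vector<int> sort_with_thrust(const std::vector<int>& input) {
    // Copy host data to the GPU, sort it there, then copy the result back.
    thrust::device_vector<int> d_vec(input.begin(), input.end());
    thrust::sort(d_vec.begin(), d_vec.end());

    std::vector<int> result(input.size());
    thrust::copy(d_vec.begin(), d_vec.end(), result.begin());
    return result;
}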

ncnn

ncnn is a high-performance neural network inference framework optimized for the mobile platform

Pros of ncnn

  • Lightweight and optimized for mobile and embedded devices
  • Supports a wide range of neural network operations
  • Cross-platform compatibility (Android, iOS, Windows, Linux, macOS)

Cons of ncnn

  • Limited to neural network inference, not a general-purpose parallel computing library
  • Smaller community and fewer resources compared to oneTBB
  • May require more manual optimization for specific use cases

Code Comparison

ncnn example (model inference):

ncnn::Net net;
net.load_param("model.param");
net.load_model("model.bin");

ncnn::Mat in = ncnn::Mat::from_pixels(image_data, ncnn::Mat::PIXEL_BGR, width, height);
ncnn::Extractor ex = net.create_extractor();
ex.input("input", in);
ncnn::Mat out;
ex.extract("output", out);

oneTBB example (parallel for loop):

#include <tbb/parallel_for.h>

tbb::parallel_for(0, n, [&](int i) {
    // Parallel computation
    result[i] = compute(data[i]);
});

Summary

ncnn is specialized for neural network inference on mobile and embedded devices, while oneTBB is a more general-purpose parallel computing library. ncnn offers lightweight performance for specific AI tasks, whereas oneTBB provides broader parallel processing capabilities for various applications.

DirectXMath

DirectXMath is an all inline SIMD C++ linear algebra library for use in games and graphics apps

Pros of DirectXMath

  • Specialized for DirectX graphics programming, offering optimized math operations for 3D rendering
  • Lightweight and header-only library, easy to integrate into existing projects
  • Extensive documentation and examples provided by Microsoft

Cons of DirectXMath

  • Limited to Windows platforms and DirectX ecosystem
  • Narrower scope compared to oneTBB, focusing primarily on 3D math operations
  • Less emphasis on parallelism and concurrency features

Code Comparison

DirectXMath:

XMFLOAT3 float3(1.0f, 2.0f, 3.0f);
XMVECTOR v1 = XMLoadFloat3(&float3);
XMVECTOR v2 = XMVectorSet(1.0f, 2.0f, 3.0f, 0.0f);
XMVECTOR result = XMVectorAdd(v1, v2);

oneTBB:

tbb::parallel_for(0, n, [&](int i) {
    result[i] = vector1[i] + vector2[i];
});

DirectXMath focuses on SIMD-optimized vector operations, while oneTBB emphasizes parallel processing across multiple cores. DirectXMath is tailored for graphics programming, offering specialized functions for 3D math. In contrast, oneTBB provides a more general-purpose approach to parallel computing, applicable to a wider range of scenarios beyond graphics.
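
The two approaches can also be combined. As a sketch (add_positions and its inputs are hypothetical), oneTBB can distribute loop iterations across cores while DirectXMath vectorizes the per-element math:

#include <DirectXMath.h>
#include <tbb/parallel_for.h>
#include <vector>

using namespace DirectX;

// Add two arrays of 3D positions: oneTBB spreads the loop over cores,
// DirectXMath performs each addition with SIMD instructions.
void add_positions(std::vector<XMFLOAT3>& a, const std::vector<XMFLOAT3>& b) {
    tbb::parallel_for(size_t(0), a.size(), [&](size_t i) {
        XMVECTOR va = XMLoadFloat3(&a[i]);
        XMVECTOR vb = XMLoadFloat3(&b[i]);
        XMStoreFloat3(&a[i], XMVectorAdd(va, vb));
    });
}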

README

oneAPI Threading Building Blocks (oneTBB)

oneTBB is a flexible C++ library that simplifies the work of adding parallelism to complex applications, even if you are not a threading expert.

The library lets you easily write parallel programs that take full advantage of multi-core performance. Such programs are portable, composable, and have future-proof scalability. oneTBB provides you with functions, interfaces, and classes to parallelize and scale the code. All you have to do is use the templates.

The library differs from typical threading packages in the following ways:

  • oneTBB enables you to specify logical parallelism instead of threads.
  • oneTBB targets threading for performance.
  • oneTBB is compatible with other threading packages.
  • oneTBB emphasizes scalable, data parallel programming.
  • oneTBB relies on generic programming.

Refer to oneTBB examples and samples to see how you can use the library.
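
As an illustration of the first point above, specifying logical parallelism instead of threads, a minimal sketch (not one of the official samples) might use tbb::task_group to express a recursive computation as tasks and leave the thread mapping to the scheduler:

#include <oneapi/tbb.h>

// Naive parallel Fibonacci: each call spawns one child task.
// A real implementation would stop spawning below some cutoff
// to avoid the overhead noted earlier for very small tasks.
int fib(int n) {
    if (n < 2) return n;
    int x = 0, y = 0;
    tbb::task_group tg;
    tg.run([&] { x = fib(n - 1); });  // runs as a task, not a dedicated thread
    y = fib(n - 2);                   // current thread keeps working
    tg.wait();                        // wait for the spawned task
    return x + y;
}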

oneTBB is part of the UXL Foundation and is an implementation of the oneAPI specification.

NOTE: Threading Building Blocks (TBB) is now called oneAPI Threading Building Blocks (oneTBB) to highlight that the tool is a part of the oneAPI ecosystem.

Release Information

See Release Notes and System Requirements.

Documentation

Installation

See Installation from Sources to learn how to install oneTBB.

Governance

The oneTBB project is governed by the UXL Foundation. You can get involved in this project in the following ways:

Support

See our documentation to learn how to request help.

How to Contribute

We welcome community contributions, so check our Contributing Guidelines to learn more.

Use GitHub Issues for feature requests, bug reports, and minor inquiries. For broader questions and development-related discussions, use GitHub Discussions.

License

oneAPI Threading Building Blocks is licensed under the Apache License, Version 2.0. By contributing to the project, you agree that your contributions are licensed under the same terms.

Engineering team contacts


* All names and brands may be claimed as the property of others.