
NVIDIA / cutlass

CUDA Templates for Linear Algebra Subroutines

Top Related Projects

  • Faiss: A library for efficient similarity search and clustering of dense vectors.
  • oneDNN: oneAPI Deep Neural Network Library.
  • DirectML: A high-performance, hardware-accelerated DirectX 12 library for machine learning, providing GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
  • XNNPACK: High-efficiency floating-point neural network inference operators for mobile, server, and Web.
  • FBGEMM: Facebook's General Matrix-Matrix Multiplication library (https://code.fb.com/ml-applications/fbgemm/).

Quick Overview

CUTLASS (CUDA Templates for Linear Algebra Subroutines) is a CUDA C++ template library for efficient linear algebra computations at all levels and scales. It provides a collection of CUDA C++ templates and abstractions for implementing high-performance matrix-multiplication (GEMM) and related computations on NVIDIA GPUs.

Pros

  • Highly optimized for NVIDIA GPUs, offering excellent performance for linear algebra operations
  • Flexible and customizable, allowing users to tailor computations to their specific needs
  • Supports a wide range of data types, including mixed-precision computations
  • Integrates well with other CUDA libraries and frameworks

Cons

  • Steep learning curve due to its template-based design and CUDA-specific concepts
  • Limited to NVIDIA GPUs, not portable to other hardware platforms
  • Documentation can be complex for beginners or those unfamiliar with CUDA programming
  • Requires careful tuning and understanding of GPU architecture for optimal performance

Code Examples

  1. Basic GEMM operation (CUTLASS 2.x device-level API; m, n, k, the leading dimensions lda/ldb/ldc, and the scalars alpha/beta are assumed to be defined by the caller):
#include <cutlass/gemm/device/gemm.h>

// Define the GEMM computation: single-precision A, B, and C, all row-major
using Gemm = cutlass::gemm::device::Gemm<
    float, cutlass::layout::RowMajor,
    float, cutlass::layout::RowMajor,
    float, cutlass::layout::RowMajor>;

// Initialize matrices
float *A, *B, *C;
// ... (allocate and initialize matrices)

// Launch GEMM: C = alpha * A * B + beta * C
Gemm gemm_op;
Gemm::Arguments args({m, n, k}, {A, lda}, {B, ldb}, {C, ldc}, {C, ldc}, {alpha, beta});
cutlass::Status status = gemm_op(args);
  2. Mixed-precision GEMM (FP16 inputs, FP32 accumulation and output):
#include <cutlass/gemm/device/gemm.h>

// Define mixed-precision GEMM: half-precision A and B, single-precision C and accumulator
using Gemm = cutlass::gemm::device::Gemm<
    cutlass::half_t, cutlass::layout::RowMajor,
    cutlass::half_t, cutlass::layout::ColumnMajor,
    float, cutlass::layout::RowMajor,
    float>;

// Initialize matrices
cutlass::half_t *A, *B;
float *C;
// ... (allocate and initialize matrices)

// Launch mixed-precision GEMM
Gemm gemm_op;
Gemm::Arguments args({m, n, k}, {A, lda}, {B, ldb}, {C, ldc}, {C, ldc}, {alpha, beta});
cutlass::Status status = gemm_op(args);
  3. Batched GEMM (the device-level GemmBatched kernel uses strided batching: each matrix in a batch is offset from the previous one by a fixed batch stride; batch_count and the batch_stride_* values are assumed to be defined by the caller):
#include <cutlass/gemm/device/gemm_batched.h>

// Define batched GEMM
using Gemm = cutlass::gemm::device::GemmBatched<
    float, cutlass::layout::RowMajor,
    float, cutlass::layout::ColumnMajor,
    float, cutlass::layout::RowMajor>;

// Initialize batched matrices (batches stored contiguously, separated by batch strides)
float *A, *B, *C;
// ... (allocate and initialize batched matrices)

// Launch batched GEMM
Gemm gemm_op;
Gemm::Arguments args({m, n, k},
                     {A, lda}, batch_stride_A,
                     {B, ldb}, batch_stride_B,
                     {C, ldc}, batch_stride_C,
                     {C, ldc}, batch_stride_C,
                     {alpha, beta},
                     batch_count);
cutlass::Status status = gemm_op(args);
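
The snippets above elide memory management. As a hedged illustration, the sketch below shows one plausible way to allocate the operands with plain CUDA runtime calls and check the cutlass::Status returned by the launch; the problem size, leading dimensions, and scalars are assumptions made for this example only.

#include <cuda_runtime.h>
#include <cutlass/gemm/device/gemm.h>

int main() {
  // Illustrative problem size and scalars (assumptions for this sketch)
  int m = 256, n = 512, k = 128;
  float alpha = 1.0f, beta = 0.0f;
  int lda = k, ldb = n, ldc = n;  // row-major leading dimensions

  // Allocate device memory for A (m x k), B (k x n), and C (m x n)
  float *A, *B, *C;
  cudaMalloc(&A, sizeof(float) * m * k);
  cudaMalloc(&B, sizeof(float) * k * n);
  cudaMalloc(&C, sizeof(float) * m * n);
  // ... fill A, B, and C, e.g. with cudaMemcpy from host buffers

  using Gemm = cutlass::gemm::device::Gemm<
      float, cutlass::layout::RowMajor,
      float, cutlass::layout::RowMajor,
      float, cutlass::layout::RowMajor>;

  // Launch the GEMM and check the returned status
  Gemm gemm_op;
  Gemm::Arguments args({m, n, k}, {A, lda}, {B, ldb}, {C, ldc}, {C, ldc}, {alpha, beta});
  cutlass::Status status = gemm_op(args);
  if (status != cutlass::Status::kSuccess) {
    return -1;
  }

  cudaFree(A);
  cudaFree(B);
  cudaFree(C);
  return 0;
}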

Getting Started

  1. Clone the CUTLASS repository:

    git clone https://github.com/NVIDIA/cutlass.git
    
  2. Build CUTLASS (requires CMake and CUDA Toolkit):

    cd cutlass
    mkdir build && cd build
    cmake .. -DCUTLASS_NVCC_ARCHS=<your_gpu_arch>
    make -j
    
  3. Include CUTLASS in your project:

    #include <cutlass/cutlass.h>
    // Add other necessary headers
    
  4. Compile your project with nvcc (or another CUDA-aware compiler), adding CUTLASS's include/ directory to your include path, as sketched below.
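
For example, a minimal compile command might look like the following (the source file name, install path, and target architecture are placeholders for your setup):

    nvcc -std=c++17 -arch=sm_80 -I/path/to/cutlass/include my_gemm.cu -o my_gemm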

Competitor Comparisons

Faiss

A library for efficient similarity search and clustering of dense vectors.

Pros of Faiss

  • Specialized for efficient similarity search and clustering of dense vectors
  • Supports both CPU and GPU implementations for flexibility
  • Includes a wide range of indexing algorithms for different use cases

Cons of Faiss

  • More focused on vector search, less versatile for general matrix operations
  • May require more setup and configuration for specific use cases
  • Limited to C++ and Python bindings

Code Comparison

Faiss (vector search):

import faiss
index = faiss.IndexFlatL2(dimension)
index.add(vectors)
distances, indices = index.search(query_vectors, k)

CUTLASS (matrix multiplication):

#include <cutlass/gemm/device/gemm.h>
using Gemm = cutlass::gemm::device::Gemm<float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor>;
Gemm gemm_op;
gemm_op(args...);

Key Differences

  • Faiss is optimized for vector search and clustering, while CUTLASS focuses on efficient matrix computations
  • CUTLASS provides low-level building blocks for CUDA kernels, whereas Faiss offers higher-level abstractions for similarity search
  • Faiss has a broader range of indexing algorithms, while CUTLASS excels in performance-critical linear algebra operations

oneDNN

oneAPI Deep Neural Network Library (oneDNN)

Pros of oneDNN

  • Supports a wider range of hardware, including CPUs and GPUs from multiple vendors
  • Offers a more comprehensive set of deep learning primitives and operations
  • Provides better integration with popular deep learning frameworks like TensorFlow and PyTorch

Cons of oneDNN

  • May have slightly lower performance on NVIDIA GPUs compared to CUTLASS
  • Less focused on tensor core optimizations for NVIDIA hardware
  • Potentially more complex API due to its broader hardware support

Code Comparison

CUTLASS (GEMM operation):

using Gemm = cutlass::gemm::device::Gemm<float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor>;
Gemm gemm_op;
gemm_op(args...);

oneDNN (GEMM operation):

auto gemm_pd = dnnl::matmul::primitive_desc(gemm_d, engine);
auto gemm = dnnl::matmul(gemm_pd);
gemm.execute(stream, args...);

Both libraries provide high-performance implementations for deep learning operations, but CUTLASS is more specialized for NVIDIA GPUs, while oneDNN offers broader hardware support and integration with popular frameworks.

DirectML

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. It provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.

Pros of DirectML

  • Cross-platform support for various hardware accelerators, not limited to NVIDIA GPUs
  • Integration with DirectX ecosystem, beneficial for Windows developers
  • Higher-level abstraction, potentially easier for beginners to use

Cons of DirectML

  • Less specialized for specific GPU architectures, potentially lower performance in some cases
  • Smaller community and fewer resources compared to CUTLASS
  • Limited to Windows and Xbox platforms, less suitable for Linux or macOS development

Code Comparison

CUTLASS (GEMM operation):

using Gemm = cutlass::gemm::device::Gemm<float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor>;
Gemm gemm_op;
gemm_op(args...);

DirectML (GEMM operation):

auto gemmDesc = DML_GEMM_OPERATOR_DESC{...};
DML_OPERATOR_DESC desc = {DML_OPERATOR_GEMM, &gemmDesc};
IDMLOperator* gemmOp;
device->CreateOperator(&desc, IID_PPV_ARGS(&gemmOp));

Both libraries provide efficient implementations for matrix operations, but CUTLASS offers more fine-grained control over CUDA kernels, while DirectML provides a higher-level API integrated with DirectX.

XNNPACK

High-efficiency floating-point neural network inference operators for mobile, server, and Web

Pros of XNNPACK

  • Supports a wider range of hardware platforms, including mobile and embedded devices
  • Offers optimizations for ARM processors, making it suitable for mobile development
  • Provides a more extensive set of neural network operators and functions

Cons of XNNPACK

  • May not achieve the same level of performance as CUTLASS on NVIDIA GPUs
  • Lacks specialized optimizations for tensor core operations found in modern NVIDIA GPUs

Code Comparison

XNNPACK example (C++):

xnn_initialize(nullptr);
xnn_operator_t conv_op = nullptr;
xnn_status status = xnn_create_convolution2d_nhwc_f32(
  /* ... parameters ... */
  &conv_op);

CUTLASS example (C++):

using Gemm = cutlass::gemm::device::Gemm<float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor,
                                         float, cutlass::layout::ColumnMajor>;
Gemm gemm_op;
gemm_op(args...);

Both libraries provide low-level primitives for efficient neural network computations, but CUTLASS is more focused on NVIDIA GPU optimizations, while XNNPACK offers broader platform support and a more extensive set of operators.

FBGEMM

FBGEMM (Facebook General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/

Pros of FBGEMM

  • Optimized for CPU performance, especially on x86 architectures
  • Supports quantization-aware training and inference
  • Integrated with PyTorch, offering seamless compatibility

Cons of FBGEMM

  • Limited GPU support compared to CUTLASS
  • Narrower focus on specific use cases (e.g., recommendation systems)

Code Comparison

FBGEMM (C++):

fbgemm::PackAMatrix<int8_t> packA(
    matrix_op_t::NoTranspose, M, K, A, K, nullptr, 1);
fbgemm::PackBMatrix<int8_t> packB(
    matrix_op_t::NoTranspose, K, N, B, N, nullptr, 1);
fbgemm::DoNothing<> doNothingObj{};
fbgemm::ReQuantizeForFloat<> outputProcObj(doNothingObj);
fbgemm::fbgemmPacked(packA, packB, C, N, outputProcObj, 0, 1);

CUTLASS (C++):

using Gemm = cutlass::gemm::device::Gemm<
    cutlass::half_t, cutlass::layout::ColumnMajor,
    cutlass::half_t, cutlass::layout::ColumnMajor,
    cutlass::half_t, cutlass::layout::ColumnMajor,
    float>;
Gemm gemm_op;
gemm_op(args...);

Both libraries offer high-performance matrix multiplication, but CUTLASS is more focused on GPU acceleration, while FBGEMM targets CPU optimization. CUTLASS provides a more flexible and extensive set of GEMM operations, whereas FBGEMM specializes in quantized operations for specific use cases.


README


CUTLASS 3.5.1 - July 2024

CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA. It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN. CUTLASS decomposes these "moving parts" into reusable, modular software components abstracted by C++ template classes. Primitives for different levels of a conceptual parallelization hierarchy can be specialized and tuned via custom tiling sizes, data types, and other algorithmic policies. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.
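
For instance, the device-level GEMM template exposes the threadblock, warp, and instruction tile shapes as template arguments. The sketch below is one illustrative specialization targeting Ampere Tensor Cores; the tile shapes and types shown are example choices for this sketch, not a recommendation.

// One possible specialization: FP16 inputs, FP32 accumulation, explicit tile shapes
#include <cutlass/gemm/device/gemm.h>

using GemmTensorOp = cutlass::gemm::device::Gemm<
    cutlass::half_t, cutlass::layout::ColumnMajor,   // A: element type and layout
    cutlass::half_t, cutlass::layout::ColumnMajor,   // B
    float,           cutlass::layout::ColumnMajor,   // C
    float,                                           // accumulator type
    cutlass::arch::OpClassTensorOp,                  // use Tensor Core math
    cutlass::arch::Sm80,                             // target architecture
    cutlass::gemm::GemmShape<128, 128, 32>,          // threadblock tile (M, N, K)
    cutlass::gemm::GemmShape<64, 64, 32>,            // warp tile
    cutlass::gemm::GemmShape<16, 8, 16>>;            // Tensor Core instruction shape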

To support a wide variety of applications, CUTLASS provides extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions for half-precision floating point (FP16), BFloat16 (BF16), Tensor Float 32 (TF32), single-precision floating point (FP32), FP32 emulation via tensor core instruction, double-precision floating point (FP64) types, integer data types (4b and 8b), and binary data types (1b). CUTLASS demonstrates warp-synchronous matrix multiply operations targeting the programmable, high-throughput Tensor Cores implemented by NVIDIA's Volta, Turing, Ampere, and Hopper architectures.

See the Quick Start Guide to get started quickly.

See the functionality listing for the list of operations supported at each level of the execution model hierarchy.

CUTLASS 3.0 introduced a new core library, CuTe, to describe and manipulate tensors of threads and data. CuTe is a collection of C++ CUDA template abstractions for defining and operating on hierarchically multidimensional layouts of threads and data. CuTe provides Layout and Tensor objects that compactly package the type, shape, memory space, and layout of data, while performing the complicated indexing for the user. This lets programmers focus on the logical descriptions of their algorithms while CuTe does the mechanical bookkeeping for them. With these tools, we can quickly design, implement, and modify all dense linear algebra operations.

The core abstractions of CuTe are hierarchically multidimensional layouts which can be composed with data arrays to represent tensors. The representation of layouts is powerful enough to represent nearly everything we need to implement efficient dense linear algebra. Layouts can also be combined and manipulated via functional composition, on which we build a large set of common operations such as tiling and partitioning.
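
As a brief illustration (a minimal sketch; the shapes and the helper function name are invented for this example), the fragment below builds a CuTe Layout, wraps a raw device pointer into a Tensor, and carves it into tiles via layout division:

#include <cute/tensor.hpp>

using namespace cute;

// Hypothetical helper showing the Layout/Tensor vocabulary
void cute_layout_sketch(float* ptr) {
  // An 8x16 column-major layout: shape (8,16) with strides (1,8)
  auto layout = make_layout(make_shape(Int<8>{}, Int<16>{}),
                            make_stride(Int<1>{}, Int<8>{}));

  // A Tensor packages the pointer with its layout; CuTe performs the indexing
  Tensor A = make_tensor(make_gmem_ptr(ptr), layout);

  // Divide the tensor into 4x4 tiles. The result is indexed first by the
  // coordinate within a tile, then by the coordinate of the tile itself.
  Tensor tiled = zipped_divide(A, make_shape(Int<4>{}, Int<4>{}));
  Tensor tile00 = tiled(make_coord(_, _), make_coord(0, 0));  // the first 4x4 tile
  (void)tile00;
}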

CUTLASS 3.0 and beyond adopts CuTe throughout the GEMM hierarchy in its templates. This greatly simplifies the design and improves code composability and readability. More documentation specific to CuTe can be found in its dedicated documentation directory.

In addition to GEMMs, CUTLASS implements high-performance convolution via the implicit GEMM algorithm. Implicit GEMM is the formulation of a convolution operation as a GEMM thereby taking advantage of CUTLASS's modular GEMM pipeline. This allows CUTLASS to build convolutions by reusing highly-optimized GEMM components.
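
As a rough sketch of how this looks at the device level, the fragment below composes a 2-D forward-propagation (fprop) convolution from the implicit GEMM building blocks, loosely following the pattern of CUTLASS's Ampere Tensor Core convolution examples; every type and tile shape here is an illustrative assumption rather than a required configuration.

#include <cutlass/conv/kernel/default_conv2d_fprop.h>
#include <cutlass/conv/device/implicit_gemm_convolution.h>
#include <cutlass/epilogue/thread/linear_combination.h>
#include <cutlass/gemm/threadblock/threadblock_swizzle.h>

// Kernel-level fprop specialization: FP16 NHWC activation/filter, FP32 NHWC output
using Conv2dFpropKernel = typename cutlass::conv::kernel::DefaultConv2dFprop<
    cutlass::half_t, cutlass::layout::TensorNHWC,    // activation
    cutlass::half_t, cutlass::layout::TensorNHWC,    // filter
    float,           cutlass::layout::TensorNHWC,    // output
    float,                                           // accumulator
    cutlass::arch::OpClassTensorOp,
    cutlass::arch::Sm80,
    cutlass::gemm::GemmShape<128, 128, 32>,          // threadblock tile
    cutlass::gemm::GemmShape<64, 64, 32>,            // warp tile
    cutlass::gemm::GemmShape<16, 8, 16>,             // Tensor Core instruction shape
    cutlass::epilogue::thread::LinearCombination<float, 4, float, float>,
    cutlass::gemm::threadblock::GemmIdentityThreadblockSwizzle<>,
    3,                                               // pipeline stages
    cutlass::arch::OpMultiplyAdd,
    cutlass::conv::IteratorAlgorithm::kOptimized
>::Kernel;

// Device-level operator that reuses the GEMM pipeline to run the convolution
using ImplicitGemmFprop = cutlass::conv::device::ImplicitGemmConvolution<Conv2dFpropKernel>;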

What's New in CUTLASS 3.5

CUTLASS 3.5.1 is a minor update to CUTLASS containing small bug fixes and improvements, including fixes for FlashAttention-2 builds.

CUTLASS 3.5.0 is an update to CUTLASS adding:

  • Implicit GEMM Convolutions targeting Hopper SM90A via WGMMA + TMA im2col.
    • Native implementation in CUTLASS 3.x using CuTe, mirroring the same design hierarchy as that of GEMMs.
    • Support for 1D, 2D, and 3D convolutions in a rank-agnostic fashion.
    • Support for Fprop, Dgrad, and Wgrad algorithms.
    • CUTLASS profiler support for 2D and 3D convolutions implemented via the 3.x API.
    • NOTE: this is a beta release. Further updates to CUTLASS will include major performance improvements, feature enablement, and possible breaking changes to the API until 3.7 release. Your feedback is welcome on the design!
  • Support for Ada (SM89) FP8 tensor cores via the 2.x API. Requires CUDA 12.4 or newer.
  • Ampere gather/scatter convolution example in CuTe and CUTLASS 3.x.
    • Showcasing how custom kernels can be written and optimized using CUTLASS 3.x and CuTe and the general strategy for implementing convolutions as specializations of GETTs.
    • Implementation of a coarse grained sparse gather/scatter kernel achieving peak performance on Ampere class tensor cores.
  • 32x and 16x tile sizes are added to CUTLASS 2.x to improve the performance of narrow-tall and wide-short matrices.
  • Updates to CuTe documentation for cute::Tensor<>, MMA atoms, and an overhauled CuTe GEMM tutorial series.
  • Extensions to CuTe to support L2 prefetching and TMA store+reductions.
  • Remove C++11 requirement on a few CUTLASS 2.x API header files. All CUTLASS files now require C++17.
  • Fixes to greatly reduce build warnings.
  • Updates and bugfixes from the community (thanks!)

Minimum requirements:

  • Architecture: Volta
  • Compiler: Must support at least C++17
  • CUDA Toolkit version: 11.4

Starting from CUTLASS 3.0, CUTLASS removed support for the following:

  • Maxwell and Pascal GPU architectures
  • Ubuntu 16.04
  • CUDA 10.2
  • C++ language versions less than 17.

See the CHANGELOG for a detailed listing of releases and updates.

Performance

CUTLASS primitives are very efficient. When used to construct device-wide GEMM kernels, they exhibit peak performance comparable to cuBLAS for scalar GEMM computations. The above figure shows the continual CUTLASS performance improvements on an NVIDIA H100 (NVIDIA Hopper architecture) since CUTLASS 3.1. CUTLASS 3.5.1 was compiled with the CUDA 12.5u1 Toolkit. Tensor Core operations are implemented using CUDA's mma and wgmma instructions.

When using CUTLASS building blocks to construct device-wide implicit gemm (Fprop, Dgrad, and Wgrad) kernels, CUTLASS performance is also comparable to cuDNN when running Resnet-50 layers on an NVIDIA A100 as shown in the above figure. Tensor Core operations are implemented using CUDA's mma instruction.

Compatibility

CUTLASS requires a C++17 host compiler and performs best when built with the CUDA 12.4 Toolkit. It is also compatible with CUDA 11.4, CUDA 11.5, CUDA 11.6, CUDA 11.7, CUDA 11.8, CUDA 12.0, CUDA 12.1, CUDA 12.2.2, CUDA 12.3.1 and CUDA 12.3.2.

Operating Systems

We have tested the following environments.

Operating System | Compiler
Ubuntu 18.04     | GCC 7.5.0
Ubuntu 20.04     | GCC 10.3.0
Ubuntu 22.04     | GCC 11.2.0
Ubuntu 22.04     | Clang 10.0.0
Ubuntu 22.04     | Clang 14.0.6
Ubuntu 22.04     | Clang 17.0.6
Windows 10.0     | Visual Studio 2019 v16.11.27

Note: GCC 8.5.0 has known regressions regarding fold expressions and overloaded operators. Using GCC 7.5.0 or (preferred) GCC >= 9 is recommended.

Hardware

CUTLASS runs successfully on the following NVIDIA GPUs, and it is expected to be efficient on Volta, Turing, Ampere, Ada, and Hopper architecture based NVIDIA GPUs.

GPU                                    | CUDA Compute Capability | Minimum CUDA Toolkit Required by CUTLASS-3
NVIDIA V100 Tensor Core GPU            | 7.0                     | 11.4
NVIDIA TitanV                          | 7.0                     | 11.4
NVIDIA GeForce RTX 2080 TI, 2080, 2070 | 7.5                     | 11.4
NVIDIA T4                              | 7.5                     | 11.4
NVIDIA A100 Tensor Core GPU            | 8.0                     | 11.4
NVIDIA A10                             | 8.6                     | 11.4
NVIDIA GeForce RTX 3090                | 8.6                     | 11.4
NVIDIA GeForce RTX 4090                | 8.9                     | 11.8
NVIDIA L40                             | 8.9                     | 11.8
NVIDIA H100 Tensor Core GPU            | 9.0                     | 11.8

Target Architecture

In general, PTX code generated for one target architecture can be run on future architectures (i.e., it is forward compatible). However, CUDA 12.0 introduced the concept of "architecture-accelerated features" whose PTX does not have forward compatibility guarantees. Several Hopper PTX instructions fall under this category of architecture-accelerated features, and thus require a sm_90a target architecture (note the "a" appended). For more details on this and other architecture-accelerated instructions, please refer to the CUDA Documentation.

The target architecture information is passed on to CUTLASS via the cmake flag CUTLASS_NVCC_ARCHS. In order to maximize performance on Hopper GH100, users are required to build CUTLASS with 90a as the target architecture. If a user accidentally builds a kernel which uses SM90a features (e.g. Hopper Tensor Core Instructions), using the SM90 target (note the lack of "a"), with either CTK 12 or 11.8, the kernel is expected to fail with a runtime error.

cmake .. -DCUTLASS_NVCC_ARCHS="90a" 

Please refer to the functionality documentation for details on which kernels require which target architectures.

Documentation

CUTLASS is described in the following documents and the accompanying Doxygen documentation.

Resources

We have also described the structure of an efficient GEMM in our talk at the GPU Technology Conference 2018.

Building CUTLASS

CUTLASS is a header-only template library and does not need to be built to be used by other projects. Client applications should target CUTLASS's include/ directory in their include paths.

CUTLASS unit tests, examples, and utilities can be built with CMake. The minimum version of CMake is given in the Quickstart guide. Make sure the CUDACXX environment variable points to NVCC in the CUDA Toolkit installed on your system.

$ export CUDACXX=${CUDA_INSTALL_PATH}/bin/nvcc

Create a build directory within the CUTLASS project, then run CMake. By default CUTLASS will build kernels for CUDA architecture versions 5.0, 6.0, 6.1, 7.0, 7.5, 8.0, 8.6, 8.9, and 9.0. To reduce compile time you can specify the architectures to build CUTLASS for by changing the CMake configuration setting CUTLASS_NVCC_ARCHS.

$ mkdir build && cd build

$ cmake .. -DCUTLASS_NVCC_ARCHS=80               # compiles for NVIDIA's Ampere Architecture

From the build/ directory, compile and run the CUTLASS unit tests by building the target test_unit with make.

The unit tests are organized as several binaries mirroring the top-level namespaces of CUTLASS, and they may be executed in parallel via make's -j command line argument.

$ make test_unit -j
...
...
...
[----------] Global test environment tear-down
[==========] 946 tests from 57 test cases ran. (10812 ms total)
[  PASSED  ] 946 tests.

All tests should pass on supported platforms, though the exact number of tests may vary over time.

Project Structure

CUTLASS is arranged as a header-only library along with Utilities, Tools, Examples, and unit tests. Doxygen documentation provides a complete list of files, classes, and template concepts defined in the CUTLASS project.

A detailed explanation of the source code organization may be found in the CUTLASS documentation, but several main components are summarized below.

CUTLASS Template Library

include/                     # client applications should target this directory in their build's include paths

  cutlass/                   # CUDA Templates for Linear Algebra Subroutines and Solvers - headers only

    arch/                    # direct exposure of architecture features (including instruction-level GEMMs)

    conv/                    # code specialized for convolution

    epilogue/                # code specialized for the epilogue of gemm/convolution

    gemm/                    # code specialized for general matrix product computations

    layout/                  # layout definitions for matrices, tensors, and other mathematical objects in memory

    platform/                # CUDA-capable Standard Library components

    reduction/               # bandwidth-limited reduction kernels that do not fit the "gemm" model

    thread/                  # simt code that can be performed within a CUDA thread
    
    transform/               # code specialized for layout, type, and domain transformations

    *                        # core vocabulary types, containers, and basic numeric operations

  cute/                      # CuTe Layout, layout algebra, MMA/Copy atoms, tiled MMA/Copy

    algorithm/               # Definitions of core operations such as copy, gemm, and operations on cute::tuples

    arch/                    # Bare bones PTX wrapper structs for copy and math instructions

    atom/                    # Meta-information that either links to or is built from arch/ operators

      mma_atom.hpp           # cute::Mma_Atom and cute::TiledMma

      copy_atom.hpp          # cute::Copy_Atom and cute::TiledCopy

      *sm*.hpp               # Arch specific meta-information for copy and math operations

    *                        # Core library types such as Shape, Stride, Layout, Tensor, and associated operations

CUTLASS SDK Examples

CUTLASS SDK examples apply CUTLASS templates to implement basic computations.

Tools

tools/
  library/                   # CUTLASS Instance Library - contains instantiations of all supported CUTLASS templates
    include/
      cutlass/
        library/

  profiler/                  # CUTLASS Profiler         - command-line utility for executing operations in the
                             #                            CUTLASS Library
  
  util/                      # CUTLASS Utilities        - contains numerous helper classes for
    include/                 #                            managing tensors in device memory, reference
      cutlass/               #                            implementations for GEMM, random initialization
        util/                #                            of tensors, and I/O.

Test

The test/unit/ directory consists of unit tests implemented with Google Test that demonstrate basic usage of Core API components and complete tests of the CUTLASS GEMM computations.

Instructions for building and running the Unit tests are described in the Quickstart guide.

Performance Profiling

The tools/profiler/ directory contains a command-line utility for launching each of the GEMM kernels. It can be built as follows:

$ make cutlass_profiler -j16

Building all GEMM and Convolution kernels (long build times)

By default, only one tile size is instantiated for each data type, math instruction, and layout. To instantiate all of them, set the following CMake option when running CMake from an empty build/ directory. Beware: this results in tens of thousands of kernels and long build times. It also produces a large binary and, on some platforms, can cause the linker to fail when building the library. It is therefore highly recommended to generate only a subset of kernels, as demonstrated in the sub-section below.

$ cmake .. -DCUTLASS_NVCC_ARCHS=90a -DCUTLASS_LIBRARY_KERNELS=all
...
$ make cutlass_profiler -j16

Building a subset of GEMM and Convolution kernels (reduced build times)

To compile strictly one kernel or a small set of kernels, a comma-delimited list of kernel names with wildcard characters may be used to reduce the set of kernels. The following examples show building exactly one or a subset of kernels for NVIDIA Ampere and Turing architecture:

Building a subset Tensor Core GEMM kernels

To compile a subset of Tensor Core GEMM kernels with FP32 accumulation and FP16 input targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:

$ cmake .. -DCUTLASS_NVCC_ARCHS='75;80' -DCUTLASS_LIBRARY_KERNELS=cutlass_tensorop_s*gemm_f16_*_nt_align8
...
$ make cutlass_profiler -j16

Example command line for profiling a subset of Tensor Core GEMM kernels is as follows:

$ ./tools/profiler/cutlass_profiler --kernels=cutlass_tensorop_s*gemm_f16_*_nt_align8 --m=3456 --n=4096 --k=4096

...
=============================
  Problem ID: 1

        Provider: CUTLASS
   OperationKind: gemm
       Operation: cutlass_tensorop_s1688gemm_f16_256x128_32x2_nt_align8

          Status: Success
    Verification: ON
     Disposition: Passed

reference_device: Passed
          cuBLAS: Passed

       Arguments: --gemm_kind=universal --m=3456 --n=4096 --k=4096 --A=f16:column --B=f16:row --C=f32:column --alpha=1  \
                  --beta=0 --split_k_slices=1 --batch_count=1 --op_class=tensorop --accum=f32 --cta_m=256 --cta_n=128  \
                  --cta_k=32 --stages=2 --warps_m=4 --warps_n=2 --warps_k=1 --inst_m=16 --inst_n=8 --inst_k=8 --min_cc=75  \
                  --max_cc=1024

           Bytes: 118489088  bytes
           FLOPs: 115992428544  flops

         Runtime: 1.55948  ms
          Memory: 70.7616 GiB/s

            Math: 74378.8 GFLOP/s



=============================
...

Building one CUDA Core GEMM kernel

To compile one SGEMM kernel targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:

$ cmake .. -DCUTLASS_NVCC_ARCHS='75;80' -DCUTLASS_LIBRARY_KERNELS=cutlass_simt_sgemm_128x128_8x2_nn_align1
...
$ make cutlass_profiler -j16

Example command line for profiling single SGEMM CUDA kernel is as follows:

$ ./tools/profiler/cutlass_profiler --kernels=sgemm --m=3456 --n=4096 --k=4096

=============================
  Problem ID: 1

        Provider: CUTLASS
   OperationKind: gemm
       Operation: cutlass_simt_sgemm_128x128_8x2_nn_align1

          Status: Success
    Verification: ON
     Disposition: Passed

          cuBLAS: Passed

       Arguments: --m=3456 --n=4096 --k=4096 --A=f32:column --B=f32:column --C=f32:column --alpha=1 --beta=0 --split_k_slices=1  \
                  --batch_count=1 --op_class=simt --accum=f32 --cta_m=128 --cta_n=128 --cta_k=8 --stages=2 --warps_m=4  \
                  --warps_n=2 --warps_k=1 --inst_m=1 --inst_n=1 --inst_k=1 --min_cc=50 --max_cc=1024

           Bytes: 180355072  bytes
           FLOPs: 115992428544  flops

         Runtime: 6.73655  ms
          Memory: 24.934 GiB/s

            Math: 17218.4 GFLOP/s

=============================

Building a subset of Tensor Core Convolution kernels

To compile a subset of Tensor core convolution kernels implementing forward propagation (fprop) with FP32 accumulation and FP16 input targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:

$ cmake .. -DCUTLASS_NVCC_ARCHS='75;80' -DCUTLASS_LIBRARY_KERNELS=cutlass_tensorop_s*fprop_optimized_f16
...
$ make cutlass_profiler -j16

Example command line for profiling a subset of Tensor Core convolution kernels is as follows:

$ ./tools/profiler/cutlass_profiler --kernels=cutlass_tensorop_s*fprop_optimized_f16 --n=8 --h=224 --w=224 --c=128 --k=128 --r=3 --s=3

...
=============================
  Problem ID: 1

        Provider: CUTLASS
   OperationKind: conv2d
       Operation: cutlass_tensorop_s16816fprop_optimized_f16_128x128_32x5_nhwc

          Status: Success
    Verification: ON
     Disposition: Passed

reference_device: Passed

       Arguments: --conv_kind=fprop --n=8 --h=224 --w=224 --c=128 --k=128 --r=3 --s=3 --p=224 --q=224 --pad_h=1 --pad_w=1  \
                  --stride_h=1 --stride_w=1 --dilation_h=1 --dilation_w=1 --Activation=f16:nhwc --Filter=f16:nhwc --Output=f32:nhwc  \
                  --conv_mode=cross --iterator_algorithm=optimized --alpha=1 --beta=0 --split_k_mode=serial --split_k_slices=1  \
                  --eq_gemm_provider=none --op_class=tensorop --accum=f32 --cta_m=128 --cta_n=128 --cta_k=32 --stages=5  \
                  --warps_m=2 --warps_n=2 --warps_k=1 --inst_m=16 --inst_n=8 --inst_k=16 --min_cc=80 --max_cc=1024

           Bytes: 1130659840  bytes
           FLOPs: 118482796544  flops

         Runtime: 0.711496  ms
          Memory: 1479.99 GiB/s

            Math: 166526 GFLOP/s

=============================
...

Building one Convolution CUDA kernel

To compile and run one CUDA Core convolution kernel implementing forward propagation (fprop) with FP32 accumulation and FP32 input targeting NVIDIA Ampere and Turing architecture, use the below cmake command line:

$ cmake .. -DCUTLASS_NVCC_ARCHS='75;80' -DCUTLASS_LIBRARY_KERNELS=cutlass_simt_sfprop_optimized_128x128_8x2_nhwc
...
$ make cutlass_profiler -j16

Example command line for profiling one CUDA Core convolution kernel:

$ ./tools/profiler/cutlass_profiler --kernels=cutlass_simt_sfprop_optimized_128x128_8x2_nhwc --n=8 --h=224 --w=224 --c=128 --k=128 --r=3 --s=3


=============================
  Problem ID: 1

        Provider: CUTLASS
   OperationKind: conv2d
       Operation: cutlass_simt_sfprop_optimized_128x128_8x2_nhwc

          Status: Success
    Verification: ON
     Disposition: Passed

reference_device: Passed

       Arguments: --conv_kind=fprop --n=8 --h=224 --w=224 --c=128 --k=128 --r=3 --s=3 --p=224 --q=224 --pad_h=1 --pad_w=1  \
                  --stride_h=1 --stride_w=1 --dilation_h=1 --dilation_w=1 --Activation=f32:nhwc --Filter=f32:nhwc --Output=f32:nhwc  \
                  --conv_mode=cross --iterator_algorithm=optimized --alpha=1 --beta=0 --split_k_mode=serial --split_k_slices=1  \
                  --eq_gemm_provider=none --op_class=simt --accum=f32 --cta_m=128 --cta_n=128 --cta_k=8 --stages=2 --warps_m=4  \
                  --warps_n=2 --warps_k=1 --inst_m=1 --inst_n=1 --inst_k=1 --min_cc=50 --max_cc=1024

           Bytes: 2055798784  bytes
           FLOPs: 118482796544  flops

         Runtime: 7.34266  ms
          Memory: 260.752 GiB/s

            Math: 16136.2 GFLOP/s


=============================

More Details on Compiling CUTLASS Kernels and CUTLASS Profiler

About

CUTLASS is released by NVIDIA Corporation as Open Source software under the 3-clause "New" BSD license.

Contributors

The official list of CUTLASS developers and contributors is available here: CONTRIBUTORS.

Copyright

Copyright (c) 2017 - 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved. SPDX-License-Identifier: BSD-3-Clause

  Redistribution and use in source and binary forms, with or without
  modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

  3. Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
  DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
  FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
  SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
  CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
  OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.