
ROCm

AMD ROCm™ Software - GitHub Home

Top Related Projects

  • ROCm: AMD ROCm™ Software - GitHub Home
  • oneAPI-samples: Samples for Intel® oneAPI Toolkits
  • compute-runtime: Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
  • pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • tensorflow: An Open Source Machine Learning Framework for Everyone

Quick Overview

The ROCm (Radeon Open Compute) project is an open-source software platform for GPU-accelerated computing on AMD hardware. It provides a comprehensive suite of tools, libraries, and runtime environments for developers to leverage the power of AMD's Radeon GPUs in their applications.

Pros

  • High-Performance Computing: ROCm is designed to deliver high-performance computing capabilities, making it suitable for a wide range of GPU-accelerated workloads, including machine learning, scientific computing, and data analytics.
  • AMD Hardware Support: ROCm is optimized for AMD's Radeon GPUs, providing seamless integration and support for the latest hardware advancements.
  • Open-Source: The ROCm project is open-source, allowing developers to contribute, collaborate, and customize the platform to suit their specific needs.
  • Ecosystem Integration: ROCm integrates with popular open-source frameworks and libraries, such as TensorFlow, PyTorch, and OpenCL, enabling developers to leverage existing tools and workflows.

Cons

  • Limited Platform Support: ROCm is primarily focused on AMD hardware, which may limit its adoption in environments where other GPU vendors (e.g., NVIDIA) are more prevalent.
  • Steep Learning Curve: Transitioning from other GPU computing platforms to ROCm may require a significant learning curve, as developers need to familiarize themselves with the ROCm-specific tools and workflows.
  • Compatibility Challenges: Maintaining compatibility with the latest hardware and software versions can be a challenge, as the ROCm ecosystem evolves rapidly.
  • Documentation and Community: While the ROCm project has a growing community, the documentation and community support may not be as extensive as some other GPU computing platforms.

Code Examples

ROCm is a software platform rather than a single code library, so the following examples demonstrate how frameworks and APIs running on top of ROCm are used for GPU-accelerated computing:

# Example 1: Using ROCm with TensorFlow
# (the tensorflow-rocm build exposes AMD GPUs under the usual '/device:GPU:N' names)
import tensorflow as tf

with tf.device('/device:GPU:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
    c = tf.matmul(a, b)
    print(c)

This example shows how to use ROCm with TensorFlow to perform a matrix multiplication on the GPU.
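Before running the example, it can help to confirm that the ROCm-enabled TensorFlow build actually sees a GPU. A minimal check (on a CPU-only build the list is simply empty):

```python
import tensorflow as tf

# Lists the GPUs TensorFlow can see; on a ROCm-enabled build this
# includes AMD GPUs, and on a CPU-only build it is an empty list.
gpus = tf.config.list_physical_devices('GPU')
print(gpus)
```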

// Example 2: Using ROCm with OpenCL
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    // Select the first platform and its first GPU device; the ROCm
    // OpenCL runtime exposes AMD GPUs here.
    err = clGetPlatformIDs(1, &platform, NULL);
    if (err != CL_SUCCESS) return 1;
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) return 1;

    // Create a context and a command queue for the device.
    cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueueWithProperties(context, device, NULL, &err);

    // Build a program, set kernel arguments, and enqueue kernels here.

    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}

This example demonstrates the integration of ROCm with OpenCL, allowing developers to leverage the GPU for general-purpose computing.

# Example 3: Using ROCm with PyTorch
import torch
import torch.nn as nn

# On ROCm builds of PyTorch, the HIP backend is exposed through the
# familiar 'cuda' device name, so this code is unchanged from CUDA.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(in_features=64, out_features=10).to(device)
input_tensor = torch.randn(1, 64).to(device)
output = model(input_tensor)
print(output)

This example shows how to use ROCm with PyTorch to run a simple neural network on the GPU.
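To confirm which backend such a PyTorch build is actually using, a small check like the following can help (`torch.version.hip` is a version string only on ROCm builds; this is a sketch, not an official diagnostic):

```python
import torch

# torch.version.hip is a version string on ROCm builds of PyTorch and
# None (or absent) otherwise; torch.cuda.is_available() reports whether
# any GPU backend (CUDA or HIP) is usable.
hip_version = getattr(torch.version, 'hip', None)
backend = 'rocm' if hip_version else ('cuda' if torch.cuda.is_available() else 'cpu')
print(backend)
```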

Getting Started

To get started with ROCm, follow these steps:

  1. Install ROCm: Visit the ROCm GitHub repository and follow the installation instructions for your operating system.

  2. Set up the Development Environment: Ensure that your system is configured with the necessary dependencies and tools, such as compilers, libraries, and development tools.

  3. Explore the ROCm Ecosystem: Familiarize yourself with the various components of the ROCm platform, including the ROCm runtime, compilers, and GPU-accelerated libraries.
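Once installed, a quick way to verify the setup is to query the runtime with ROCm's own diagnostic tools (the paths below assume the default /opt/rocm install prefix; adjust for your distribution):

```shell
# Verify a ROCm installation (assumes the default /opt/rocm prefix).
if [ -d /opt/rocm ]; then
    /opt/rocm/bin/rocminfo | head -n 20   # lists HSA agents (CPUs and GPUs)
    /opt/rocm/bin/rocm-smi                # per-GPU utilization, temperature, memory
else
    echo "ROCm not found under /opt/rocm"
fi
```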

Competitor Comparisons

The first entry in the related-projects list is the ROCm repository itself (ROCm/ROCm), so no meaningful competitor comparison applies to it. The sections below compare ROCm against the genuinely distinct projects.

Samples for Intel® oneAPI Toolkits

Pros of oneAPI-samples

  • Broader scope, covering multiple hardware architectures (CPU, GPU, FPGA)
  • More extensive collection of samples and tutorials
  • Better documentation and learning resources for beginners

Cons of oneAPI-samples

  • Less focused on specific GPU optimization techniques
  • May require more setup and configuration for different hardware targets
  • Potentially steeper learning curve due to broader scope

Code Comparison

ROCm example (HIP):

#include <hip/hip_runtime.h>

__global__ void vectorAdd(float *a, float *b, float *c, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

oneAPI example (DPC++/SYCL):

#include <sycl/sycl.hpp>

// a, b, and c must point to USM allocations (e.g. from sycl::malloc_shared)
// so the device can dereference them inside the kernel.
void vectorAdd(const float* a, const float* b, float* c, size_t n) {
    sycl::queue q;
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();
}

Both repositories provide tools and samples for GPU programming, but ROCm focuses specifically on AMD GPUs, while oneAPI-samples offers a more hardware-agnostic approach. ROCm may be more suitable for developers targeting AMD hardware, while oneAPI-samples provides a broader foundation for heterogeneous computing across different architectures.

Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver

Pros of compute-runtime

  • Broader hardware support for Intel GPUs and integrated graphics
  • More extensive documentation and integration guides
  • Active community with frequent updates and bug fixes

Cons of compute-runtime

  • Limited to Intel hardware, less versatile than ROCm
  • Smaller ecosystem of tools and libraries compared to ROCm
  • Less focus on high-performance computing (HPC) workloads

Code Comparison

ROCm example (HIP):

#include <hip/hip_runtime.h>

__global__ void vectorAdd(float *a, float *b, float *c, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

compute-runtime example (OpenCL):

#include <CL/cl.h>

const char* kernelSource = "__kernel void vectorAdd(__global float *a, __global float *b, __global float *c, int n) {"
                           "    int i = get_global_id(0);"
                           "    if (i < n) c[i] = a[i] + b[i];"
                           "}";

Both repositories provide GPU acceleration capabilities, but ROCm focuses on AMD hardware and HPC workloads, while compute-runtime targets Intel GPUs and integrated graphics. ROCm uses HIP (a CUDA-like API) for programming, whereas compute-runtime primarily uses OpenCL. The code examples demonstrate the syntax differences between the two approaches.

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • Widely adopted in the machine learning community with extensive documentation and tutorials
  • Supports dynamic computational graphs, allowing for more flexible model architectures
  • Offers a Pythonic interface, making it easier for developers to learn and use

Cons of PyTorch

  • Generally slower performance on AMD GPUs compared to NVIDIA GPUs
  • Limited support for AMD hardware optimization out-of-the-box

Code Comparison

PyTorch example:

import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = x + y
print(z)

ROCm example (using HIP):

#include <hip/hip_runtime.h>

__global__ void vectorAdd(float *a, float *b, float *c, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

ROCm provides lower-level GPU programming capabilities, while PyTorch offers a high-level machine learning framework. ROCm is specifically designed for AMD GPUs, potentially offering better performance on AMD hardware. However, PyTorch has broader adoption and a more extensive ecosystem of tools and libraries for machine learning tasks.
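The grid-indexing scheme used by the HIP vectorAdd kernel above can be illustrated in plain Python: each work-item computes one element, and the i < n guard handles the final, partially filled block. This is a conceptual sketch only; the names mirror HIP's built-ins, not a real API:

```python
# Simulate HIP's blockDim.x * blockIdx.x + threadIdx.x indexing on the host.
def vector_add(a, b, block_dim=4):
    n = len(a)
    c = [0.0] * n
    num_blocks = (n + block_dim - 1) // block_dim  # round up to cover all elements
    for block_idx in range(num_blocks):
        for thread_idx in range(block_dim):
            i = block_dim * block_idx + thread_idx
            if i < n:                 # guard: the last block may be partially filled
                c[i] = a[i] + b[i]
    return c

print(vector_add([1.0, 2.0, 3.0, 4.0, 5.0], [10.0, 20.0, 30.0, 40.0, 50.0]))
# → [11.0, 22.0, 33.0, 44.0, 55.0]
```

On a GPU the two loops run in parallel across blocks and threads; the guard is what makes an arbitrary n safe when the grid size is rounded up.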

An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Larger community and ecosystem, with more resources and third-party libraries
  • Better documentation and tutorials for beginners
  • Supports a wider range of hardware platforms, including CPUs and GPUs

Cons of TensorFlow

  • Can be more complex and harder to learn for newcomers
  • Slower development cycle compared to some other frameworks
  • Larger memory footprint and potentially slower execution in some cases

Code Comparison

TensorFlow:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

ROCm:

#include <hip/hip_runtime.h>

__global__ void vectorAdd(float *a, float *b, float *c, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

Note that ROCm is a hardware acceleration platform, while TensorFlow is a machine learning framework. ROCm can be used to accelerate TensorFlow on AMD GPUs, but they serve different purposes. The code examples reflect their different use cases: TensorFlow for building neural networks, and ROCm for low-level GPU programming.


README

AMD ROCm Software

ROCm is an open-source stack, composed primarily of open-source software, designed for graphics processing unit (GPU) computation. ROCm consists of a collection of drivers, development tools, and APIs that enable GPU programming from low-level kernel to end-user applications.

With ROCm, you can customize your GPU software to meet your specific needs. You can develop, collaborate, test, and deploy your applications in a free, open source, integrated, and secure software ecosystem. ROCm is particularly well-suited to GPU-accelerated high-performance computing (HPC), artificial intelligence (AI), scientific computing, and computer aided design (CAD).

ROCm is powered by AMD’s Heterogeneous-computing Interface for Portability (HIP), an open-source C++ GPU programming environment and its corresponding runtime. HIP allows ROCm developers to create portable applications by deploying code on a range of platforms, from dedicated gaming GPUs to exascale HPC clusters.

ROCm supports programming models, such as OpenMP and OpenCL, and includes all necessary open source software compilers, debuggers, and libraries. ROCm is fully integrated into machine learning (ML) frameworks, such as PyTorch and TensorFlow.

[!IMPORTANT] A new open source build platform for ROCm is under development at https://github.com/ROCm/TheRock, featuring a unified CMake build with bundled dependencies, Windows support, and more.

The instructions below describe the prior process for building from source which will be replaced once TheRock is mature enough.

Getting and Building ROCm from Source

Please use TheRock build system to build ROCm from source.

ROCm documentation

This repository contains the manifest file for ROCm releases, changelogs, and release information.

The default.xml file lists every repository and the associated commit used to build the current ROCm release; it follows the repo tool’s Manifest Format.

Source code for our documentation is located in the /docs folder of most ROCm repositories. The develop branch of our repositories contains content for the next ROCm release.

The ROCm documentation homepage is rocm.docs.amd.com.

For information on how to contribute to the ROCm documentation, see Contributing to the ROCm documentation.

Older ROCm releases

For release information for older ROCm releases, refer to the ROCm release history.