tensorflow/mlir

"Multi-Level Intermediate Representation" Compiler Infrastructure


Top Related Projects

  • llvm-project: The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
  • IREE: A retargetable MLIR-based machine learning compiler and runtime toolkit.
  • ONNX: Open standard for machine learning interoperability.
  • PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration.
  • TVM: Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators.

Quick Overview

MLIR (Multi-Level Intermediate Representation) is a compiler infrastructure project that originated in the TensorFlow ecosystem and has since moved into the LLVM project. It provides a flexible and extensible compiler framework for machine learning models and other domain-specific applications, aiming to bridge the gap between high-level abstractions and low-level, hardware-specific optimizations.
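
As a rough illustration of what "multiple levels of abstraction" means in practice, the same computation can be expressed as a single high-level operation and, after lowering, as plain arithmetic. The dialect and op names below are purely illustrative, not part of any shipped dialect:

// High-level form: one abstract operation from a hypothetical custom dialect.
func @scale(%x: f32) -> f32 {
  %y = "my_dialect.scale_by_two"(%x) : (f32) -> f32
  return %y : f32
}

// Lower-level form of the same computation, written with standard arithmetic ops.
func @scale_lowered(%x: f32) -> f32 {
  %c = constant 2.0 : f32
  %y = mulf %x, %c : f32
  return %y : f32
}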

Pros

  • Highly modular and extensible architecture
  • Supports multiple levels of abstraction within a single IR
  • Enables efficient code generation for various hardware targets
  • Facilitates the development of domain-specific compilers

Cons

  • Steep learning curve for newcomers to compiler technology
  • Limited documentation and learning resources compared to more established compiler frameworks
  • Still evolving, which may lead to API changes and instability
  • Requires significant effort to integrate with existing ML frameworks and tools

Code Examples

  1. Defining a simple operation in ODS (TableGen):
// Assumes OpBase.td has been included and a MyDialect Dialect record is defined elsewhere.
def MyOp : Op<MyDialect, "my_op", [Pure]> {
  let summary = "A custom operation";
  let description = [{
    This is a simple custom operation in MLIR.
  }];
  // One f32 operand in, one f32 result out.
  let arguments = (ins F32:$input);
  let results = (outs F32:$output);
}
  2. Creating an MLIR module with a function:
// The custom op is written in generic form with its fully qualified name
// (assuming the dialect namespace is "my_dialect").
module {
  func @example_function(%arg0: f32) -> f32 {
    %result = "my_dialect.my_op"(%arg0) : (f32) -> f32
    return %result : f32
  }
}
  3. Applying a simple transformation pass from C++:
#include "mlir/IR/BuiltinOps.h"        // mlir::ModuleOp
#include "mlir/Pass/PassManager.h"     // mlir::PassManager
#include "mlir/Transforms/Passes.h"    // mlir::createCanonicalizerPass
#include "llvm/Support/raw_ostream.h"  // llvm::errs

// Runs the canonicalizer over a module and reports any failure.
void applySimplePass(mlir::MLIRContext &context, mlir::ModuleOp module) {
  mlir::PassManager pm(&context);
  pm.addPass(mlir::createCanonicalizerPass());
  if (mlir::failed(pm.run(module))) {
    llvm::errs() << "Pass execution failed\n";
  }
}

Getting Started

To get started with MLIR:

  1. Clone the LLVM project with MLIR:

    git clone https://github.com/llvm/llvm-project.git
    cd llvm-project
    
  2. Build LLVM and MLIR:

    mkdir build && cd build
    cmake -G Ninja ../llvm -DLLVM_ENABLE_PROJECTS=mlir -DLLVM_BUILD_EXAMPLES=ON -DLLVM_TARGETS_TO_BUILD="X86;NVPTX;AMDGPU" -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=ON
    ninja
    
  3. Add MLIR tools to your PATH:

    export PATH="$PWD/bin:$PATH"
    
  4. Run MLIR tools:

    mlir-opt --help
    
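As a quick sanity check after building, you can run the MLIR test suite and round-trip a small file through mlir-opt (test.mlir below is just a placeholder name for a file you create yourself):

# Run the MLIR test suite from the build directory (optional but recommended).
ninja check-mlir

# Parse, canonicalize, and re-print a small example file.
mlir-opt test.mlir --canonicalize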

Competitor Comparisons

llvm-project: The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.

Pros of llvm-project

  • Broader scope and application, covering a wide range of compiler technologies
  • Larger and more established community with extensive documentation
  • More comprehensive set of tools and libraries for compiler development

Cons of llvm-project

  • Steeper learning curve due to its complexity and extensive codebase
  • May be overkill for projects primarily focused on machine learning

Code Comparison

MLIR (from tensorflow/mlir):

func @example(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Tanh"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}

LLVM IR (from llvm-project):

declare <4 x float> @llvm.tanh.v4f32(<4 x float>)

define <4 x float> @example(<4 x float> %arg0) {
  %1 = call <4 x float> @llvm.tanh.v4f32(<4 x float> %arg0)
  ret <4 x float> %1
}

Summary

MLIR is more focused on machine learning and AI-specific optimizations, while llvm-project offers a broader set of compiler tools and technologies. MLIR's syntax is higher-level and more domain-specific, whereas LLVM IR is lower-level and more general-purpose. Choose MLIR for ML-centric projects and llvm-project for more general compiler development needs.


IREE: A retargetable MLIR-based machine learning compiler and runtime toolkit.

Pros of IREE

  • Focuses on end-to-end ML compilation and runtime execution
  • Provides a more complete solution for deploying ML models across various hardware targets
  • Offers better integration with hardware-specific backends and optimizations

Cons of IREE

  • Narrower scope compared to MLIR's general-purpose compiler infrastructure
  • Less flexibility for non-ML use cases
  • Potentially steeper learning curve for developers not familiar with IREE's specific abstractions

Code Comparison

MLIR (dialect definition):

def TensorFlowDialect : Dialect {
  let name = "tf";
  let cppNamespace = "::mlir::tf";
}

IREE (module definition):

hal.executable @module {
  hal.interface @io {
    hal.interface.binding @arg0, set=0, binding=0, type="StorageBuffer"
    hal.interface.binding @ret0, set=0, binding=1, type="StorageBuffer"
  }
}

MLIR provides a more general-purpose infrastructure for defining dialects and transformations, while IREE focuses on end-to-end ML deployment with hardware-specific optimizations. MLIR offers greater flexibility for various compiler projects, whereas IREE provides a more integrated solution for ML model deployment across different hardware targets.


ONNX: Open standard for machine learning interoperability.

Pros of ONNX

  • Wider ecosystem support and compatibility across various frameworks
  • Simpler model representation, easier to understand and implement
  • More mature and established standard for model interoperability

Cons of ONNX

  • Less flexible for representing complex, custom operations
  • Limited support for control flow and dynamic shapes
  • Slower adoption of cutting-edge ML features compared to MLIR

Code Comparison

ONNX model definition:

import onnx

node = onnx.helper.make_node(
    'Relu',
    inputs=['x'],
    outputs=['y'],
)

graph = onnx.helper.make_graph(
    [node],
    'test-model',
    [onnx.helper.make_tensor_value_info('x', onnx.TensorProto.FLOAT, [1, 2, 3])],
    [onnx.helper.make_tensor_value_info('y', onnx.TensorProto.FLOAT, [1, 2, 3])]
)
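
For completeness, the graph would normally be wrapped into a model and validated before export; a minimal sketch using the standard onnx helpers (the output file name is just an example):

# Wrap the graph into a ModelProto, validate it, and serialize it to disk.
model = onnx.helper.make_model(graph)
onnx.checker.check_model(model)
onnx.save(model, "relu.onnx")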

MLIR representation:

func @main(%arg0: tensor<1x2x3xf32>) -> tensor<1x2x3xf32> {
  %0 = "tf.Relu"(%arg0) : (tensor<1x2x3xf32>) -> tensor<1x2x3xf32>
  return %0 : tensor<1x2x3xf32>
}

Both ONNX and MLIR serve as intermediate representations for machine learning models, but they have different focuses and strengths. ONNX aims for broad compatibility and ease of use, while MLIR provides more flexibility and power for representing complex operations and optimizations.


PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration.

Pros of PyTorch

  • More intuitive and Pythonic API, easier for beginners to learn and use
  • Dynamic computational graphs allow for more flexible model architectures
  • Stronger community support and ecosystem for research and experimentation

Cons of PyTorch

  • Generally slower inference speed compared to TensorFlow/MLIR
  • Less robust deployment options for production environments
  • Smaller ecosystem for mobile and edge device deployment

Code Comparison

PyTorch:

import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.matmul(x, y)  # for 1-D inputs, matmul computes the dot product (32)

TensorFlow/MLIR:

import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([4, 5, 6])
# tf.linalg.matmul requires rank-2 tensors, so use tensordot for the 1-D dot product.
z = tf.tensordot(x, y, axes=1)  # 32

Both frameworks offer similar functionality, but PyTorch's syntax is often considered more intuitive and closer to standard Python. TensorFlow/MLIR, on the other hand, provides a more structured approach that can be beneficial for large-scale projects and production deployments. The choice between the two often depends on the specific use case, team expertise, and project requirements.


TVM: Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators.

Pros of TVM

  • Broader hardware support, including GPUs, CPUs, and specialized AI accelerators
  • More flexible and extensible for various deep learning frameworks
  • Stronger focus on end-to-end optimization and deployment

Cons of TVM

  • Steeper learning curve due to its more complex architecture
  • Less integrated with TensorFlow ecosystem
  • Potentially slower compilation times for some models

Code Comparison

TVM example:

import tvm
from tvm import relay

# Define a simple network
data = relay.var("data", relay.TensorType((1, 3, 224, 224), "float32"))
weight = relay.var("weight")
conv2d = relay.nn.conv2d(data, weight)
func = relay.Function([data, weight], conv2d)

MLIR example:

func @simple_conv(%arg0: tensor<1x3x224x224xf32>, %arg1: tensor<3x3x3x32xf32>) -> tensor<1x32x222x222xf32> {
  %0 = "tf.Conv2D"(%arg0, %arg1) {strides = [1, 1, 1, 1], padding = "VALID", data_format = "NCHW"} : (tensor<1x3x224x224xf32>, tensor<3x3x3x32xf32>) -> tensor<1x32x222x222xf32>
  return %0 : tensor<1x32x222x222xf32>
}


README

301 - Moved

MLIR is now part of LLVM; more information is available at https://mlir.llvm.org

The code from this repository can now be found at https://github.com/llvm/llvm-project/tree/main/mlir/

Migration

If you have a local fork of this repository or pull requests that need to be migrated to the LLVM monorepo, the following recipe may help:

# From your local MLIR clone:
# git-filter-repo rewrites history so that every path gains an mlir/ prefix and
# issue references such as "#123" become "tensorflow/mlir#123" in commit messages.
$ git clone git@github.com:newren/git-filter-repo.git /tmp/git-filter-repo
$ /tmp/git-filter-repo/git-filter-repo --path-rename :mlir/ --force --message-callback 'return re.sub(b"(#[0-9]+)", b"tensorflow/mlir\\1", message)' --refs <branch name>

After this, the commits from the previous upstream MLIR should match the corresponding ones in the monorepo. If you don't provide the --refs option, this will rewrite all the branches in your repository.

From there you should be able to rebase any of your branches/commits on top of the LLVM monorepo:

$ git remote set-url origin git@github.com:llvm/llvm-project.git
$ git fetch origin
$ git rebase origin/main -i

Cherry-picking commits should also work: if you check out the main branch from the monorepo, you can git cherry-pick <sha1> from your (rewritten) branches.

You can also export patches with git format-patch <range> and re-apply them on the monorepo using git am <patch file>.
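
A sketch of that patch-based route, with placeholder branch and directory names:

# Export the rewritten commits as patch files.
$ git format-patch origin/main..my-branch -o /tmp/mlir-patches

# Apply them inside a checkout of the LLVM monorepo.
$ cd /path/to/llvm-project
$ git am /tmp/mlir-patches/*.patch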