
mitsuba-renderer/mitsuba2

Mitsuba 2: A Retargetable Forward and Inverse Renderer


Top Related Projects

  • Falcor: Real-Time Rendering Framework
  • OpenShadingLanguage: Advanced shading language for production GI renderers
  • pbrt-v4: Source code to pbrt, the ray tracer described in the forthcoming 4th edition of the "Physically Based Rendering: From Theory to Implementation" book
  • LuxCore: LuxCore source repository
  • OpenUSD: Universal Scene Description
  • appleseed: A modern open source rendering engine for animation and visual effects

Quick Overview

Mitsuba 2 is an open-source rendering system for physically-based graphics research. It is a complete rewrite of its predecessor, Mitsuba 1, offering improved performance, flexibility, and differentiable rendering capabilities. Mitsuba 2 is designed to be a testbed for algorithm development in computer graphics, computer vision, and beyond.

Pros

  • Differentiable rendering support, enabling gradient-based optimization of scene parameters
  • Highly modular and extensible architecture, allowing easy integration of new rendering techniques
  • Multi-language support (C++, Python, and CUDA) for versatile development and research
  • Excellent documentation and active community support

Cons

  • Steeper learning curve compared to some other rendering systems due to its advanced features
  • Limited built-in support for real-time rendering, as it's primarily focused on offline rendering
  • Requires significant computational resources for complex scenes and advanced rendering techniques
  • Some features from Mitsuba 1 are still being ported or reimplemented in Mitsuba 2

Code Examples

  1. Basic scene setup and rendering:
import mitsuba as mi

mi.set_variant('scalar_rgb')

scene = mi.load_file('scene.xml')
image = mi.render(scene)
mi.util.write_bitmap('output.exr', image)
  2. Differentiable rendering example:
import mitsuba as mi
import enoki as ek

mi.set_variant('cuda_ad_rgb')

scene = mi.load_file('scene.xml')
params = mi.traverse(scene)

image = mi.render(scene, spp=16)
loss = ek.hsum(image) / len(image)
ek.backward(loss)

print(params['my_material.reflectance.value'].grad)
  3. Custom integrator implementation:
import mitsuba as mi

mi.set_variant('scalar_rgb')

class CustomIntegrator(mi.SamplingIntegrator):
    def sample(self, scene, sampler, ray, medium=None, active=True):
        # Custom integrator logic here; sample() returns a
        # (spectrum, mask, aovs) tuple
        return mi.Spectrum(1.0), mi.Mask(True), []

mi.register_integrator("custom", lambda props: CustomIntegrator(props))

Getting Started

  1. Install Mitsuba 2:
git clone --recursive https://github.com/mitsuba-renderer/mitsuba2
cd mitsuba2
mkdir build
cd build
cmake ..
make -j
  2. Set up Python environment:
conda create -n mitsuba2 python=3.8
conda activate mitsuba2
pip install -r requirements.txt
  3. Run a simple rendering:
import mitsuba as mi
mi.set_variant('scalar_rgb')
scene = mi.load_file('path/to/scene.xml')
image = mi.render(scene)
mi.util.write_bitmap('output.png', image)

Competitor Comparisons

Falcor: Real-Time Rendering Framework

Pros of Falcor

  • More focused on real-time rendering and game development
  • Extensive support for DirectX 12 and Vulkan
  • Includes advanced features like ray tracing and machine learning integration

Cons of Falcor

  • Less flexible for non-real-time rendering applications
  • Steeper learning curve for researchers not familiar with game engine architectures
  • Limited cross-platform support compared to Mitsuba 2

Code Comparison

Falcor (C++):

void SimpleRenderGraph::onLoad(RenderContext* pRenderContext)
{
    mpRasterPass = RasterPass::create();
    mpRasterPass->setScene(mpScene);
    mpRasterPass->setOutput("color", ResourceFormat::RGBA32Float);
}

Mitsuba 2 (Python):

scene = load_file(filename)
integrator = load_dict({
    'type': 'path',
    'max_depth': 8
})
image = render(scene, spp=32, integrator=integrator)

The code snippets demonstrate the different approaches: Falcor focuses on real-time rendering with a more complex setup, while Mitsuba 2 offers a simpler, research-oriented interface for offline rendering.

OpenShadingLanguage: Advanced shading language for production GI renderers

Pros of OpenShadingLanguage

  • Widely adopted in the film and VFX industry, with support from major studios
  • Extensive documentation and community resources
  • Flexible and extensible shading language designed for production use

Cons of OpenShadingLanguage

  • Steeper learning curve compared to Mitsuba2's Python-based workflow
  • Less focus on research-oriented features and physically-based rendering
  • May require more setup and integration with existing rendering pipelines

Code Comparison

Mitsuba2 (Python):

import mitsuba
scene = mitsuba.load_file('scene.xml')
image = mitsuba.render(scene)

OpenShadingLanguage:

shader example(
    float input = 0.5,
    output color result = 0
)
{
    result = color(input, input, input);
}

OpenShadingLanguage focuses on shader programming, while Mitsuba2 provides a higher-level interface for scene description and rendering. OSL's syntax is C-like, whereas Mitsuba2 leverages Python for ease of use in research environments.

pbrt-v4: Source code to pbrt, the ray tracer described in the forthcoming 4th edition of the "Physically Based Rendering: From Theory to Implementation" book

Pros of pbrt-v4

  • More comprehensive documentation and educational resources
  • Wider industry adoption and recognition
  • Faster rendering performance for certain scenes

Cons of pbrt-v4

  • Less flexible architecture for experimenting with new rendering techniques
  • More limited support for GPU acceleration
  • Steeper learning curve for beginners

Code Comparison

Mitsuba2 (Python interface):

scene = load_file('scene.xml')
integrator = scene.integrator()
sensor = scene.sensors()[0]
result = integrator.render(scene, sensor)

pbrt-v4 (C++ interface):

std::unique_ptr<Film> film = FilmHandle::Create(filmParams);
std::unique_ptr<Camera> camera = CameraHandle::Create(cameraParams, film.get());
Integrator *integrator = IntegratorHandle::Create(integratorParams, camera.get());
integrator->Render(scene);

Both repositories offer powerful rendering capabilities, but Mitsuba2 provides a more flexible Python interface, while pbrt-v4 focuses on performance and industry-standard C++ implementation. Mitsuba2 is better suited for research and experimentation, while pbrt-v4 is more aligned with production rendering workflows.


LuxCore source repository

Pros of LuxCore

  • More user-friendly interface and better integration with 3D modeling software
  • Extensive material system with support for complex shaders and textures
  • Active development with frequent updates and community support

Cons of LuxCore

  • Generally slower rendering performance compared to Mitsuba2
  • Less flexibility for research and experimentation in rendering algorithms
  • Steeper learning curve for advanced users and developers

Code Comparison

LuxCore (C++):

Scene *scene = Scene::Create();
scene->Parse(Properties("scene.scn"));
RenderConfig *config = RenderConfig::Create(Properties("config.cfg"));
RenderSession *session = RenderSession::Create(config, scene);
session->Start();

Mitsuba2 (Python):

import mitsuba
mitsuba.set_variant('scalar_rgb')
from mitsuba.core import Thread
from mitsuba.core.xml import load_file

scene = load_file('scene.xml')
sensor = scene.sensors()[0]
sensor.sample_ray(0, 0, [0.5, 0.5])

Both repositories offer powerful rendering capabilities, but LuxCore focuses more on user-friendly features and integration with 3D software, while Mitsuba2 emphasizes flexibility and research-oriented functionality. LuxCore's code example demonstrates its scene setup and rendering process, while Mitsuba2's code showcases its Python interface and ray sampling capabilities.

OpenUSD: Universal Scene Description

Pros of OpenUSD

  • Widely adopted industry standard for 3D content creation and exchange
  • Extensive documentation and community support
  • Robust ecosystem with integrations in major 3D software packages

Cons of OpenUSD

  • Steeper learning curve due to complex architecture
  • Primarily focused on scene description rather than rendering
  • Larger codebase and potentially higher resource requirements

Code Comparison

OpenUSD (C++):

#include "pxr/usd/usd/stage.h"
#include "pxr/usd/usdGeom/sphere.h"

auto stage = pxr::UsdStage::CreateInMemory();
auto spherePrim = pxr::UsdGeomSphere::Define(stage, pxr::SdfPath("/mySphere"));
spherePrim.CreateRadiusAttr().Set(2.0);

Mitsuba 2 (Python):

import mitsuba as mi

scene = mi.load_dict({
    'type': 'scene',
    'sphere': {
        'type': 'sphere',
        'radius': 2.0,
        'to_world': mi.Transform4f.translate([0, 0, 0])
    }
})

While OpenUSD focuses on scene description and interchange, Mitsuba 2 is primarily a rendering system. OpenUSD offers a more comprehensive framework for 3D content creation, while Mitsuba 2 provides a specialized physically-based rendering solution. The choice between them depends on specific project requirements and workflow integration needs.

appleseed: A modern open source rendering engine for animation and visual effects

Pros of appleseed

  • More extensive documentation and user guides
  • Larger community and more active development
  • Built-in scene editing and material editing tools

Cons of appleseed

  • Slower rendering performance for some scenes
  • Less focus on research-oriented features
  • More complex setup process for beginners

Code Comparison

appleseed:

project = asr.Project('my_project')
scene = project.get_scene()
camera = asr.Camera('pinhole_camera', 'camera', {'f_stop': '8.0'})
scene.cameras().insert(camera)

Mitsuba 2:

scene = mi.load_file('scene.xml')
sensor = scene.sensors()[0]
sensor.set_float('fov', 45.0)
mi.render(scene, spp=16)

Both renderers use Python for scene setup, but appleseed's API is more object-oriented, while Mitsuba 2 relies more on XML scene descriptions with Python modifications.


README


Mitsuba Renderer 2

Documentation

This repository is deprecated

NOTE: Mitsuba 3 has recently been released, which addresses many long-standing limitations of Mitsuba 2. This repository is therefore deprecated: it will not receive updates or bugfixes, and we recommend that you migrate to Mitsuba 3.

Introduction

Mitsuba 2 is a research-oriented rendering system written in portable C++17. It consists of a small set of core libraries and a wide variety of plugins that implement functionality ranging from materials and light sources to complete rendering algorithms. Mitsuba 2 strives to retain scene compatibility with its predecessor Mitsuba 0.6. However, in most other respects, it is a completely new system following a different set of goals.

The most significant change of Mitsuba 2 is that it is a retargetable renderer: this means that the underlying implementations and data structures are specified in a generic fashion that can be transformed to accomplish a number of different tasks. For example:

  1. In the simplest case, Mitsuba 2 is an ordinary CPU-based RGB renderer that processes one ray at a time similar to its predecessor Mitsuba 0.6.

  2. Alternatively, Mitsuba 2 can be transformed into a differentiable renderer that runs on NVIDIA RTX GPUs. A differentiable rendering algorithm is able to compute derivatives of the entire simulation with respect to input parameters such as camera pose, geometry, BSDFs, textures, and volumes. In conjunction with gradient-based optimization, this opens the door to challenging inverse problems, including computational material design and scene reconstruction.

  3. Another type of transformation turns Mitsuba 2 into a vectorized CPU renderer that leverages Single Instruction/Multiple Data (SIMD) instruction sets such as AVX512 on modern CPUs to efficiently sample many light paths in parallel.

  4. Yet another type of transformation rewrites physical aspects of the simulation: Mitsuba can be used as a monochromatic renderer, RGB-based renderer, or spectral renderer. Each variant can optionally account for the effects of polarization if desired.
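In code, a transformation is selected at runtime by name: each compiled variant combines a backend, an optional autodiff layer, and a color representation (e.g. scalar_rgb, gpu_autodiff_rgb, packet_spectral). The helper below is an illustrative sketch of a common pattern, not part of the Mitsuba API: pick the first variant from a preference list that was actually compiled in. With Mitsuba 2 built, `mitsuba.variants()` returns the compiled variants and `mitsuba.set_variant()` activates one.

```python
def choose_variant(available, preferred):
    """Return the first preferred variant that was compiled in.

    Illustrative helper (not part of Mitsuba); `available` would come
    from mitsuba.variants(), `preferred` is a caller-supplied ranking.
    """
    for name in preferred:
        if name in available:
            return name
    raise RuntimeError('none of the preferred variants is available')

# With Mitsuba 2 installed, usage would look like:
#   import mitsuba
#   mitsuba.set_variant(choose_variant(
#       mitsuba.variants(),
#       ['gpu_autodiff_rgb', 'packet_rgb', 'scalar_rgb']))

# Standalone demonstration with a hypothetical build that lacks GPU support:
print(choose_variant(['scalar_rgb', 'packet_rgb'],
                     ['gpu_autodiff_rgb', 'packet_rgb', 'scalar_rgb']))
# prints: packet_rgb
```

Falling back along a preference list keeps the same script usable on machines with and without an RTX GPU, which is the practical payoff of the retargetable design.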

In addition to the above transformations, there are several other noteworthy changes:

  1. Mitsuba 2 provides very fine-grained Python bindings to essentially every function using pybind11. This makes it possible to import the renderer into a Jupyter notebook and develop new algorithms interactively while visualizing their behavior using plots.

  2. The renderer includes a large automated test suite written in Python, and its development relies on several continuous integration servers that compile and test new commits on different operating systems using various compilation settings (e.g. debug/release builds, single/double precision). Manually checking that external contributions don't break existing functionality had become a severe bottleneck in the previous Mitsuba 0.6 codebase; the goal of this infrastructure is to avoid such manual checks and to streamline interactions with the community (pull requests, etc.) in the future.

  3. An all-new cross-platform user interface is currently being developed using the NanoGUI library. Note that this is not yet complete.
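As a sketch of what the fine-grained bindings enable interactively (e.g. in a Jupyter cell), the snippet below parses a minimal scene from a string via `mitsuba.core.xml.load_string`. The scene string is a deliberately bare-bones example, and the import is guarded since the compiled `mitsuba` module may not be on the path.

```python
def probe_mitsuba():
    """Try the Python bindings and report what happened (illustrative only)."""
    try:
        import mitsuba
    except ImportError:
        # The compiled bindings are produced by the CMake build; without
        # them this sketch simply reports their absence.
        return 'mitsuba bindings not built; see the compilation docs'
    mitsuba.set_variant('scalar_rgb')          # activate a compiled variant
    from mitsuba.core.xml import load_string   # parse a scene from a string
    scene = load_string('<scene version="2.0.0">'
                        '<integrator type="path"/></scene>')
    return type(scene).__name__

print(probe_mitsuba())
```

This import-then-inspect loop, with no compile step between edits, is what makes the bindings convenient for interactive algorithm development.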

Compiling and using Mitsuba 2

Please see the documentation for details on how to compile, use, and extend Mitsuba 2.

About

This project was created by Wenzel Jakob. Significant features and/or improvements to the code were contributed by Merlin Nimier-David, Guillaume Loubet, Benoît Ruiz, Sébastien Speierer, Delio Vicini, and Tizian Zeltner.