rrubberr

Playing With Glass

Although every image posted on this site until now has been rendered with LuxRender or LuxCoreRender, the state of the art of unbiased rendering has progressed significantly since the introduction of the techniques employed by those two pieces of software, namely path tracing and Metropolis light transport. While many pieces of software aim to integrate these more advanced techniques, most of them lack the ease of use which made LuxRender so palatable from an artistic standpoint.

 

A similar piece of software which fulfills this usability requirement is Mitsuba (not to be confused with Mitsuba2), which bills itself as a "research-oriented" rendering system that "places a strong emphasis on experimental rendering techniques." These experimental techniques were used to generate most of the images shown on this page. The appeal of Mitsuba is bolstered by its very capable and fully featured Blender addon, mitsuba-blender, which allows for scene, material, and lighting creation within the familiar Blender interface.

 

Rather than leading with the final rendered image, this page opens with an example (at right) of what happens when your "artistic vision" falls outside the boundaries of what your rendering software was meant to do in a timely manner.

Mitsuba Render 0.6 and Blender 2.75

But what does this mean from a nuanced perspective?

 

In this example, the image above (rendered with LuxCoreRender) exhibits poor convergence, which points to the Monte Carlo nature of light transport simulation. In rendering, Monte Carlo refers to the repeated random sampling of light paths in order to approximate the correct solution to the rendering equation, which, put simply, can be solved to compute the radiance of a given point. The equation is shown below.
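In its usual hemispherical form, the rendering equation reads:

$$
L_o(x, \omega_o) \;=\; L_e(x, \omega_o) \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
$$

Here $L_o$ is the radiance leaving point $x$ in direction $\omega_o$, $L_e$ is the radiance emitted there, $f_r$ is the BSDF, $L_i$ is the radiance arriving from direction $\omega_i$, and $n$ is the surface normal; the integral runs over the hemisphere $\Omega$ of incoming directions.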

Using Monte Carlo sampling to solve the rendering equation means that, as more random light paths are traced, and thus more samples are generated at each point, the average of these samples converges towards the correct solution to the equation (i.e. the expected luminance and color). Due to the complexity of the rendering equation it is currently impractical to solve in a more holistic way, even though the solution is deterministic in principle.
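To make this concrete, below is a minimal, self-contained Python sketch of the idea (it is not Mitsuba code, and the constant-sky setup is only an illustrative assumption): it estimates the illumination arriving at a point under a uniform sky by averaging random samples, and the estimate converges towards the analytic answer of π as the sample count grows.

```python
import math
import random

def estimate_irradiance(sky_radiance=1.0, num_samples=1000, seed=0):
    """Monte Carlo estimate of the irradiance integral over the hemisphere:
    E = integral of L * cos(theta) d(omega).  For a constant sky radiance L
    the exact answer is pi * L, so with L = 1 the result should approach pi."""
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)      # density of uniform hemisphere sampling
    total = 0.0
    for _ in range(num_samples):
        # For a uniformly sampled hemisphere direction, cos(theta) is uniform
        # on [0, 1]; the azimuth does not matter for this integrand.
        cos_theta = rng.random()
        total += sky_radiance * cos_theta / pdf
    return total / num_samples

if __name__ == "__main__":
    for n in (16, 256, 4096, 65536):
        print(f"{n:>6} samples: {estimate_irradiance(num_samples=n):.4f}")
```

Running it shows exactly the behaviour described above: the low-sample estimates scatter widely around π, while the high-sample estimates settle close to it.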

This concept is illustrated in the image above, which displays the number of samples (S) taken at each pixel (P). You can see that as samples per pixel (S/P) increases from left to right, the perceived noisiness of the image decreases.
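The quantitative relationship behind this is a standard Monte Carlo result: the variance of a pixel's estimate falls in proportion to 1/(S/P), so the visible noise (its standard deviation) falls only as the square root of the sample count,

$$
\sigma_{\text{pixel}} \;\propto\; \frac{1}{\sqrt{S/P}},
$$

which is why halving the noise in an image requires roughly four times as many samples per pixel.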

 

This inverse correlation between sample count and noisiness presents one of the main challenges inherent to the Monte Carlo nature of rendering, as it can equally be expressed as an inverse correlation between render time (the time required for a computer to generate the image) and convergence. In almost all cases, greater convergence is desirable. Although render time is always a hurdle, extensive research has yielded a number of methods which solve the rendering equation more quickly than brute-force path tracing.

The image above presents a comparison between three different advanced rendering techniques, which are referred to in Mitsuba, LuxRender, and PBRT as integrators. Each of these integrators, of which Mitsuba includes more than ten, represents a discrete approach to solving the light transport equation. Certain integrators are by nature better suited to different scenarios, and this is visualized in the comparison boxes on the right side of the above image. Succinct explanations of each integrator are provided below; the bold, underlined links at the beginning of each explanation point to the papers describing each method, which provide a more complete understanding.
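As a concrete illustration of how an integrator is swapped in practice, the sketch below builds the <integrator> element of a Mitsuba scene description using Python's standard XML library. The helper itself is hypothetical (it is not part of Mitsuba or the exporter), and the plugin names shown follow Mitsuba 0.6's naming ("erpt", "mlt", "photonmapper"); consult the manual for each plugin's actual properties.

```python
import xml.etree.ElementTree as ET

def integrator_element(plugin, **int_props):
    """Build the <integrator> element of a Mitsuba scene description.

    `plugin` is the integrator plugin name (e.g. "erpt", "mlt", "photonmapper");
    keyword arguments become integer properties on that plugin."""
    elem = ET.Element("integrator", {"type": plugin})
    for name, value in int_props.items():
        ET.SubElement(elem, "integer", {"name": name, "value": str(value)})
    return elem

# Swapping the integrator used for one of the comparison renders:
print(ET.tostring(integrator_element("erpt"), encoding="unicode"))
print(ET.tostring(integrator_element("photonmapper"), encoding="unicode"))
```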

 

ERPT (energy redistribution path tracing) is a method proposed by Cline et al. which "uses path mutations to redistribute the energy of the samples over the image plane to reduce variance." Put simply, this approach takes advantage of the high correlation between nearby points in the scene (correlated integrals), redistributing energy between them via "energy flow" to reduce variance and speed up convergence.

 

MLT (Metropolis light transport, in this case path-space MLT with manifold perturbation) is a method proposed by Veach & Guibas which slightly modifies, or mutates, randomly generated paths in order to calculate the contribution of nearby (correlated) light paths to the image. This, like ERPT, has the advantage of allowing the integrator to search for nearby contributions once difficult light paths are found. Manifold perturbation, as proposed by Jakob & Marschner, extends this with an additional way of modifying light paths which is especially useful in specular and refractive scenarios.
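At the core of both MLT's and ERPT's mutation step is the standard Metropolis–Hastings acceptance rule: a tentative mutation from path $\bar{x}$ to path $\bar{y}$ is accepted with probability

$$
a(\bar{x} \to \bar{y}) \;=\; \min\!\left(1,\; \frac{f(\bar{y})\, T(\bar{y} \to \bar{x})}{f(\bar{x})\, T(\bar{x} \to \bar{y})}\right),
$$

where $f$ is a path's contribution to the image and $T$ is the probability density of proposing one path from the other. This rule keeps the chain of mutated paths distributed in proportion to how much they actually contribute, so the time spent exploring around a difficult path is spent where it matters.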

 

PM (photon mapping, in this case with irradiance gradients) is a method proposed by Jensen which projects photons from the light sources in the scene to create photon maps. These are then visualized by tracing rays from the camera. The addition of irradiance gradients, as proposed by Ward and Heckbert, allows for better convergence by interpolating between nearby cached irradiance values, thereby smoothing the appearance of the image.
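For reference, the radiance estimate at the heart of photon mapping gathers the $N$ nearest photons within a disc of radius $r$ around the shading point $x$:

$$
L_r(x, \omega) \;\approx\; \frac{1}{\pi r^2} \sum_{p=1}^{N} f_r(x, \omega_p, \omega)\, \Delta\Phi_p,
$$

where $\Delta\Phi_p$ is the flux carried by photon $p$ and $f_r$ is the BSDF; the $1/(\pi r^2)$ factor is simply the area of the gathering disc.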

 

ERPT and PM in particular present better convergence, in this scenario, than MLT.

The image above, rendered in only 5 minutes and 36 seconds, represents a significant improvement in convergence over the original image at the top of this page. This was achieved by using the last method presented in the comparison (photon mapping with irradiance gradients). The image also makes use of adaptive sampling, which allows Mitsuba to intelligently increase sample count, within bounded limits, in difficult parts of the image. This means that more samples were taken to generate the glasses in the image than the diffuse background, for example, increasing efficiency by eliminating unneeded samples.
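The sketch below is a toy version of that adaptive idea rather than Mitsuba's actual adaptive plugin: it keeps drawing samples for a single pixel until the estimated relative error of the running mean falls below a threshold (or a hard cap is hit), so a noisy pixel on the glass spends many samples while a smooth background pixel stops early. The two sample_radiance stand-ins at the bottom are purely illustrative.

```python
import math
import random

def adaptive_pixel(sample_radiance, max_spp=1024, min_spp=16, rel_error=0.05, seed=0):
    """Sample one pixel until its estimated relative error drops below
    `rel_error`, up to `max_spp` samples.  `sample_radiance(rng)` returns one
    random radiance sample for the pixel.  Returns (estimate, samples used)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    n = 0
    while n < max_spp:
        value = sample_radiance(rng)
        total += value
        total_sq += value * value
        n += 1
        if n >= min_spp:
            mean = total / n
            variance = max(total_sq / n - mean * mean, 0.0)
            std_error = math.sqrt(variance / n)
            if mean > 0.0 and std_error / mean < rel_error:
                break          # converged early: stop spending samples here
    return total / n, n

# A high-variance "glass highlight" pixel versus a smooth "diffuse wall" pixel:
glass = lambda rng: 10.0 if rng.random() < 0.05 else 0.0
wall = lambda rng: 0.5 + 0.01 * rng.random()
print(adaptive_pixel(glass))   # burns through the full sample budget
print(adaptive_pixel(wall))    # stops after only a handful of samples
```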

Although each of these integrators aims to solve the same deterministic equation, slight differences arise between them due to the Monte Carlo nature of their approach. The image at left exhibits the difference between two highly converged images, one rendered with ERPT and the other with PM, showing the slight variance in color and brightness across the image.

 

The sources for Mitsuba can be found here, while my modification of the exporter plugin which allows it to interface with the current version can be found here. The exporter must be used with Blender 2.75, due to Python version incompatibilities. I recommend users have a working knowledge of Linux packages (SCons, Qt, etc.) before attempting to compile, as doing so may be a significant challenge.

The image below is an adaptation of Project: Studio to the Mitsuba rendering software. This image was rendered in 1 hour and 36 minutes, a significant reduction from the 9 hours and 23 minutes required to render the original with LuxRender.