Unified Opaque and Transparent Motion and Defocus Blur

Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects

Unified Rendering Teaser

Abstract

We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the predetermined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
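
To make the per-frame rebuild concrete, here is a minimal CPU sketch of a spatial-median BVH build over t-fragment bounds. All names and types are hypothetical illustrations; the actual GPU implementation in the paper differs in detail.

```cpp
// Minimal sketch of a spatial-median BVH build (hypothetical types; the
// paper's GPU implementation differs in detail).
#include <algorithm>
#include <vector>

struct AABB {
    float lo[3] = { 1e30f,  1e30f,  1e30f};
    float hi[3] = {-1e30f, -1e30f, -1e30f};
    void grow(const AABB& b) {
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::min(lo[a], b.lo[a]);
            hi[a] = std::max(hi[a], b.hi[a]);
        }
    }
    float center(int a) const { return 0.5f * (lo[a] + hi[a]); }
};

struct Node { AABB box; int left = -1, right = -1, first = 0, count = 0; };

// Recursively split the primitive range at the spatial median (the midpoint
// of the node's widest axis) and partition the bounds to the two sides.
int build(std::vector<Node>& nodes, std::vector<AABB>& prims, int first, int count) {
    Node n;
    n.first = first;
    n.count = count;
    for (int i = first; i < first + count; ++i) n.box.grow(prims[i]);
    const int idx = (int)nodes.size();
    nodes.push_back(n);
    if (count <= 2) return idx;  // small ranges become leaves

    int axis = 0;  // pick the widest axis of the node bounds
    for (int a = 1; a < 3; ++a)
        if (n.box.hi[a] - n.box.lo[a] > n.box.hi[axis] - n.box.lo[axis]) axis = a;
    const float mid = 0.5f * (n.box.lo[axis] + n.box.hi[axis]);

    auto pivot = std::partition(prims.begin() + first, prims.begin() + first + count,
                                [&](const AABB& b) { return b.center(axis) < mid; });
    int leftCount = (int)(pivot - (prims.begin() + first));
    if (leftCount == 0 || leftCount == count) leftCount = count / 2;  // degenerate split

    const int l = build(nodes, prims, first, leftCount);
    const int r = build(nodes, prims, first + leftCount, count - leftCount);
    nodes[idx].left = l;
    nodes[idx].right = r;
    nodes[idx].count = 0;  // mark as inner node
    return idx;
}
```

Spatial-median splits need only the midpoint of the widest axis and each primitive's centroid, which is what keeps a full rebuild cheap enough to run every frame for dynamic scenes.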

Paper

Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects
[Eurographics Digital Library] [Preprint(18MB)]
Sven Widmer, Dominik Wodniok, Daniel Thul, Stefan Guthe, and Michael Goesele
In: Proceedings of Pacific Graphics, Okinawa, Japan, 2016

Supplemental Material

Supplemental Material: Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects
[Eurographics Digital Library] [Preprint(4MB)]
Sven Widmer, Dominik Wodniok, Daniel Thul, Stefan Guthe, and Michael Goesele
In: Proceedings of Pacific Graphics, Okinawa, Japan, 2016

Video

Addendum: Stratified Time Sampling

Only after publication did we notice that we could have applied stratified sampling to the time samples as well. This requires just one additional line of code in a shader, at virtually no additional computational cost (see the sketch at the end of this addendum). Below is a comparison of Figure 8 a), which used uniform sampling with 32 time samples, against stratified sampling and the reference:

Uniform Sampling (32 time samples)

uniform_sampling

Stratified Sampling (32 time samples)

stratified sampling

Blender/Cycles Reference (32 time samples)

Reference pic

Surprisingly, the stratified sampling result exhibits less noise than the reference image. We suspect that filtered texture lookups due to mip-mapping are the cause.
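
As a minimal illustration of how small the change is, the following C++ sketch contrasts the two sampling strategies. The helper rand01() is a hypothetical stand-in for the per-sample random number used in the actual shader.

```cpp
// Hypothetical sketch; the real change lives in a shader and uses its own RNG.
#include <random>

static std::mt19937 gen{42};
float rand01() { return std::uniform_real_distribution<float>(0.f, 1.f)(gen); }

// Uniform time sampling: every sample is drawn from the whole shutter
// interval [0,1), so samples can clump together.
float uniformTimeSample(int /*i*/, int /*n*/) {
    return rand01();
}

// Stratified time sampling: sample i is jittered within its own stratum
// [i/n, (i+1)/n). Relative to the uniform case, this is the single extra line.
float stratifiedTimeSample(int i, int n) {
    return (i + rand01()) / n;
}
```

Stratification guarantees that the n samples cover the shutter interval evenly, which reduces the variance of the temporal integral without any extra samples.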

Addendum: Temporally Varying Axis Aligned Bounding Box Intersection

A subproblem in our publication was to determine whether a ray intersects a temporally varying axis-aligned bounding box at some point in time. In the concurrent work “Time-Continuous Quasi-Monte Carlo Ray Tracing”, which was also presented at Pacific Graphics 2016, Gribel and Akenine-Möller had to solve exactly the same problem. While we opted for a cheaper, slightly conservative approach suitable for interactive applications, they provide a more expensive but exact intersection test.
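
To give an idea of what a cheap, slightly conservative test can look like, here is a minimal C++ sketch. It assumes the box's min and max corners move linearly over the frame, in which case the box at any intermediate time lies inside the union of the boxes at shutter open and close; a standard slab test against that union never misses a true hit but may report false positives. This illustrates the general idea and is not necessarily the exact test from the paper.

```cpp
#include <algorithm>
#include <utility>

struct Ray  { float org[3], dir[3]; };  // dir components assumed non-zero here
struct AABB { float lo[3], hi[3]; };

// Conservative test: does the ray hit the moving box at *some* time in [0,1]?
// b0/b1 are the box at shutter open and close; linear corner motion assumed.
bool rayHitsMovingAABB(const Ray& r, const AABB& b0, const AABB& b1) {
    float tmin = 0.f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        // The union of the two endpoint boxes along this axis bounds the box
        // at every intermediate time (min/max interpolate linearly).
        const float lo = std::min(b0.lo[a], b1.lo[a]);
        const float hi = std::max(b0.hi[a], b1.hi[a]);
        const float inv = 1.f / r.dir[a];
        float t0 = (lo - r.org[a]) * inv;
        float t1 = (hi - r.org[a]) * inv;
        if (inv < 0.f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;  // slab intervals do not overlap
    }
    return true;  // possible hit at some point in time (may be a false positive)
}
```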