cs184-p3-2-pathtracer

Assignment 3-2: PathTracer

Fanyu Meng

Overview

In this project, we added more features to the path tracer from the previous project: mirror and glass materials (reflection and refraction), microfacet surfaces, environment lighting, and depth of field.

Part 1: Mirror and Glass Materials

We render mirror and glass materials by allowing light rays to be reflected and refracted during path tracing. For a mirror, the BSDF always returns the ray reflected about the surface normal. For glass, the BSDF returns either the reflected ray or the refracted ray, choosing between the two with probability given by Schlick's approximation of the Fresnel reflection coefficient. The refraction direction itself is computed with Snell's law.
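The reflect/refract/Schlick logic above can be sketched as follows. This is a minimal sketch in the local shading frame where the surface normal is (0, 0, 1); `Vec3` is a stand-in for the project's own vector type, and `ior` is the index of refraction of the glass.

```cpp
#include <cassert>
#include <cmath>

// Stand-in for the project's 3D vector type.
struct Vec3 { double x, y, z; };

// Reflect wo about the surface normal (the local-frame normal (0, 0, 1)).
Vec3 reflect(const Vec3& wo) { return {-wo.x, -wo.y, wo.z}; }

// Refract wo using Snell's law; returns false on total internal reflection.
bool refract(const Vec3& wo, Vec3* wi, double ior) {
  bool entering = wo.z > 0;
  double eta = entering ? 1.0 / ior : ior;          // ratio n1 / n2
  double cos2t = 1 - eta * eta * (1 - wo.z * wo.z); // cos^2 of refracted angle
  if (cos2t < 0) return false;                      // total internal reflection
  double sign = entering ? -1.0 : 1.0;
  *wi = {-eta * wo.x, -eta * wo.y, sign * std::sqrt(cos2t)};
  return true;
}

// Schlick's approximation to the Fresnel reflection coefficient.
double schlick(double cos_theta, double ior) {
  double r0 = (1 - ior) / (1 + ior);
  r0 *= r0;
  return r0 + (1 - r0) * std::pow(1 - std::fabs(cos_theta), 5);
}
```

At each glass intersection, a coin flip against `schlick(...)` decides whether to follow the reflected or the refracted ray, and the chosen sample is divided by its selection probability.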

Max depth of 0.
Max depth of 1.
Max depth of 2.
Max depth of 3.
Max depth of 4.
Max depth of 5.
Max depth of 6.
Max depth of 100.


Part 2: Microfacet Materials

We implement this feature with the microfacet BRDF, which is the product of the Fresnel term, the shadowing-masking term, the normal distribution function (NDF), and a normalization factor. The Fresnel term is calculated from the refractive index and the extinction coefficient of the material, and the NDF uses the Beckmann distribution.
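The two material-dependent terms can be sketched as below: the Beckmann NDF evaluated at the half vector, and a per-channel conductor Fresnel term built from eta and k. This is a sketch of the standard formulas; the shadowing-masking term and the final `F * G * D / (4 cos(wo) cos(wi))` assembly follow the same pattern.

```cpp
#include <cassert>
#include <cmath>

// Beckmann normal distribution function, evaluated at the half vector h.
// cos_h = cos(theta_h), the angle between h and the macro surface normal;
// alpha is the surface roughness.
double beckmann_D(double cos_h, double alpha) {
  double cos2 = cos_h * cos_h;
  double tan2 = (1 - cos2) / cos2;  // tan^2(theta_h)
  return std::exp(-tan2 / (alpha * alpha)) /
         (M_PI * alpha * alpha * cos2 * cos2);
}

// Per-channel Fresnel reflectance for a conductor with refractive index eta
// and extinction coefficient k (air-to-conductor interface).
double fresnel_conductor(double cos_i, double eta, double k) {
  double cos2 = cos_i * cos_i;
  double a = eta * eta + k * k;
  double rs = (a - 2 * eta * cos_i + cos2) / (a + 2 * eta * cos_i + cos2);
  double rp = (a * cos2 - 2 * eta * cos_i + 1) / (a * cos2 + 2 * eta * cos_i + 1);
  return 0.5 * (rs + rp);
}
```

With k = 0 the conductor formula reduces to the dielectric case, e.g. `fresnel_conductor(1.0, 2.0, 0.0)` gives the familiar normal-incidence value ((n-1)/(n+1))^2 = 1/9.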

We also apply importance sampling to increase the convergence rate: we importance-sample the half vector from the NDF and divide the result by the corresponding PDF.
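Sampling the half vector from the Beckmann NDF can be sketched by inverting its CDF, as below (`r1`, `r2` are uniform random samples; `Vec3` again stands in for the project's vector type). The incident direction is then `wo` reflected about `h`, and the Monte Carlo estimate is scaled by the corresponding PDF `p(wi) = p(h) / (4 (wo . h))`.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Sample a half vector h from the Beckmann NDF by inverting its CDF.
// r1, r2 are uniform random numbers in [0, 1); alpha is the roughness.
Vec3 sample_beckmann_h(double r1, double r2, double alpha) {
  double tan2 = -alpha * alpha * std::log(1 - r1);  // tan^2(theta_h)
  double theta = std::atan(std::sqrt(tan2));
  double phi = 2 * M_PI * r2;
  double s = std::sin(theta);
  return {s * std::cos(phi), s * std::sin(phi), std::cos(theta)};
}
```

Small `r1` values produce half vectors near the macro normal, which is exactly where the Beckmann lobe concentrates its density, so far fewer samples land in near-zero-contribution directions than with cosine hemisphere sampling.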

alpha=0.5
alpha=0.25
alpha=0.05
alpha=0.005

As we can see, the material appears more diffuse when α is larger, and glossier when α is small.

Cosine hemisphere sampling
Importance sampling

The image rendered with importance sampling is much less noisy, since importance sampling converges much faster than cosine hemisphere sampling.

Rendering metal cobalt using a different set of eta and k parameters.

Part 3: Environment Light

The idea of environment light is to treat an environmental texture as a light source infinitely far away. This gives a more realistic scene, as if the objects were reflecting the surrounding world.

We implement environment light by sampling the environment texture in each direction: the 3D direction is converted into a 2D texture coordinate, and the corresponding radiance value is looked up. We also incorporate importance sampling by sampling brighter texels with higher probability; to do this, we precompute the marginal and conditional distributions of the environment map's luminance.
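The two pieces above can be sketched as follows: converting a direction to equirectangular texture coordinates, and inverting a precomputed discrete CDF (first the marginal CDF over rows, then the conditional CDF within the chosen row). The y-up axis convention here is an assumption; the renderer's actual mapping may differ.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Convert a unit direction to (u, v) in [0, 1]^2 for an equirectangular
// environment map, assuming y is "up" (an assumed convention).
void dir_to_uv(double x, double y, double z, double* u, double* v) {
  double phi = std::atan2(-z, x) + M_PI;  // azimuth in [0, 2*pi)
  double theta = std::acos(y);            // polar angle in [0, pi]
  *u = phi / (2 * M_PI);
  *v = theta / M_PI;
}

// Invert a discrete CDF: return the index i with cdf[i-1] <= r < cdf[i].
// Used to pick a row from the marginal CDF, then a column from that
// row's conditional CDF.
size_t sample_cdf(const std::vector<double>& cdf, double r) {
  return std::upper_bound(cdf.begin(), cdf.end(), r) - cdf.begin();
}
```

Because the CDFs are weighted by luminance (times sin(theta) to account for the sphere-to-rectangle stretch), bright texels occupy wider intervals of the CDF and are therefore chosen more often.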

Used environment map
Corresponding probability CDF
Diffuse surface with uniform sampling.
Diffuse surface with importance sampling.
Microfacet surface with uniform sampling.
Microfacet surface with importance sampling.

As we can see, using importance sampling on the environment light makes rendering a diffuse object less noisy. On a more reflective object, however, highlight areas become less noisy while darker areas become noisier, since we are more likely to sample bright locations on the environment map.

Part 4: Depth of Field

For a pinhole camera, all light received by the camera passes through a single point, so everything is in focus. This is not the case for a thin-lens camera: light is refracted by the lens and reaches the sensor from different directions. This gives rise to depth of field, where only objects within a certain range of distances are in focus, and objects elsewhere appear blurred.

We implement this feature by sampling a point on a disk of radius lensRadius centered at the camera position when generating the camera ray. Instead of returning the ray from the camera location in the given direction, we return the ray from the sampled point on the lens to the corresponding point on the plane of focus. Objects near the plane of focus appear sharp, while objects at other distances are out of focus and blurred.
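Thin-lens ray generation can be sketched as below, in camera space looking down -z. The parameter names `lens_radius` and `focal_distance` mirror the text; everything else (the `Ray` struct, the `(px, py)` image-plane parameterization) is illustrative.

```cpp
#include <cassert>
#include <cmath>

// Illustrative ray type: origin (ox, oy, oz), unit direction (dx, dy, dz).
struct Ray { double ox, oy, oz, dx, dy, dz; };

// Generate a camera ray through a thin lens (camera space, looking down -z).
// (px, py) parameterizes the pinhole ray direction (px, py, -1);
// rnd_r, rnd_theta in [0, 1) choose the sample point on the lens disk.
Ray thin_lens_ray(double px, double py,
                  double lens_radius, double focal_distance,
                  double rnd_r, double rnd_theta) {
  // Uniformly sample a point on the lens disk.
  double r = lens_radius * std::sqrt(rnd_r);
  double t = 2 * M_PI * rnd_theta;
  double lx = r * std::cos(t), ly = r * std::sin(t);
  // Where the pinhole ray hits the plane of focus z = -focal_distance.
  double fx = px * focal_distance, fy = py * focal_distance;
  double fz = -focal_distance;
  // New ray: from the lens sample toward the focus point, normalized.
  double dx = fx - lx, dy = fy - ly, dz = fz;
  double len = std::sqrt(dx * dx + dy * dy + dz * dz);
  return {lx, ly, 0, dx / len, dy / len, dz / len};
}
```

All rays through the lens meet at the same point on the plane of focus, so geometry on that plane stays sharp; geometry off the plane is hit by rays from different lens points at different positions, producing the blur.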

Focused in front of the dragon.
Focused around the head of the dragon.
Focused around the tail of the dragon.
Focused behind the dragon.
lensRadius = 0.5
lensRadius = 0.1
lensRadius = 0.05
lensRadius = 0.01

As we can see, with focus on the same plane, the smaller the lens radius, the larger the portion of the scene that is in focus. With lensRadius = 0, the camera behaves exactly like a pinhole camera, as if the depth of field feature were not implemented.