# CS488 - Introduction to Computer Graphics - Lecture 26

## Review

1. Distribution Ray Tracing
2. Bi-directional ray tracing
3. Office hours

## Lighting

#### Participating Media

What is fog?

• Lots of little water droplets
• Light gets scattered

What is beer?

• Lots of little colour centres
• Light gets absorbed

What they have in common is

• The farther light goes, the more likely it is to be scattered or absorbed.
• This property is described by Beer's Law (named after August Beer, no relation):
• I(x) = I_0 exp( -k(λ) x )
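As a minimal sketch, Beer's Law can be evaluated per wavelength (or per colour channel); the extinction coefficients below are made-up illustrative values, not measured data:

```python
import math

def transmit(intensity_in, k, distance):
    """Intensity surviving travel through an absorbing or scattering
    medium, per Beer's Law: I(x) = I0 * exp(-k * x)."""
    return intensity_in * math.exp(-k * distance)

# Illustrative per-channel extinction coefficients (not measured data).
# Larger k => stronger attenuation, so blue dies off fastest here.
k_rgb = {"R": 0.02, "G": 0.05, "B": 0.09}
for channel, k in k_rgb.items():
    print(channel, transmit(1.0, k, 10.0))
```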

What happens to the light that doesn't make it through?

#### Shadows

What is a shadow?

Shadows come 'for free' in the ray tracer.

• Can we make them fast enough to use with OpenGL?

Yes. The methods, in increasing order of cost:

1. Projective shadows
• Project silhouette of shadowing object onto shadowed object
• Draw a dark area where the shadow lies, using alpha blending unless you are trying to get the 'deep space' look
• Easy for simple objects onto simple objects, ...
• What about meshed objects?

Notice that we know a lot about how to project.
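For the common special case of a flat shadowed object, the projection is cheap. A sketch for a ground plane y = 0 and a point light (all names here are illustrative):

```python
def project_to_ground(vertex, light):
    """Project 'vertex' onto the plane y = 0 along the ray from a point
    light, giving one vertex of the projected shadow polygon."""
    lx, ly, lz = light
    vx, vy, vz = vertex
    t = ly / (ly - vy)  # parameter where the light->vertex ray hits y = 0
    return (lx + t * (vx - lx), 0.0, lz + t * (vz - lz))
```

Applying this to every silhouette vertex and drawing the resulting polygon with alpha blending gives the darkened shadow area.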

2. Shadow maps
• Project as if each light is an eye
• Scene behind every point that appears in the virtual frame buffer is in shadow
• Store the distance to each point in the z-buffer
• Project towards the eye

For each point that is visible

• Transform the point to the light's coordinate frame
• Is the distance stored in the light's virtual z-buffer greater than the distance to the light?
• If 'yes', then the point is shadowed from that light
• If 'no', then it is illuminated by that light
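The per-point test above can be sketched as follows; `to_light_space` and the z-buffer layout are assumed interfaces for illustration, and the epsilon guards against self-shadowing from depth quantization:

```python
def in_shadow(point, to_light_space, light_zbuf, eps=1e-3):
    """Shadow-map test: transform a visible point into the light's
    coordinate frame, then compare its distance to the light against
    the depth the light 'saw' at that texel when the scene was
    rendered from the light's point of view."""
    u, v, dist_to_light = to_light_space(point)
    stored = light_zbuf[v][u]
    return dist_to_light > stored + eps  # farther than stored depth => occluded
```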

How does this interact with scan conversion?

What if the light is inside the view frustum?

3. Shadow volumes
• Project from light as for shadow maps
• Define a set of polygons that are the boundaries of the volume that is in shadow.
• Front-facing wrt eye +1
• Back-facing -1
• Count along the ray from eye to point, starting with zero
• If > 0 in shadow
• If 0 in light
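The counting rule reduces to a running sum over boundary crossings; a sketch, assuming the eye itself starts outside every shadow volume:

```python
def is_lit(crossings):
    """crossings: the shadow-volume boundary polygons crossed by the ray
    from the eye to the shaded point, in order along the ray:
    +1 if front-facing wrt the eye, -1 if back-facing."""
    count = 0  # start at zero at the eye
    for facing in crossings:
        count += facing
    return count == 0  # 0 => in light; > 0 => inside a shadow volume
```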

Comment on global illumination. If you are doing a walk-through, you can calculate the illumination on each polygon once, then re-project the scene from different viewpoints as the user moves around.

#### Light Fields

This is a way to avoid even the need to re-project.

Let's turn our attention away from the surfaces of objects and to the volume between them.

At every point in this volume there is a light density

• for every possible direction
• for every visible wavelength

This quantity L(P, θ, λ) is the light field. If we knew it, we could

• evaluate it at the eye position
• at the angle heading for each pixel
• to get RGB for that pixel

The evaluation is, in fact, just a projective transformation of the light field.
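Rendering from a known light field is then just pointwise evaluation; everything below (the callable light field, the per-pixel direction grid) is an illustrative interface, not a real data structure:

```python
def render(light_field, eye, pixel_dirs, wavelengths=("R", "G", "B")):
    """Evaluate L(P, theta, lambda) at the eye position, once per pixel
    direction and wavelength, producing the image directly -- no surface
    geometry needed."""
    return [[tuple(light_field(eye, d, w) for w in wavelengths)
             for d in row]
            for row in pixel_dirs]
```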

How do we get the light field?

1. by measurement
2. by calculation

How is the light field used in 2007?

But tomorrow!!

## Surface Properties

#### Texture Mapping

1. Basic
1. Start with a 2D image: pixellated or procedural
2. Map 2D image onto primitive using a 2D affine transformation
• Simple if the surface of the primitive is flat
• otherwise, ...
• Texture pixels normally do not match display pixels, so some image processing may be needed.
3. Backwards-map the ray's intersection point into the image to get the surface properties
2. Normal Mapping (Bump mapping)
1. Start with a difference surface, defined with respect to the surface
2. Calculate the normals to the difference surface and map them onto the surface of the primitive
3. Use the mapped normals for lighting
4. No occlusion, shadows are wrong, silhouettes are wrong, nobody notices!
3. Solid Textures
1. Solution to mapping texture onto curved surfaces
2. Usually procedural
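The 'backwards map' step of basic texture mapping can be sketched as a (u, v) lookup; nearest-neighbour sampling is shown here, whereas a real renderer would filter (bilinear, mipmaps) because texture pixels rarely line up with display pixels:

```python
def sample_texture(texture, u, v):
    """Fetch the texel at surface coordinates (u, v) in [0, 1]^2 from a
    texture stored as a list of rows. Nearest-neighbour, clamped."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u = 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]
```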