# Lighting

## Participating Media

What is fog?

• Lots of little water droplets
• Light gets scattered

What is beer?

• Lots of little colour centres
• Light gets absorbed

What they have in common is

• The farther light goes the more likely it is to get scattered or absorbed.
• The property is described by Beer's Law (named after August Beer, no relation)
• I(x) = I_0 exp( -k(\lambda) x ), where I_0 is the initial intensity and k(\lambda) is the wavelength-dependent extinction coefficient
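Beer's Law is easy to check numerically. A minimal sketch (the function name is my own; k plays the role of k(\lambda) at one fixed wavelength):

```python
import math

def transmittance(k, x):
    """Fraction of light surviving path length x through a medium with
    extinction coefficient k (Beer's Law: I(x) = I0 * exp(-k * x))."""
    return math.exp(-k * x)

# Doubling the path length squares the surviving fraction:
half = transmittance(0.5, 1.0)
full = transmittance(0.5, 2.0)   # equals half ** 2
```

The squaring property is why thick fog looks so much denser than twice-as-thick-looking thin fog.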

What happens to the light that doesn't make it through?

Shadows come 'for free' in the ray tracer.

• Can we make them fast enough to use with OpenGL?

Yes. The methods, in increasing order of cost:

• Draw a dark area where the shadow lies, using alpha blending unless you are trying to get the 'deep space' look
• Easy for simple objects onto simple geometry, ...
• Find the silhouette of a mesh
• from a particular direction
• lots of algorithms: the best are linear in the length of the silhouette, but with a big constant
• still more to do

Notice that we know a lot about how to project.

• Project as if each light is an eye
• Everything behind a point that appears in the light's virtual frame buffer is in shadow
• Store the distance to each point in the z-buffer
• Project towards the eye

For each point that is visible

• Transform the point to the light's coordinate frame
• Is the distance to the light greater than the distance stored in the light's virtual z-buffer?
• If 'yes', then the point is shadowed from that light
• If 'no', then it is illuminated by that light
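The per-point test above amounts to a single comparison. A sketch, assuming both distances are already measured in the light's coordinate frame (the bias term is my addition; some offset is standard practice to avoid self-shadowing from depth quantisation):

```python
def in_shadow(point_light_dist, depth_from_map, bias=1e-3):
    """Shadow-map test for one visible point.
    point_light_dist: distance from the point to the light, after
    transforming the point into the light's coordinate frame.
    depth_from_map: the distance stored in the light's virtual z-buffer
    at the texel this point projects to.
    Something closer to the light occludes the point iff the point is
    farther away than the stored depth."""
    return point_light_dist > depth_from_map + bias

# A point at distance 5 behind an occluder recorded at depth 2 is shadowed:
in_shadow(5.0, 2.0)
```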

How does this interact with scan conversion?

What if the light is inside the view frustum?

• Remember that the eye ray is not the same as the axis of the view frustum.
• Project from light as for shadow maps
• Define a set of polygons that are the boundaries of the volume that is in shadow.
• Crossing a front-facing (wrt the eye) shadow polygon counts +1
• Crossing a back-facing one counts -1
• Count crossings along the ray from the eye to the point
• Starting with:
• zero if the eye is not in shadow
• the number of shadow volumes containing the eye otherwise
• If the count is > 0, the point is in shadow
• If it is 0, the point is in light
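The counting rule above can be sketched directly (names are my own; in practice the GPU's stencil buffer does this arithmetic):

```python
def point_is_lit(initial_count, crossings):
    """Shadow-volume count along the eye ray.
    initial_count: 0 if the eye is outside every shadow volume,
    otherwise the number of volumes containing the eye.
    crossings: +1 for each front-facing shadow polygon crossed
    (entering a volume), -1 for each back-facing one (leaving).
    A final count of 0 means the point is in light."""
    return initial_count + sum(crossings) == 0

# Eye in light; the ray enters one volume and leaves it before the point:
point_is_lit(0, [+1, -1])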

Currently (2009) the preferred technique is shadow maps.

# Global Illumination

Comment on global illumination. If you are doing a walk-through, you can calculate the illumination on each polygon once, then re-render (re-project) the scene from different viewpoints as the user moves around.

Calculating illumination

Each small bit of surface in the scene

1. receives some amount of light (possibly none)
• from other bits of surface: \sum_bits (light emitted in the direction of this bit) * (fraction arriving unoccluded)
• B(y, <y-x>, \lambda) = \sum_x ( I(x, <y-x>, \lambda) + L(x, <y-x>, \lambda) ) * F(x, y) * dx.dy
2. emits some amount of light (possibly none)
• I(x, <z>, \lambda )
3. re-emits some amount of light (possibly none)
• sum_directions (received light from ...) * (BRDF to ...)
• L(x, <y-x>, \lambda) = \sum_<z> B(x, <z>, \lambda) * R(<z>, <y-x>, \lambda)

Solve the resulting equations.

1. F(x, y)dx.dy is known from the geometry
2. I(x, <z>, \lambda) and R(<z-in>, <z-out>, \lambda) are surface properties in the model
3. B(x, <z>, \lambda) and L(x, <z>, \lambda) are unknown.
4. Substitute the expression for B into the equation for L.
5. The result is a set of linear equations that can be solved for L.
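When the directional dependence is dropped (perfectly diffuse surfaces), the equations above collapse to the classic radiosity system B_i = E_i + rho_i \sum_j F_ij B_j, one linear system per wavelength. A pure-Python sketch with made-up three-patch data (a 'gathering' fixed-point iteration rather than a direct solve):

```python
def solve_radiosity(E, rho, F, iters=200):
    """Fixed-point iteration for the diffuse case:
    B_i = E_i + rho_i * sum_j F[i][j] * B_j.
    E: emission per patch, rho: reflectance per patch,
    F: form-factor matrix (known from the geometry)."""
    n = len(E)
    B = list(E)
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# One emitter and two grey reflectors, symmetric form factors:
B = solve_radiosity(E=[1.0, 0.0, 0.0],
                    rho=[0.0, 0.5, 0.5],
                    F=[[0.0, 0.5, 0.5],
                       [0.5, 0.0, 0.5],
                       [0.5, 0.5, 0.0]])
```

Because rho * F has spectral radius well below 1 here, the iteration converges geometrically; by symmetry the two reflectors end up equally bright.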

Once L is known,

1. B is easily calculated.
2. The light field is easily calculated at point P
• LF(P, <z>, \lambda) = \sum_x L(x, <P-x>, \lambda) * \delta(<z>, <P-x>)

## The Light Field

Let's turn our attention away from the surfaces of objects and onto the volume between objects.

At every point in this volume there is a light density

• for every possible direction
• for every visible wavelength

This quantity LF(P, <z>, \lambda ) is the light field. If we knew it we could

• evaluate it at the eye position
• at the angle heading for each pixel
• to get RGB for that pixel
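If LF were available as a callable, rendering would be exactly one lookup per pixel per wavelength. A sketch with hypothetical interfaces (LF and pixel_dirs are assumptions, not part of any real API):

```python
def render_from_light_field(LF, eye, pixel_dirs, wavelengths=("R", "G", "B")):
    """Evaluate a known light field at the eye position.
    LF(P, direction, wavelength) returns the light density at point P
    heading in `direction`; pixel_dirs maps pixel -> unit direction
    from the eye through that pixel."""
    return {px: tuple(LF(eye, d, w) for w in wavelengths)
            for px, d in pixel_dirs.items()}

# Toy light field that is bright along +x and dark along +y:
toy_LF = lambda P, d, w: d[0]
image = render_from_light_field(toy_LF, (0.0, 0.0, 0.0),
                                {(0, 0): (1.0, 0.0, 0.0),
                                 (1, 0): (0.0, 1.0, 0.0)})
```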

The evaluation is, in fact, just a projective transformation of the light field.

How do we get the light field?

1. by measurement
2. by calculation
• Radiosity is the obvious method

How is the light field used in 2009?

• routine applications for backdrops
• Think about a window in a dark room
• Light passes only one direction
• What's wrong with treating a window like a 2D scene on the wall?
• Easy to do by texture mapping
• How would we get the necessary data?
• calculation
• measurement
• remote controlled digital camera
• still the problems of storage and reconstruction
• yesterday's excitement

But tomorrow!!

#### Plenoptic Function

Think about what the viewer can do.

1. The seriously handicapped viewer can
• not move in position
• not move the direction of gaze

Ray tracing is perfect.

2. The mildly handicapped viewer can
• not move in position
• gaze in any direction

Ray trace onto a sphere surrounding the viewer and reproject from the sphere to a view plane whenever the direction of gaze changes.
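The reprojection step needs a map from gaze direction to a texel on the sphere. A sketch, assuming a latitude-longitude image layout (the layout and function name are my own choices, not the only option):

```python
import math

def sphere_lookup(direction, width, height):
    """Map a unit gaze direction (x, y, z) to a texel (u, v) in a
    latitude-longitude image rendered once onto a sphere around the
    fixed viewer: u from azimuth, v from elevation."""
    x, y, z = direction
    theta = math.atan2(z, x)                  # azimuth in [-pi, pi]
    phi = math.asin(max(-1.0, min(1.0, y)))   # elevation in [-pi/2, pi/2]
    u = int((theta / (2 * math.pi) + 0.5) * (width - 1))
    v = int((phi / math.pi + 0.5) * (height - 1))
    return u, v
```

When the direction of gaze changes, only the lookups change; the sphere image itself is not re-rendered.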

3. The unhandicapped viewer can
• move around
• gaze in any direction

Ray trace onto a sphere at each accessible point.

The third is the light field, also called the plenoptic function, and it has to be recalculated every time something in the scene moves.

#### 'Backdrop' Applications

Imagine making a game or a movie

• There is an area accessible to the players (actors, camera), and
• there is an area inaccessible to the players (actors, camera).

An easy backdrop

• Surround the accessible volume with a sphere (actually a hemisphere)
• Ray trace the scene outside the accessible volume onto the sphere
• Put the re-projected portion of the sphere into the frame buffer, depth buffer set to infinity
• Where is the eye point?
• The centre of the sphere works for the mildly handicapped viewer.
• What is missing for the unhandicapped viewer?

A more difficult backdrop

• Photography
• Perhaps a window