# CS488 - Introduction to Computer Graphics - Lecture 19

#### Nearby Light Sources

What is used in practice

• 1/(a + br + cr^2)
• In the limits: r -> 0, r -> infinity, r in between?
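A minimal sketch of this practical attenuation and its limiting behaviour; the coefficients a, b, c below are arbitrary example values, not standard ones.

```python
def attenuation(r, a=1.0, b=0.7, c=1.8):
    """Practical distance attenuation 1/(a + b*r + c*r^2); a, b, c are made up."""
    return 1.0 / (a + b * r + c * r * r)

# In the limits:
#   r -> 0:        tends to 1/a, staying finite instead of blowing up like 1/r^2
#   r -> infinity: the c*r^2 term dominates, so it behaves like 1/(c*r^2)
#   in between:    the b*r term lets it approximate a 1/r (line-source) falloff
print(attenuation(0.0))    # 1/a = 1.0
print(attenuation(100.0))  # tiny: dominated by c*r^2
```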

What should be used in theory

1. Points
• falls off as 1/r^2
• emits a constant
2. Lines
• falls off as 1/r, but only if the length is infinite
• emits as l (its length)
3. Areas
• doesn't fall off, but only if the area is infinite
• emits as l^2 (its area)
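A quick numerical check of the line-source claim above: summing 1/d^2 contributions from point emitters along a (nearly) infinite line gives a net falloff of 1/r. The constants here are arbitrary sketch values.

```python
def line_source_irradiance(r, half_length=10000.0, n=200001):
    """Sum 1/d^2 contributions from point emitters spaced along a line.

    The receiver sits at distance r from the line; an emitter at offset t
    along the line is at distance d with d^2 = r^2 + t^2.  half_length is
    large so the line is effectively infinite (analytic result: pi / r).
    """
    dt = 2 * half_length / (n - 1)
    total = 0.0
    for i in range(n):
        t = -half_length + i * dt
        total += dt / (r * r + t * t)
    return total

# Doubling the distance halves the received light: 1/r, not 1/r^2.
ratio = line_source_irradiance(1.0) / line_source_irradiance(2.0)
print(ratio)  # ~2.0
```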

## Global Illumination

Calculating all the light. Why bother?

• Think about what you need for a walkthrough

Each small bit y of surface in the scene

1. receives some amount of light (possibly none)
• from other bits of surface, x: \sum_bits (light emitted in the direction of this bit) * (fraction not occluded)
• B(y, <y-x>, \lambda) = \sum_surfaces (I(x, <y-x>, \lambda) + L(x, <y-x>, \lambda)) * F(x,y)
• I - light emitted
• L - light re-emitted
2. emits some amount of light (possibly none)
• I(x, <z>, \lambda)
3. re-emits some amount of light (possibly none)
• \sum_directions (received light from ...) * (BRDF to ...)
• L(x, <y-x>, \lambda) = \sum_<z> B(x, <z>, \lambda) * R(<z>, <y-x>, \lambda)

Solve the resulting equations.

1. F(x, y) dx dy is known from the geometry.
2. I(x, <z>, \lambda) and R(<z-in>, <z-out>, \lambda) are surface properties given in the model.
3. B(x, <z>, \lambda) and L(x, <z>, \lambda) are unknown.
4. Substitute the equation for B into the equation for L, eliminating B.
5. The result is a set of linear equations that can be solved for L.
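As a minimal sketch of the last step, under a purely diffuse assumption the notes do not make: dropping the direction <z> and wavelength arguments and collapsing R to a scalar reflectance rho reduces the system to the classic radiosity equation, solvable by iteration. All numbers below are made up.

```python
# Diffuse simplification: B_i = I_i + rho_i * sum_j F_ij * B_j,
# a linear system solved here by simple fixed-point (Jacobi) iteration.
def solve_radiosity(I, rho, F, iterations=100):
    n = len(I)
    B = list(I)  # start from the emitted light only
    for _ in range(iterations):
        B = [I[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Three patches: patch 0 emits, all patches reflect half of what they receive.
I   = [1.0, 0.0, 0.0]               # emitted light
rho = [0.5, 0.5, 0.5]               # scalar reflectance
F   = [[0.0, 0.4, 0.4],             # form factors F(x, y)
       [0.4, 0.0, 0.4],
       [0.4, 0.4, 0.0]]
B = solve_radiosity(I, rho, F)
L = [b - e for b, e in zip(B, I)]   # re-emitted light L = B - I
```

Once B converges, L follows by subtraction, mirroring the "once L is known, B is easily calculated" relationship in the notes.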

Once L is known,

1. B is easily calculated.
2. The light field is easily calculated at point P
• LF(P, <z>, \lambda) = \sum_x L(x, <P-x>, \lambda) \delta(<z>, <P-x>)

What's wrong with this picture?

#### Light Fields

This is a way to avoid even the need to re-project.

Let's turn our attention away from the surfaces of objects and onto the volume between objects.

At every point in this volume there is a light density

• for every possible direction
• for every visible wavelength

This quantity LF(P, <z>, \lambda) is the light field. If we knew it, we could

• evaluate it at the eye position
• at the angle heading for each pixel
• to get RGB for that pixel

The evaluation is, in fact, just a projective transformation of the light field.
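A sketch of this evaluation, assuming the light field is already known. Here `light_field` is a made-up stand-in that depends only on direction (a "sky" that brightens toward +z), and the camera is a simple pinhole looking down +z.

```python
import math

# Hypothetical stand-in for LF(P, <z>, lambda): direction-only, no wavelength.
def light_field(P, d):
    return max(0.0, d[2])

def render(eye, width, height, fov=math.pi / 2):
    """Evaluate the light field at the eye position, along the ray through
    the centre of each pixel of a pinhole camera looking down +z."""
    half = math.tan(fov / 2)
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            x = (2 * (i + 0.5) / width - 1) * half
            y = (1 - 2 * (j + 0.5) / height) * half
            n = math.sqrt(x * x + y * y + 1)   # normalize the pixel direction
            row.append(light_field(eye, (x / n, y / n, 1 / n)))
        image.append(row)
    return image

img = render(eye=(0.0, 0.0, 0.0), width=4, height=4)
```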

How do we get the light field?

1. by measurement
2. by calculation

How is the light field used in 2007? Hardly at all, yet.

But tomorrow!!

#### Plenoptic Function

Think about what the viewer can do.

1. The seriously handicapped viewer can
• not move in position
• not move the direction of gaze

Ray tracing is perfect.

2. The mildly handicapped viewer can
• not move in position
• gaze in any direction

Ray trace onto a sphere surrounding the viewer and reproject from the sphere to a view plane whenever the direction of gaze changes.
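A sketch of the lookup this implies: the scene is ray traced once into a direction-indexed sphere map, and any new gaze direction reprojects by lookup rather than by retracing. The map resolution and its contents are made up.

```python
import math

# Precomputed sphere map: radiance indexed by direction bins (theta, phi).
def sphere_lookup(sphere_map, d):
    """Map a world-space unit direction d to its (theta, phi) bin."""
    n_theta, n_phi = len(sphere_map), len(sphere_map[0])
    theta = math.acos(max(-1.0, min(1.0, d[2])))     # polar angle from +z
    phi = math.atan2(d[1], d[0]) % (2 * math.pi)     # azimuth
    i = min(int(theta / math.pi * n_theta), n_theta - 1)
    j = min(int(phi / (2 * math.pi) * n_phi), n_phi - 1)
    return sphere_map[i][j]

# Toy map: a bright upper hemisphere over a dark lower one.
sphere_map = [[1.0] * 8 for _ in range(4)] + [[0.1] * 8 for _ in range(4)]
print(sphere_lookup(sphere_map, (0.0, 0.0, 1.0)))   # gazing up: 1.0
print(sphere_lookup(sphere_map, (0.0, 0.0, -1.0)))  # gazing down: 0.1
```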

3. The unhandicapped viewer can
• move around
• gaze in any direction

Ray trace onto a sphere at each accessible point.

The third is the light field, also called the plenoptic function, and it has to be recalculated every time something in the scene moves.

#### 'Backdrop' Applications

Imagine making a game

• There is an area accessible to the players, and
• there is an area inaccessible to the players.

An easy backdrop

• Surround the accessible volume with a sphere (actually a hemisphere)
• Ray trace the scene outside the accessible volume onto the sphere
• Put the re-projected portion of the sphere into the frame buffer, depth buffer set to infinity
• Where is the eye point?
• The centre of the sphere works for the mildly handicapped viewer.
• What is missing for the unhandicapped viewer?

A more difficult backdrop

Photography

Perhaps a window

### Texture Mapping

1. Basic
   1. Map a 2D image onto the primitive using a 2D affine transformation
      • Simple if the surface of the primitive is flat
      • otherwise, ...
      • Texture pixels normally do not match display pixels, so some image processing may be needed.
   2. Backwards map the ray's intersection point into the image to get the surface properties
2. Normal Mapping (Bump Mapping)
   1. Start with a difference surface, defined with respect to the surface
   2. Calculate the normals to the difference surface and map them onto the surface of the primitive
   3. Use the mapped normals for lighting
   4. No occlusion, shadows are wrong, silhouettes are wrong, nobody notices!
3. Solid Textures
   1. Solution for mapping texture onto curved surfaces
   2. Usually procedural
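As a sketch of a solid texture: it is evaluated directly at the 3D intersection point, so a curved surface needs no 2D (u, v) parameterization. The checkerboard below is a hypothetical minimal example of such a procedural texture.

```python
import math

def checker3d(p, scale=1.0):
    """Alternate two colours in a 3D checkerboard of cells of size `scale`."""
    s = sum(math.floor(c / scale) for c in p)   # which cell does p fall in?
    return (0.9, 0.9, 0.9) if s % 2 == 0 else (0.1, 0.1, 0.1)

print(checker3d((0.5, 0.5, 0.5)))  # (0.9, 0.9, 0.9)
print(checker3d((1.5, 0.5, 0.5)))  # (0.1, 0.1, 0.1)
```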