CS488 - Introduction to Computer Graphics - Lecture 16

1. Mid-term
2. Project proposals

Ray Tracing Petering Out

Anti-aliasing

This is a topic close to image processing, but

• we don't want to lose information while we are scan converting

For example, scan converting polygons using the A-buffer:

1. Sort the polygons back to front
2. A polygon covers part of a pixel.
3. What do we do?
• naive: write the pixel
• slightly less naive, if it covers more than half, write the pixel
• OpenGL: calculate the coverage, then blend the pixel, using the alpha-buffer
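The coverage-blend option can be sketched as follows. This is a minimal illustration, not the actual A-buffer algorithm: a real A-buffer keeps sub-pixel coverage masks and fragment lists, which are omitted here.

```python
def blend_pixel(src, dst, coverage):
    """Blend a polygon colour into a pixel by its coverage fraction.

    src, dst: (r, g, b) tuples in [0, 1]; coverage in [0, 1] is the
    fraction of the pixel's area the polygon covers.  Front-to-back
    ordering and sub-pixel masks are omitted in this sketch.
    """
    return tuple(coverage * s + (1.0 - coverage) * d
                 for s, d in zip(src, dst))

# A red polygon covering 25% of a white pixel:
print(blend_pixel((1, 0, 0), (1, 1, 1), 0.25))  # (1.0, 0.75, 0.75)
```

The naive options correspond to `coverage` rounded to 0 or 1; blending keeps the fractional information instead of discarding it.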

Two different, but linked, types of artifacts

1. Spatial (or temporal) frequency aliasing,
• image features appear at inappropriate sizes
2. Reconstruction aliasing
• totally new features, like jaggedness, appear.

Exact solutions are simple in principle

1. Remove high spatial frequencies by filtering
1. Fourier transform in image space: remember that you need to keep both amplitude and phase.
2. Filter
3. Inverse transform in image space

Filtering is the tricky part.
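The transform–filter–inverse-transform pipeline can be sketched in 1D with a pure-Python DFT (a real implementation would use an FFT over the 2D image). Note that the complex coefficients carry both amplitude and phase, as warned above:

```python
import cmath
import math

def dft(x):
    """Discrete Fourier transform (naive O(N^2) version, for clarity)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT; the input is real, so we return the real part."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def lowpass(x, cutoff):
    """Zero every frequency bin at or above `cutoff` (and its mirror)."""
    X = dft(x)
    N = len(X)
    for k in range(cutoff, N - cutoff + 1):
        X[k] = 0
    return idft(X)

# A pure low-frequency signal (2 cycles over 32 samples) survives intact:
signal = [math.cos(2 * math.pi * 2 * n / 32) for n in range(32)]
filtered = lowpass(signal, 4)
```

A signal above the cutoff would come back as (nearly) zero; that removal of high frequencies before sampling is exactly what prevents them from aliasing down to false low frequencies.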

2. Use a sampling filter that is the inverse of the reconstruction filter
1. For the display to be used find the pixel shape
2. Construct a sampling filter appropriate for the pixel shape
3. Do the ray-tracing calculation over a weighted area

Finding the pixel shape is the hard part.

In practice

1. Beam tracing
2. Super sampling
3. Stochastic sampling
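Super sampling and stochastic sampling differ only in where the sub-pixel samples go; here is a sketch of both, written against a hypothetical `trace(x, y)` callback that casts one ray through the given sub-pixel position:

```python
import random

def supersample(trace, px, py, n=4):
    """Regular n*n supersampling: average rays on a uniform sub-pixel grid."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += trace(px + (i + 0.5) / n, py + (j + 0.5) / n)
    return total / (n * n)

def stochastic_sample(trace, px, py, n=4, rng=random):
    """Jittered (stochastic) sampling: one random ray per grid cell.

    Trades the regular aliasing patterns of a uniform grid for
    less objectionable noise.
    """
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += trace(px + (i + rng.random()) / n,
                           py + (j + rng.random()) / n)
    return total / (n * n)

# A half-covered pixel: intensity 1 left of x = 0.5, else 0.
edge = lambda x, y: 1.0 if x < 0.5 else 0.0
print(supersample(edge, 0.0, 0.0))  # 0.5
```

Either way, the pixel astride the edge gets an intermediate value instead of snapping to 0 or 1 — that is the antialiasing.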

Distribution Ray Tracing

What's wrong with

1. Beam tracing
2. Super sampling
3. Stochastic sampling

None of them tries explicitly to put the work into the places that make the most difference.

Trying to do so, we should get heuristics like

• If <xxx> is important, then put extra rays into <yyy>.

Here are a few such heuristics

Distribute carefully over

1. Reflection directions <==>
• you want better highlights.
• Why would you want better highlights?
• How would you do this?
2. Area lights <==>
• Why would you want soft shadows?
• How would you get them?
3. Aperture <==>
• you want depth of focus effects
• Why would you want depth of focus effects?
• How would you get them?
4. Time <==>
• you want motion blur.
• Why would you want motion blur?
• How would you get it?
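Heuristic 2 (area lights), for instance, replaces the single shadow ray with several rays distributed over the light's area. A sketch, where `occluded` is a hypothetical visibility-test callback supplied by the ray tracer:

```python
import random

def soft_shadow(point, light_corner, light_u, light_v, occluded,
                n=16, rng=random):
    """Estimate the visible fraction of a rectangular area light.

    light_corner, light_u, light_v: one corner and the two edge
    vectors of the light (3-tuples).  occluded(p, q) returns True if
    the segment from surface point p to light point q is blocked --
    an assumed callback, not part of any particular API.
    """
    visible = 0
    for _ in range(n):
        s, t = rng.random(), rng.random()
        q = tuple(light_corner[k] + s * light_u[k] + t * light_v[k]
                  for k in range(3))
        if not occluded(point, q):
            visible += 1
    return visible / n  # 0 = umbra, 1 = fully lit, in between = penumbra
```

The returned fraction scales the light's contribution in the lighting model; points that can see only part of the light get intermediate values, which is exactly the soft penumbra.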

All these techniques are inordinately expensive! To cut down the work:

• Render the image w/o antialiasing
• Use an edge finding algorithm to collect the pixels where the colour changes abruptly
• Re-render those pixels using antialiasing
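This cost-cutting scheme is a form of adaptive antialiasing. A sketch on a greyscale image, with the expensive re-render standing in as a hypothetical `render_aa(x, y)` callback:

```python
def adaptive_aa(image, render_aa, threshold=0.1):
    """Re-render only the pixels whose colour changes abruptly.

    image: 2D list of greyscale values; render_aa(x, y) returns an
    antialiased value for one pixel (an assumed callback).  A pixel
    counts as an edge if it differs from any 4-neighbour by more
    than threshold.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            neighbours = [image[y + dy][x + dx]
                          for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0))
                          if 0 <= y + dy < h and 0 <= x + dx < w]
            if any(abs(image[y][x] - v) > threshold for v in neighbours):
                out[y][x] = render_aa(x, y)
    return out

# A hard vertical edge: only the two columns astride it get re-rendered.
img = [[0.0, 0.0, 1.0, 1.0]] * 3
smoothed = adaptive_aa(img, lambda x, y: 0.5)
print(smoothed[0])  # [0.0, 0.5, 0.5, 1.0]
```

Only the edge pixels pay the supersampling cost; smooth interior regions are left alone.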

Lighting

The caustic project.

What is a caustic?

Bidirectional Ray Tracing

Recursive ray tracing makes a tree with a big fan-out: Cost ~ n^d, where d is the depth and n is the fan-out.

• n is big, so it's worth getting d down.

Bright idea

1. Trace out from eye
2. Trace out from sources of illumination
3. Match the rays in the centre ... somehow!
4. The resulting double cone gets the same performance at Cost ~ 2*n^(d/2), which is surely worth it.
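With some illustrative numbers (say fan-out n = 10 and depth d = 6 — assumed values, just to show the scale of the saving):

```python
n, d = 10, 6  # illustrative fan-out and depth

one_way = n ** d              # single tree traced from the eye
two_way = 2 * n ** (d // 2)   # two trees, each half the depth

print(one_way, two_way)  # 1000000 2000
```

Halving the depth turns an exponent into a square root of the original cost — which is why step 3, hard as it is, is worth attacking.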

The problem is step 3. How do you match the rays? The current state of the art is photon mapping. Here's how it works.

From every light source

• send out rays in 'randomly' chosen directions
• For each ray
• Follow it until it hits a surface.
• If the surface is reflective
• Send out a ray in the reflection direction
• If the surface is not reflective
• Accumulate a pool of illumination
• Send out a ray in a randomly chosen direction
• Continue following rays until ...

For each pixel

• Cast a ray into the scene
• When it hits a surface, use the light you find accumulated there for the illumination in your lighting model

Notice the tuning and calibration needed.
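The gather step above can be sketched as follows: photons stored at surface hit points during the light pass, and per-pixel illumination estimated from the k nearest. This is only a skeleton — a real photon map uses a kd-tree instead of a linear search, and normalises by the area the photons cover:

```python
import math

def estimate_irradiance(photons, point, k=3):
    """Average the power of the k photons nearest to `point`.

    photons: list of ((x, y, z), power) pairs accumulated during the
    light pass.  A real implementation divides by the area of the
    disc containing the k photons; this sketch just averages, which
    is part of the tuning and calibration mentioned above.
    """
    dist = lambda p: math.dist(p[0], point)
    nearest = sorted(photons, key=dist)[:k]
    return sum(power for _, power in nearest) / len(nearest)

photons = [((0, 0, 0), 1.0), ((0.1, 0, 0), 0.5), ((5, 5, 5), 9.0)]
print(estimate_irradiance(photons, (0, 0, 0), k=2))  # 0.75
```

Caustics emerge naturally: wherever the light pass concentrates many refracted or reflected photons in a small region, the gather step reports bright illumination there.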

What creates the caustics?

Texture Mapping

1. Basic
1. Map a 2D image onto the primitive using a 2D affine transformation
• Simple if the surface of the primitive is flat
• otherwise, ...
• Texture pixels normally do not match display pixels, so some image processing may be needed.
2. Backwards map the intersection point of the ray into the image to get the surface properties
2. Normal Mapping (Bump mapping)
1. Start with a difference surface, defined with respect to the surface
2. Calculate the normals to the difference surface and map them onto the surface of the primitive
3. Use the mapped surface models for lighting
4. No occlusion, shadows are wrong, silhouettes are wrong, nobody notices!
3. Solid Textures
1. Solution to mapping texture onto curved surfaces
2. Usually procedural
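A procedural solid texture evaluates colour directly from the 3D intersection point, so no surface parameterisation is needed. A classic example is a 3D checker (a hypothetical sketch; real systems more often use noise functions):

```python
import math

def checker3d(x, y, z, scale=1.0):
    """Classic solid checker: alternate two colours over unit cubes.

    Because the texture is defined everywhere in 3D, any surface --
    however curved -- simply picks up the colour at its intersection
    point, which is exactly the appeal of solid textures.
    """
    parity = (math.floor(x / scale) + math.floor(y / scale)
              + math.floor(z / scale)) % 2
    return (1.0, 1.0, 1.0) if parity == 0 else (0.1, 0.1, 0.1)

print(checker3d(0.5, 0.5, 0.5))  # (1.0, 1.0, 1.0)
print(checker3d(1.5, 0.5, 0.5))  # (0.1, 0.1, 0.1)
```

The ray tracer just calls this with the hit point; no (u, v) mapping, and therefore no distortion on curved surfaces.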