CS488 - Introduction to Computer Graphics - Lecture 16

Comments and Questions

  1. Mid-term
  2. Project proposals

Ray Tracing Petering Out


Anti-aliasing

This is a topic close to image processing, but it arises whenever we sample a scene, ray tracing included.

For example, consider scan-converting polygons using the A-buffer:

  1. Sort the polygons back to front.
  2. Start with a black pixel.
  3. A polygon covers part of the pixel.
  4. What do we do? (One answer is sketched below.)
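A minimal sketch in C++ of one possible answer: composite each polygon fragment's colour weighted by the fraction of the pixel it covers. The Colour and Fragment types are placeholders, and the coverage fractions are assumed to be given; a real A-buffer estimates them with subpixel bit masks.

    #include <vector>

    struct Colour { float r, g, b; };

    struct Fragment {
        Colour colour;
        float  coverage;   // fraction of the pixel this polygon covers, in [0, 1]
    };

    // Composite fragments already sorted back to front (step 1).  Each fragment
    // hides `coverage` of whatever lies behind it and contributes its own colour
    // weighted by that coverage -- a crude, order-dependent approximation.
    Colour shadePixel(const std::vector<Fragment>& backToFront)
    {
        Colour out{0.0f, 0.0f, 0.0f};            // step 2: start with a black pixel
        for (const Fragment& f : backToFront) {  // steps 3-4: blend each partial cover
            out.r = (1.0f - f.coverage) * out.r + f.coverage * f.colour.r;
            out.g = (1.0f - f.coverage) * out.g + f.coverage * f.colour.g;
            out.b = (1.0f - f.coverage) * out.b + f.coverage * f.colour.b;
        }
        return out;
    }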

Two different, but linked, types of artifacts

  1. Spatial (or temporal) frequency aliasing
  2. Reconstruction aliasing

Exact solutions are simple in principle

  1. Remove high spatial frequencies by filtering
    1. Fourier transform the image into frequency space: remember that you need to keep both amplitude and phase.
    2. Filter out the high frequencies.
    3. Inverse transform back into image space.

    Filtering is the tricky part.

  2. Use a sampling filter that is the inverse of the reconstruction filter
    1. For the display to be used, find the pixel shape.
    2. Construct a sampling filter appropriate for that pixel shape.
    3. Do the ray-tracing calculation over a weighted area (a sketch follows this list).

    Finding the pixel shape is the hard part.
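A minimal sketch of step 3 of the second approach: weight each ray's result by a filter that approximates the pixel shape. A Gaussian is assumed here purely for illustration; the real pixel shape would have to be measured for the display in use, and the Sample type is a placeholder.

    #include <cmath>
    #include <vector>

    struct Sample {
        float dx, dy;     // ray offset from the pixel centre, in pixel units
        float radiance;   // result of tracing a ray through that offset
    };

    // Weight each sample by a Gaussian approximation of the pixel shape and
    // normalise, so the filter sums to one over the samples actually taken.
    float filterPixel(const std::vector<Sample>& samples, float sigma = 0.5f)
    {
        float weighted = 0.0f, total = 0.0f;
        for (const Sample& s : samples) {
            float w = std::exp(-(s.dx * s.dx + s.dy * s.dy) / (2.0f * sigma * sigma));
            weighted += w * s.radiance;
            total    += w;
        }
        return total > 0.0f ? weighted / total : 0.0f;
    }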

In practice

  1. Beam tracing
  2. Super sampling
  3. Stochastic sampling (a jittered-sampling sketch follows this list)
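A minimal sketch of stochastic sampling by jittering: divide the pixel into a small grid and trace one ray through a random point in each cell, which turns regular aliasing artifacts into less objectionable noise. The traceRay callback stands in for the renderer's own per-ray shading routine.

    #include <functional>
    #include <random>

    // Radiance through image point (px, py); supplied by the ray tracer itself.
    using TraceFn = std::function<float(float, float)>;

    // Average of grid x grid jittered samples over the pixel at (x, y).
    float samplePixel(int x, int y, int grid, const TraceFn& traceRay,
                      std::mt19937& rng)
    {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        float sum = 0.0f;
        for (int i = 0; i < grid; ++i)
            for (int j = 0; j < grid; ++j)
                sum += traceRay(x + (i + u(rng)) / grid,
                                y + (j + u(rng)) / grid);
        return sum / float(grid * grid);
    }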

Distribution Ray Tracing

What's wrong with

  1. Beam tracing
  2. Super sampling
  3. Stochastic sampling

None of them explicitly tries to put the work into the places that make the most difference.

Trying to do so leads to heuristics like the following.

Distribute the rays carefully over (a soft-shadow sketch follows this list)

  1. Reflection directions <==> glossy (blurred) reflection
  2. Area lights <==> soft shadows with penumbrae
  3. Aperture <==> depth of field
  4. Time <==> motion blur
  5. Adaptive sampling <==> anti-aliasing
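A minimal sketch of item 2, area lights: estimate how much of a rectangular light is visible from a shading point by jittering shadow rays over its surface; the visible fraction scales the light's contribution, giving penumbrae. Vec3, AreaLight, and the occluded callback are illustrative placeholders, not names from the course code.

    #include <functional>
    #include <random>

    struct Vec3 { float x, y, z; };

    struct AreaLight {
        Vec3 corner, edgeU, edgeV;   // a rectangular light: corner plus two edge vectors
    };

    // occluded(from, to) should report whether anything blocks the segment.
    using OccludedFn = std::function<bool(const Vec3&, const Vec3&)>;

    // Fraction of the light visible from p, estimated with n jittered samples.
    float lightVisibility(const Vec3& p, const AreaLight& L, int n,
                          const OccludedFn& occluded, std::mt19937& rng)
    {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        int visible = 0;
        for (int i = 0; i < n; ++i) {
            float a = u(rng), b = u(rng);
            Vec3 q{ L.corner.x + a * L.edgeU.x + b * L.edgeV.x,
                    L.corner.y + a * L.edgeU.y + b * L.edgeV.y,
                    L.corner.z + a * L.edgeU.z + b * L.edgeV.z };
            if (!occluded(p, q)) ++visible;
        }
        return float(visible) / float(n);    // 0 = full shadow, 1 = fully lit
    }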

All these techniques, except the last, are inordinately expensive! To cut down the work:

Adaptive anti-aliasing: sample coarsely first, and add samples only where nearby samples disagree (a sketch follows).

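A minimal sketch of the adaptive idea: sample the corners of a region, and subdivide only where they disagree by more than a tolerance. The traceRay callback is again a stand-in for the renderer's per-ray routine; corner rays are re-traced at each level purely to keep the sketch short.

    #include <algorithm>
    #include <functional>

    using TraceFn = std::function<float(float, float)>;

    // Average radiance over the square [x0,x1] x [y0,y1], refining recursively
    // wherever the corner samples differ by more than `tol`.
    float adaptiveSample(float x0, float y0, float x1, float y1,
                         const TraceFn& traceRay, float tol, int depth)
    {
        float c[4] = { traceRay(x0, y0), traceRay(x1, y0),
                       traceRay(x0, y1), traceRay(x1, y1) };
        float lo = c[0], hi = c[0];
        for (float v : c) { lo = std::min(lo, v); hi = std::max(hi, v); }

        if (hi - lo < tol || depth == 0)            // corners agree, or give up
            return 0.25f * (c[0] + c[1] + c[2] + c[3]);

        float mx = 0.5f * (x0 + x1), my = 0.5f * (y0 + y1);
        return 0.25f * (adaptiveSample(x0, y0, mx, my, traceRay, tol, depth - 1) +
                        adaptiveSample(mx, y0, x1, my, traceRay, tol, depth - 1) +
                        adaptiveSample(x0, my, mx, y1, traceRay, tol, depth - 1) +
                        adaptiveSample(mx, my, x1, y1, traceRay, tol, depth - 1));
    }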


Lighting

The caustic project.

What is a caustic? A bright pattern formed where light is focused by specular reflection or refraction onto a diffuse surface, e.g. the shimmering curves on the bottom of a swimming pool.

Bidirectional Ray Tracing

Recursive ray tracing makes a tree with a big fan-out: Cost ~ n^d (exponential in the depth), where d is the depth of the tree and n is the fan-out.

Bright idea

  1. Trace out from eye
  2. Trace out from sources of illumination
  3. Match the rays in the centre ... somehow!
  4. The resulting double cone reaches the same total depth at Cost ~ 2 n^(d/2), which is surely worth it (a worked comparison follows this list).
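A quick worked comparison, with n = 2 and d = 20 chosen only for illustration:

    % illustrative numbers, not from the notes
    \[
      n^{d} = 2^{20} \approx 10^{6} \text{ rays},
      \qquad
      2\,n^{d/2} = 2 \cdot 2^{10} \approx 2 \times 10^{3} \text{ rays}.
    \]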

The problem is step 3. How do you match the rays? The current state of the art is photon mapping. Here's how it works.

From every light source: shoot a large number of photons into the scene and let them bounce; store each hit on a diffuse surface (position, incoming direction, power) in a photon map, typically a kd-tree.

For each pixel: trace a ray from the eye; at the point it hits, estimate the indirect illumination by gathering the k nearest photons from the map (a gather sketch follows).

Notice the tuning and calibration needed: how many photons to shoot, how many to gather, and how to scale their power.
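A minimal sketch of the gather step, assuming the k nearest photons have already been found (the kd-tree query is not shown): sum their power and divide by the area of the disc they cover. The Photon and Vec3 types are illustrative, and the estimate omits the surface's reflectance factor for brevity.

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Photon {
        Vec3 position;
        Vec3 power;     // flux carried by this photon (r, g, b)
    };

    // Density estimate: total photon power inside the gathering disc, divided by
    // the disc's area (pi * r^2).  Tuning k, and hence the radius, trades blur
    // against noise -- the calibration mentioned above.
    Vec3 estimateRadiance(const std::vector<Photon>& nearest, const Vec3& hit)
    {
        float r2 = 0.0f;                     // squared radius of the gather disc
        Vec3 flux{0.0f, 0.0f, 0.0f};
        for (const Photon& p : nearest) {
            Vec3 d{ p.position.x - hit.x, p.position.y - hit.y, p.position.z - hit.z };
            r2 = std::max(r2, d.x * d.x + d.y * d.y + d.z * d.z);
            flux.x += p.power.x;  flux.y += p.power.y;  flux.z += p.power.z;
        }
        const float area = 3.14159265f * r2;
        if (area <= 0.0f) return Vec3{0.0f, 0.0f, 0.0f};
        return Vec3{ flux.x / area, flux.y / area, flux.z / area };
    }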

What creates the caustics? Photons that reach a diffuse surface only after specular bounces (refraction or mirror reflection); photon mapping typically stores them in a separate, denser caustic map.

Participating Media

Shadows

Radiosity


Texture Mapping

  1. Basic
    1. Start with a 2D image: pixellated or procedural
    2. Map 2D image onto primitive using a 2D affine transformation
      • Simple if the surface of the primitive is flat
      • otherwise, ...
      • Texture pixels normally do not match display pixels, so some image processing may be needed.
    3. Backwards-map the ray's intersection point into the image to get the surface properties (see the sketch after this list)
  2. Normal Mapping (Bump mapping)
    1. Start with a difference surface, defined with respect to the surface
    2. Calculate the normals to the difference surface and map them onto the surface of the primitive
    3. Use the mapped surface models for lighting
    4. No occlusion, shadows are wrong, silhouettes are wrong, nobody notices!
  3. Solid Textures
    1. A solution to mapping texture onto curved surfaces: define the texture over all of 3D space and evaluate it at the intersection point
    2. Usually procedural
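A minimal sketch of the basic backwards-mapping step and of a solid texture, assuming the primitive is a unit sphere at the origin. The spherical (u, v) mapping, nearest-texel lookup, and 3D checker are standard illustrative choices, not necessarily what the course code uses.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Colour { float r, g, b; };

    struct Image {                       // a pixellated 2D texture
        int width, height;
        std::vector<Colour> texels;      // row-major, size = width * height
        Colour at(int x, int y) const { return texels[y * width + x]; }
    };

    // Backwards map: intersection point on a unit sphere -> (u, v) in [0, 1]
    // via longitude and latitude.
    void sphereUV(float px, float py, float pz, float& u, float& v)
    {
        const float pi = 3.14159265f;
        u = 0.5f + std::atan2(pz, px) / (2.0f * pi);
        v = 0.5f - std::asin(py) / pi;
    }

    // Nearest-texel lookup; a real tracer would filter here, because texture
    // pixels rarely line up with display pixels.
    Colour sampleTexture(const Image& img, float u, float v)
    {
        int x = std::min(int(u * img.width),  img.width  - 1);
        int y = std::min(int(v * img.height), img.height - 1);
        return img.at(x, y);
    }

    // A solid (procedural) texture needs no surface parameterisation at all:
    // just evaluate it at the 3D intersection point.
    Colour checker3D(float px, float py, float pz, float scale = 1.0f)
    {
        int s = int(std::floor(px * scale)) + int(std::floor(py * scale))
              + int(std::floor(pz * scale));
        return (s & 1) ? Colour{1.0f, 1.0f, 1.0f} : Colour{0.1f, 0.1f, 0.1f};
    }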
