CS488 - Introduction to Computer Graphics - Lecture 24

Ray Tracing

Adaptive Distribution Ray Tracing

What's wrong with

  1. Beam tracing
  2. Super sampling
  3. Stochastic sampling

They are too expensive, because they spend roughly the same effort everywhere in the image, whether or not the extra samples change anything.

Instead, we should explicitly try to put the work into the places that make the most difference.

Trying to do so, we arrive at heuristics like the following.

Distribute samples carefully over

  1. Reflection directions <==> glossy reflection
  2. Area lights <==> soft shadows
  3. Time <==> motion blur
  4. Pixel area <==> anti-aliasing
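The adaptive half of "adaptive distribution ray tracing" can be sketched for item 4: sample a pixel's corners, and subdivide only where the corner values disagree, so the work concentrates on edges. Everything here (the scene function, the tolerance, the recursion limit) is an illustrative assumption, not from the lecture.

```python
# Adaptive supersampling of one pixel: sample the four corners; if they
# disagree by more than a tolerance, subdivide and recurse.  scene() stands
# in for "trace a ray and return an intensity" -- a hard vertical edge at
# x = 0.5, purely for illustration.
def scene(x, y):
    return 1.0 if x < 0.5 else 0.0

calls = 0
def sample(x, y):
    global calls
    calls += 1            # count how many rays we actually spend
    return scene(x, y)

def pixel(x0, y0, size, tol=0.1, depth=0, max_depth=4):
    corners = [sample(x0, y0), sample(x0 + size, y0),
               sample(x0, y0 + size), sample(x0 + size, y0 + size)]
    if max(corners) - min(corners) <= tol or depth == max_depth:
        return sum(corners) / 4.0
    half = size / 2.0
    return sum(pixel(x0 + dx, y0 + dy, half, tol, depth + 1, max_depth)
               for dx in (0, half) for dy in (0, half)) / 4.0

print(pixel(0.0, 0.0, 1.0), calls)
```

On this edge the recursion drills down only around x = 0.5; uniform regions are settled with four samples each, so far fewer rays are traced than a uniform grid at the finest level would need.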

These techniques are hard!
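As a concrete instance of item 2, distributing shadow rays over the area of a light yields soft shadows with penumbrae. A minimal sketch under invented geometry (a square light overhead and a single spherical occluder); all names and numbers are illustrative.

```python
import random

# Hypothetical scene: a square area light in the plane y = 4 and a
# spherical occluder between it and the floor.
LIGHT_CENTER = (0.0, 4.0, 0.0)
LIGHT_HALF = 0.5                 # half-width of the square light
OCCLUDER_C = (0.0, 2.0, 0.0)
OCCLUDER_R = 0.6

def ray_hits_sphere(o, d, c, r):
    # Does the segment o -> o + d intersect the sphere (c, r)?
    # Solve |o + t*d - c|^2 = r^2 and keep t in (0, 1).
    oc = tuple(o[i] - c[i] for i in range(3))
    a = sum(x * x for x in d)
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    cc = sum(x * x for x in oc) - r * r
    disc = b * b - 4 * a * cc
    if disc < 0:
        return False
    t = (-b - disc ** 0.5) / (2 * a)
    return 0.0 < t < 1.0

def soft_shadow(p, n_samples=256):
    # Distribute shadow rays over the light's area; the visible fraction
    # approximates the penumbra value at p.
    unblocked = 0
    for _ in range(n_samples):
        lx = LIGHT_CENTER[0] + random.uniform(-LIGHT_HALF, LIGHT_HALF)
        lz = LIGHT_CENTER[2] + random.uniform(-LIGHT_HALF, LIGHT_HALF)
        d = (lx - p[0], LIGHT_CENTER[1] - p[1], lz - p[2])
        if not ray_hits_sphere(p, d, OCCLUDER_C, OCCLUDER_R):
            unblocked += 1
    return unblocked / n_samples

random.seed(1)
print(soft_shadow((0.0, 0.0, 0.0)))   # directly below the occluder: fully blocked
print(soft_shadow((1.5, 0.0, 0.0)))   # near the edge: partial visibility
print(soft_shadow((3.0, 0.0, 0.0)))   # well to the side: fully visible
```

A point straight under the occluder sees none of the light (umbra), a point far to the side sees all of it, and points in between get fractional visibility (penumbra).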


The caustic project.

What is a caustic? A bright pattern formed where light, focused by reflection or refraction, is concentrated onto a surface: the bright arc inside a coffee cup, or the shimmering lines on the bottom of a swimming pool.

Bidirectional Ray Tracing

Recursive ray tracing makes a tree with a big fan-out: Cost ~ n^d, where d is the depth and n is the fan-out.

Bright idea

  1. Trace out from eye
  2. Trace out from sources of illumination
  3. Match the rays in the centre ... somehow!
  4. The resulting double cone reaches the same depth at Cost ~ 2 n^(d/2), which is surely worth it.
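Plugging in small numbers shows why the saving in step 4 matters (the fan-out and depth here are illustrative, not from the lecture):

```python
# Illustrative fan-out and depth.
n, d = 4, 8
one_way = n ** d              # one tree of depth d, traced from the eye only
two_way = 2 * n ** (d // 2)   # two trees of depth d/2: eye side + light side
print(one_way, two_way)       # 65536 vs 512
```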

The problem is step 3. How do you match the rays? The current state of the art is photon mapping. Here's how it works.

From every light source: emit photons into the scene, follow them through reflections and refractions, and store each one where it lands, in a photon map (typically a kd-tree).

For each pixel: trace an eye ray; at the hit point, estimate the incident light from the power and density of the nearest photons in the map.

Notice the tuning and calibration needed: how many photons to emit, how many neighbours to gather, and how large a search radius to allow.
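The two passes can be sketched in miniature. This toy version makes strong simplifying assumptions: photons fall straight down onto a 2 x 2 patch of floor, there are no bounces, and the "photon map" is a plain list searched by brute force instead of a kd-tree; every name and number is illustrative.

```python
import math, random

random.seed(0)
LIGHT_POWER = 100.0
N_PHOTONS = 10000

# Pass 1: from every light source, emit photons and store where they land.
photons = []
power_per_photon = LIGHT_POWER / N_PHOTONS
for _ in range(N_PHOTONS):
    # Uniform footprint on a 2 x 2 patch of floor; a real tracer samples
    # emission directions and follows reflections and refractions.
    photons.append((random.uniform(-1, 1), random.uniform(-1, 1)))

# Pass 2: for each pixel (here, one query point), estimate illumination
# from the density of the k nearest photons: E ~ (gathered power) / (pi r^2).
def irradiance_estimate(x, z, k=50):
    d2 = sorted((px - x) ** 2 + (pz - z) ** 2 for (px, pz) in photons)
    r2 = d2[k - 1]                     # squared radius enclosing the k nearest
    return k * power_per_photon / (math.pi * r2)

print(irradiance_estimate(0.0, 0.0))   # ~ 100 / 4 = 25 for this uniform footprint
```

The tuning knobs mentioned above appear directly: the photon count N_PHOTONS, the neighbour count k, and (implicitly) the gather radius r.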

What creates the caustics?

Participating Media

What is fog?

What is beer?

What they have in common is that both are participating media: light is attenuated (and scattered) along the ray, falling off exponentially with the distance travelled through the medium.

What happens to the light that doesn't make it through? It is absorbed, or scattered into other directions.
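The attenuation in both media follows Beer's law: after a distance t through a medium with extinction coefficient sigma, the transmitted fraction is exp(-sigma * t). A sketch with a made-up coefficient:

```python
import math

# Beer's law: the fraction of light surviving distance t through a medium
# with extinction coefficient sigma.  The sigma below is hypothetical.
def transmittance(sigma, t):
    return math.exp(-sigma * t)

sigma = 0.5                      # per metre, an invented fog density
for t in (0.0, 1.0, 5.0):
    print(t, transmittance(sigma, t))
```

Doubling the path length squares the transmittance, which is why thick fog goes opaque so quickly.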


What is a shadow?

Shadows come 'for free' in the ray tracer: at each hit point, shoot a shadow ray toward each light and check whether anything blocks it.
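'For free' here means one extra ray per light. A sketch of the shadow test against a hypothetical sphere-only scene (all geometry illustrative):

```python
# Shadow test at a shading point: shoot one ray toward the light and see
# whether any sphere lies strictly between them.
def in_shadow(point, light, spheres, eps=1e-4):
    d = tuple(light[i] - point[i] for i in range(3))
    for (c, r) in spheres:
        # Standard ray/sphere intersection: |point + t*d - c|^2 = r^2.
        oc = tuple(point[i] - c[i] for i in range(3))
        a = sum(x * x for x in d)
        b = 2.0 * sum(oc[i] * d[i] for i in range(3))
        cc = sum(x * x for x in oc) - r * r
        disc = b * b - 4 * a * cc
        if disc < 0:
            continue
        t = (-b - disc ** 0.5) / (2 * a)
        if eps < t < 1.0:          # occluder strictly between point and light
            return True
    return False

spheres = [((0.0, 2.0, 0.0), 0.5)]
print(in_shadow((0.0, 0.0, 0.0), (0.0, 4.0, 0.0), spheres))  # True: blocked
print(in_shadow((2.0, 0.0, 0.0), (0.0, 4.0, 0.0), spheres))  # False: clear path
```

The eps offset serves the same purpose as in any ray tracer: it keeps the surface from shadowing itself due to floating-point error.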

Can we also get shadows in a scan-conversion renderer? Yes. The methods, in increasing order of cost:

  1. Projective shadows

    Notice that we know a lot about how to project.

  2. Shadow maps

    How does this interact with scan conversion?

    What if the light is inside the view frustum?

  3. Shadow volumes
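The shadow-map idea, reduced to one dimension for illustration: a depth image rendered from the light records the nearest occluder in each direction, and at shading time a point is lit if and only if it is no farther from the light than that stored depth (plus a bias). The map below is filled in by hand; a real renderer would rasterize the scene from the light's viewpoint to produce it.

```python
# depth_map[i] holds the distance from the light to the nearest occluder
# in direction i.  Hand-filled, illustrative values: an occluder at
# distance 3.0 covers directions 2 and 3.
depth_map = [9.9, 9.9, 3.0, 3.0, 9.9]
BIAS = 0.05                      # avoids "shadow acne" self-shadowing

def lit(direction_index, distance_from_light):
    return distance_from_light <= depth_map[direction_index] + BIAS

print(lit(2, 3.0))   # True: this is the occluder's own surface
print(lit(2, 7.0))   # False: behind the occluder, in shadow
print(lit(0, 7.0))   # True: nothing blocks this direction
```

The bias is one of the interactions with scan conversion asked about above: the map's discrete resolution and depth precision force a tolerance, and tuning it trades shadow acne against light leaking.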

Comment on global illumination. If you are doing a walk-through, you can calculate the illumination on each polygon once, then re-project the scene from different viewpoints as the user moves around.
