# Ray Tracing

What's wrong with

1. Beam tracing
2. Super sampling
3. Stochastic sampling

They are too expensive, because

• They put extra work into every pixel
• For most pixels the extra work gets no reward

Instead, we should explicitly put the extra work into the places where it makes the most difference.

Trying to do so, we arrive at heuristics of the form

• If <xxx> is important, then put extra rays into <yyy>.

Here are a few such heuristics

Distribute carefully over

1. Reflection directions <==>
• You want better highlights.
• Put extra rays in the highlight
• How?
• Ray-trace a coarse image
• Find the pixels/rays that contribute to the highlight
• Send extra rays between and near them
2. Area lights <==>
• Put extra rays at the edges of the shadows
• How?
• Ray-trace coarsely with a point light source
• Determine the pixels/rays that are near the edges of shadows
• Send extra rays doing a more complex lighting calculation near shadow edges
• How would you find the shadow edges?
3. Time <==>
• You want motion blur.
• Concentrate on the objects that are moving
• How?
• Ray-trace coarsely for different times
• Determine pixels/rays that change between the images
• Cast extra rays there for even more times
• Average
4. Anti-aliasing
• You want better anti-aliasing
• Put extra rays where more than one primitive contributes to a pixel
• How?
• Ray-trace coarsely
• Run an edge detector to find pixels/rays on edges
• Cast extra rays for those pixels and their neighbours.
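As a concrete illustration of the last heuristic, here is a minimal sketch in Python. The scene, the `trace_ray` stand-in, the grid size, and the 4-neighbour edge detector are all toy assumptions invented for the sketch; a real renderer would cast full recursive rays and use a proper edge detector.

```python
import random

# Toy "scene": trace_ray returns 1.0 inside a diagonal half-plane, else 0.0.
# Stand-in for a full recursive ray trace (hypothetical, for illustration).
def trace_ray(x, y):
    return 1.0 if x + y < 1.0 else 0.0

def adaptive_render(n=8, extra=4):
    # Pass 1: one coarse ray through each pixel centre.
    coarse = [[trace_ray((i + 0.5) / n, (j + 0.5) / n) for i in range(n)]
              for j in range(n)]

    # Crude edge detector: flag pixels that differ from any 4-neighbour.
    def on_edge(j, i):
        for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            jj, ii = j + dj, i + di
            if 0 <= jj < n and 0 <= ii < n and coarse[jj][ii] != coarse[j][i]:
                return True
        return False

    # Pass 2: cast extra jittered rays only in the flagged pixels and average.
    image = [row[:] for row in coarse]
    for j in range(n):
        for i in range(n):
            if on_edge(j, i):
                samples = [coarse[j][i]]
                for _ in range(extra):
                    samples.append(trace_ray((i + random.random()) / n,
                                             (j + random.random()) / n))
                image[j][i] = sum(samples) / len(samples)
    return image
```

Only the pixels near the diagonal edge pay for the extra rays; interior pixels keep their single coarse sample.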

These techniques are hard!

• Not to cast the extra rays
• But to find out where extra rays are needed.

# Lighting

The caustic project.

What is a caustic?

## Bidirectional Ray Tracing

Recursive ray tracing makes a tree with a big fan-out: Cost ~ n^d, where d is the depth and n is the fan-out.

• n is big, so it's worth getting d down.

Bright idea

1. Trace out from eye
2. Trace out from sources of illumination
3. Match the rays in the centre ... somehow!
4. The resulting double cone produces the same image at Cost ~ 2 n^(d/2), which is surely worth it.
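A quick sanity check on the arithmetic (a toy calculation, assuming a full tree of n secondary rays per bounce):

```python
# Compare ray-tree sizes for one-way vs bidirectional tracing.
# Fan-out n rays per bounce, depth d bounces: a full tree has ~ n^d nodes.
def one_way_cost(n, d):
    return n ** d

def bidirectional_cost(n, d):
    # Two half-depth trees, one from the eye and one from the lights.
    return 2 * n ** (d // 2)

# e.g. n = 10 secondary rays, d = 6 bounces:
# one-way:       10**6     = 1,000,000 rays
# bidirectional: 2 * 10**3 = 2,000 rays
```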

The problem is step 3. How do you match the rays? The current state of the art is photon mapping. Here's how it works.

From every light source

• Send out rays in randomly chosen directions
• For each ray
• Follow it until it hits a surface.
• If the surface is reflective
• Send out a ray in the reflection direction
• If the surface is not reflective
• Accumulate a pool of illumination
• Send out a ray in a randomly chosen direction
• Continue following rays until ...

For each pixel

• Cast a ray into the scene
• When it hits a surface, use the light you find accumulated there for the illumination in your lighting model
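The two passes can be sketched as follows. The 1-D floor, the blocker geometry, the light position, and the gather radius `r` are all toy assumptions invented for this sketch, and there are no secondary bounces; note that `r` is exactly the kind of tuning knob these methods need.

```python
import random, math

random.seed(1)

# Pass 1: photon shooting. A point light at (0.5, 1.0) shines on a 1-D diffuse
# "floor" y = 0 covering x in [0, 1]; a blocker spans x in [0.7, 0.9] at y = 0.5.
# Each photon's landing position on the floor is stored in a flat photon map;
# photons that hit the blocker are simply absorbed in this toy version.
LIGHT_X, LIGHT_H = 0.5, 1.0
N_PHOTONS = 20_000
photon_map = []
for _ in range(N_PHOTONS):
    theta = random.uniform(-math.pi / 3, math.pi / 3)   # random emission direction
    hit_x = LIGHT_X + LIGHT_H * math.tan(theta)         # where the ray meets y = 0
    mid_x = (LIGHT_X + hit_x) / 2                       # the ray's x at y = 0.5
    if 0.0 <= hit_x <= 1.0 and not (0.7 <= mid_x <= 0.9):
        photon_map.append(hit_x)

# Pass 2: gathering. An eye ray hitting the floor at x estimates irradiance
# from the density of stored photons within radius r (the tuning knob).
def irradiance(x, r=0.02):
    return sum(1 for p in photon_map if abs(p - x) <= r) / (N_PHOTONS * 2 * r)
```

The floor behind the blocker collects no photons, so the gather step finds a shadow there for free; a caustic would show up the same way, as a region where many photons pile up.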

Notice how much tuning and calibration is needed.

What creates the caustics?

## Participating Media

What is fog?

• Lots of little water droplets
• Light gets scattered

What is beer?

• Lots of little colour centres
• Light gets absorbed

What they have in common is

• The farther light goes, the more likely it is to get scattered or absorbed.
• The property is described by Beer's Law (named after August Beer, no relation)
• I(x) ~ exp( -k(λ) x )
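A tiny numeric check of Beer's Law (the coefficient and path-length values are arbitrary):

```python
import math

# Beer's Law: intensity surviving a path of length x through an absorbing
# medium with extinction coefficient k (which depends on the wavelength).
def transmitted(i0, k, x):
    return i0 * math.exp(-k * x)

# Doubling the path length squares the surviving fraction:
half = transmitted(1.0, 2.0, 0.5)   # exp(-1) ~ 0.368
full = transmitted(1.0, 2.0, 1.0)   # exp(-2) ~ 0.135
```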

What happens to the light that doesn't make it through?

# Shadows

Can we make shadows fast enough to use with OpenGL?

Yes. The methods, in increasing order of cost.

• Draw a dark area where the shadow lies, using alpha blending unless you are trying to get the `deep space' look
• Easy for simple objects onto simple objects, ...

Notice that we know a lot about how to project.

• Project as if each light is an eye
• Everything behind a point that appears in the virtual frame buffer is in shadow
• Store the distance to each point in the z-buffer
• Project towards the eye

For each point that is visible

• Transform the point to the light's coordinate frame
• Is the distance stored in the light's virtual z-buffer less than the distance from the point to the light?
• If `yes' then the point is shadowed from that light
• If `no' then it is illuminated by that light
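The test above can be sketched in miniature. Everything here is a toy assumption: light space is reduced to 1-D with eight "pixels", the z-buffer contents are invented, and the `bias` term (a standard trick, not from the notes) guards against a point shadowing itself through depth round-off.

```python
# Pass 1: the light's virtual z-buffer -- nearest distance along each of
# 8 light-space "pixels". Hypothetical scene: an occluder fills bins 2-4
# at distance 3.0; everywhere else the light sees the floor at 10.0.
light_zbuffer = [10.0, 10.0, 3.0, 3.0, 3.0, 10.0, 10.0, 10.0]

def in_shadow(light_bin, dist_to_light, bias=1e-3):
    # Pass 2: a visible point, transformed into the light's frame, is
    # shadowed iff something in the z-buffer lies closer to the light.
    return light_zbuffer[light_bin] < dist_to_light - bias

# The floor behind the occluder (bin 3, distance 10.0) is shadowed;
# the occluder's own lit face (bin 3, distance 3.0) is not.
```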

How does this interact with scan conversion?

What if the light is inside the view frustum?