CS488 - Introduction to Computer Graphics - Lecture 16
Comments and Questions
- Mid-term
- Project proposals
Ray Tracing Petering Out
Anti-aliasing
This is a topic close to image processing, but
- we don't want to lose information while we are scan converting
For example, scan converting polygons, using the a-buffer
- Sort the polygons back to front
- Start with a black pixel
- A polygon covers part of it.
- What do we do?
- naive: write the pixel
- slightly less naive: if it covers more than half, write the
pixel
- OpenGL: calculate the coverage, then blend the pixel, using the
alpha-buffer
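As a sketch of the coverage idea (the function names are illustrative, not actual OpenGL calls): a polygon covering a fraction of the pixel is blended with whatever colour is already stored, instead of overwriting it.

```python
# Sketch of coverage-based blending for one pixel. A polygon covering
# fraction `coverage` of the pixel is blended with the stored colour.

def blend_pixel(stored, poly_colour, coverage):
    """Blend poly_colour over stored, weighted by fractional coverage."""
    return tuple(coverage * p + (1.0 - coverage) * s
                 for p, s in zip(poly_colour, stored))

# The "slightly less naive" rule, for contrast: overwrite only when the
# polygon covers more than half the pixel.
def half_cover_pixel(stored, poly_colour, coverage):
    return poly_colour if coverage > 0.5 else stored
```

For example, a white polygon covering a quarter of a black pixel blends to a quarter grey, where the half-coverage rule would leave the pixel black.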
Two different, but linked, types of artifacts
- Spatial (or temporal) frequency aliasing,
- image features appear at inappropriate sizes
- Reconstruction aliasing
- totally new features, like jaggedness, appear.
Exact solutions are simple in principle
- Remove high spatial frequencies by filtering
- Fourier transform in image space: remember that you need to keep
both amplitude and phase.
- Filter
- Inverse transform in image space
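As a minimal sketch of the transform-filter-inverse pipeline, here is a direct DFT low-pass on a 1-D signal; an image works the same way per row and column, and a real system would use an FFT library. The complex coefficients carry both amplitude and phase, as required.

```python
import cmath

# Transform - filter - inverse transform, on a 1-D signal. The complex
# DFT coefficients keep both amplitude and phase.

def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs)) for k in range(n)]

def idft(cs):
    n = len(cs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * i / n)
                for k, c in enumerate(cs)).real / n for i in range(n)]

def lowpass(xs, cutoff):
    """Zero every frequency bin above `cutoff` (and its mirror image)."""
    cs = dft(xs)
    n = len(cs)
    kept = [c if min(k, n - k) <= cutoff else 0.0
            for k, c in enumerate(cs)]
    return idft(kept)
```

A signal made only of low frequencies passes through unchanged; any high-frequency component is removed before resampling can alias it.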
Filtering is the tricky part.
- Use a sampling filter that is the inverse of the reconstruction filter
- For the display to be used find the pixel shape
- Construct a sampling filter appropriate for the pixel shape
- Do ray-tracing calculating over a weighted area
Finding the pixel shape is the hard part.
In practice
- Beam tracing
- Super sampling
- Stochastic sampling
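Super sampling and stochastic sampling are often combined as jittered sampling: divide the pixel into a grid and cast one ray through a random point in each cell. A sketch, with `trace(x, y)` standing in for the ray tracer's per-sample colour function:

```python
import random

# Jittered super-sampling for one pixel: an s x s grid of cells, one ray
# through a random point in each cell, averaged.

def jittered_pixel(px, py, s, trace, rng=random.Random(0)):
    total = 0.0
    for i in range(s):
        for j in range(s):
            # random offset within cell (i, j) of pixel (px, py)
            x = px + (i + rng.random()) / s
            y = py + (j + rng.random()) / s
            total += trace(x, y)
    return total / (s * s)
```

The jitter trades the regular aliasing patterns of a fixed grid for less objectionable noise.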
Distribution Ray Tracing
What's wrong with
- Beam tracing
- Super sampling
- Stochastic sampling
None of these explicitly tries to put the work into the places that make the
most difference.
Trying to do so, we arrive at heuristics like
- If <xxx> is important, then put extra rays into <yyy>.
Here are a few such heuristics
Distribute carefully over
- Reflection directions <==>
- you want better highlights.
- Why would you want better highlights?
- How would you do this?
- Area lights <==>
- you want soft shadows
- Why would you want soft shadows?
- How would you get them?
- Aperture <==>
- you want depth of focus effects
- Why would you want depth of focus effects?
- How would you get them?
- Time <==>
- you want motion blur.
- Why would you want motion blur?
- How would you get it?
- Adaptive <==>
- Render the image w/o antialiasing
- Use an edge finding algorithm to collect the pixels where the
colour changes abruptly
- Re-render those pixels using antialiasing
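For the area-light heuristic above, soft shadows come from distributing shadow rays over the light's surface. A sketch, assuming a square light in the xy-plane and with `occluded(p, q)` standing in for the tracer's shadow test between surface point p and light sample q:

```python
import random

# Soft shadows by distributing shadow rays over an area light: the
# fraction of unoccluded samples approximates the fraction of the
# light visible from p, giving a penumbra instead of a hard edge.

def soft_shadow(p, centre, half, occluded, n=64, rng=random.Random(1)):
    """Return the unoccluded fraction of the area light seen from p."""
    visible = 0
    for _ in range(n):
        q = (centre[0] + rng.uniform(-half, half),
             centre[1] + rng.uniform(-half, half),
             centre[2])
        if not occluded(p, q):
            visible += 1
    return visible / n
```

Points that see the whole light get 1.0, fully shadowed points get 0.0, and points in the penumbra get something in between.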
All these techniques, except the last, are inordinately expensive!
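A sketch of the adaptive scheme, with hypothetical `render` and `supersample` callables standing in for the ray tracer's one-sample and anti-aliased per-pixel routines:

```python
# Adaptive anti-aliasing: render once with one sample per pixel, flag
# pixels whose colour differs sharply from a right or lower neighbour,
# then re-render only the flagged pixels with super-sampling.

def adaptive_aa(w, h, render, supersample, threshold=0.1):
    image = [[render(x, y) for x in range(w)] for y in range(h)]
    edges = [(x, y)
             for y in range(h) for x in range(w)
             if any(abs(image[y][x] - image[ny][nx]) > threshold
                    for nx, ny in ((x + 1, y), (x, y + 1))
                    if 0 <= nx < w and 0 <= ny < h)]
    for x, y in edges:                 # redo only the flagged pixels
        image[y][x] = supersample(x, y)
    return image
```

On a smooth image almost nothing is flagged, so the expensive super-sampling runs only along the edges where jaggies would appear.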
Lighting
The caustic project.
What is a caustic?
Bidirectional Ray Tracing
Recursive ray tracing makes a tree with a big fan-out: Cost ~ n^d,
where d is the depth and n is the fan-out.
- n is big, so it's worth getting d down.
Bright idea
- Trace out from eye
- Trace out from sources of illumination
- Match the rays in the centre ... somehow!
- The resulting double cone gets the same result at Cost ~ n^(d/2)
from each end, which is surely worth it.
The problem is the third step: how do you match the rays? The current state
of the art is photon mapping. Here's how it works.
From every light source
- send out rays in 'randomly' chosen directions
- For each ray
- Follow it until it hits a surface.
- If the surface is reflective
- Send out a ray in the reflection direction
- If the surface is not reflective
- Accumulate a pool of illumination
- Send out a ray in a randomly chosen direction
- Continue following rays until ...
For each pixel
- Cast a ray into the scene
- When it hits a surface, use the light you find accumulated there for
the illumination in your lighting model
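A much-simplified flatland sketch of the two passes. A real photon map stores photons in a kd-tree and handles reflection and colour; here photons go into a flat list, and `hit_surface(origin, dir)` stands in for tracing a photon to its first diffuse hit.

```python
import math, random

# Pass 1: shoot photons from the light, deposit power where they land.
def emit_photons(light_pos, power, n, hit_surface, rng=random.Random(2)):
    photons = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)       # random direction
        p = hit_surface(light_pos, (math.cos(theta), math.sin(theta)))
        if p is not None:
            photons.append((p, power / n))            # pool of illumination
    return photons

# Pass 2: at a shading point, estimate irradiance from nearby photons.
def gather(photons, x, radius):
    total = sum(pw for p, pw in photons
                if math.dist(p, x) <= radius)
    return total / (math.pi * radius ** 2)
```

The gather radius and the photon count are exactly the sort of tuning and calibration knobs mentioned below: too few photons or too small a radius gives noise, too large a radius blurs the caustics away.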
Notice the tuning and calibration that are needed.
What creates the caustics?
Participating Media
Shadows
Radiosity
Texture Mapping
- Basic
- Start with a 2D image: pixellated or procedural
- Map 2D image onto primitive using a 2D affine transformation
- Simple if the surface of the primitive is flat
- otherwise, ...
- Texture pixels normally do not match display pixels, so some
image processing may be needed.
- Backwards map intersection point with ray into the image to get the
surface properties
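A sketch of the backwards map for a flat primitive: the intersection point's (u, v) coordinates in [0, 1]^2 index into the texture image, with bilinear filtering as the bit of image processing needed because texture pixels rarely line up with display pixels. `tex` is a row-major grid of scalar texels.

```python
# Bilinear texture lookup: (u, v) in [0, 1]^2 selects a continuous
# position in the texel grid, and the four surrounding texels are
# blended by their fractional distances.

def sample_texture(tex, u, v):
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)          # continuous texel coords
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling halfway between a black texel and a white texel returns mid-grey rather than snapping to one or the other.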
- Normal Mapping (Bump mapping)
- Start with a difference surface, defined with respect to the
surface
- Calculate the normals to the difference surface and map them onto
the surface of the primitive
- Use the mapped surface models for lighting
- No occlusion, shadows are wrong, silhouettes are wrong, nobody
notices!
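A sketch of the normal perturbation, using the standard approximation for a height-field difference surface h(u, v): the shading normal is tilted by the height slopes along the surface tangents, while the geometry itself is untouched, which is exactly why occlusion and silhouettes stay wrong.

```python
# Bump mapping: perturb the unit normal by the difference-surface slopes
# (approximately N' = N - h_u * T_u - h_v * T_v), then renormalize.
# Only the lighting model sees the perturbed normal.

def bumped_normal(normal, dh_du, dh_dv, tangent_u, tangent_v):
    """All arguments except the slopes are plain 3-vectors (lists)."""
    n = [normal[i] - dh_du * tangent_u[i] - dh_dv * tangent_v[i]
         for i in range(3)]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]
```

A flat difference surface (zero slopes) leaves the normal alone; a slope of 1 along u tilts it 45 degrees.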
- Solid Textures
- Solution to mapping texture onto curved surfaces
- Usually procedural
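A sketch of a procedural solid texture, the classic 3-D checkerboard: colour is a function of the 3-D intersection point itself, so nothing has to be unwrapped onto the curved surface.

```python
import math

# Procedural solid texture: a 3-D checkerboard. The parity of the cell
# containing point p picks one of two colours, for any surface shape.

def checker3d(p, size=1.0):
    """Return 0 or 1 from a 3-D checkerboard with the given cell size."""
    return (math.floor(p[0] / size)
            + math.floor(p[1] / size)
            + math.floor(p[2] / size)) % 2
```

Evaluating this at each ray-surface intersection carves the checker pattern out of the object, as if it had been machined from a solid block of the material.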