CS488 - Introduction to Computer Graphics - Lecture 19
Comments and Questions
Nearby Light Sources
What is used in practice
- 1/(a + br + cr^2)
- adjust the constants a, b, c to fit
- In the limits: r -> 0, r -> infinity, r in between? (see the sketch at the end of this section)
What it should be in theory
- Points
- falls off as 1/r^2
- emits a constant amount
- Lines
- falls off as 1/r, but only if the length is infinite
- emits in proportion to its length l
- Areas
- doesn't fall off, but only if the area is infinite
- emits in proportion to its area l^2
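A minimal sketch, in Python, comparing the practical attenuation term with the theoretical point-source falloff; the constants a, b, c below are hypothetical values chosen only for illustration, not recommended settings. Near r = 0 the practical term saturates at 1/a instead of blowing up, and for large r it behaves like 1/(c r^2).

# Sketch: practical light attenuation vs. ideal point-source falloff.
# The constants a, b, c are illustrative, not recommended values.

def practical_attenuation(r, a=1.0, b=0.1, c=0.01):
    """1/(a + b*r + c*r^2): bounded near r = 0, ~1/(c*r^2) for large r."""
    return 1.0 / (a + b * r + c * r * r)

def point_source_falloff(r):
    """Theoretical 1/r^2 falloff for a point source; blows up as r -> 0."""
    return 1.0 / (r * r)

if __name__ == "__main__":
    for r in (0.01, 1.0, 10.0, 100.0):
        print(r, practical_attenuation(r), point_source_falloff(r))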
Global Illumination
Radiosity
Calculating all the light. Why bother?
- Think about what you need for a walkthrough
Each small bit y of surface in the scene
- receives some amount of light (possibly none)
- from other bits of surface, x: \sum_x (light emitted in the direction of this bit) * (fraction not occluded)
- B(y, <y-x>, \lambda) = \sum_x ( I(x, <y-x>, \lambda) + L(x, <y-x>, \lambda) ) * F(x, y)
- B - light received
- I - light emitted
- L - light re-emitted
- emits some amount of light (possibly none)
- re-emits some amount of light (possibly none)
- \sum_directions (received light from ...) * (BRDF to ...)
- L(x, <y-x>, \lambda) = \sum_<z> B(x, <z>, \lambda) * R(<z>, <y-x>, \lambda)
Solve the resulting equations.
- F(x, y) dx dy is known from the geometry
- I(x, <z>, \lambda) and R(<z-in>, <z-out>, \lambda)
are surface properties in the model
- B(x, <z>, \lambda) and L(x, <z>, \lambda) are unknown.
- Substitute the equation for B into the equation for L.
- The result is a set of linear equations that can be solved for L.
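A minimal sketch of solving the resulting system with numpy, simplified by dropping the direction and wavelength arguments (the purely diffuse, single-wavelength case), so the substituted equation reduces to L = R * (F applied to (I + L)). The patch count, form factors, and reflectances below are made-up placeholders, not values computed from real geometry.

import numpy as np

# Sketch: diffuse, single-wavelength radiosity.
# Dropping direction and wavelength, the substituted equation becomes
#   L = R * (F @ (I_emit + L)),  i.e.  (Id - diag(R) F) L = diag(R) F I_emit
# F[i, j] plays the role of the form factor from patch j onto patch i.

n = 4                                    # number of surface patches (illustrative)
rng = np.random.default_rng(0)
F = rng.random((n, n))                   # placeholder form factors (real ones come from geometry)
np.fill_diagonal(F, 0.0)
F /= F.sum(axis=1, keepdims=True)        # rows sum to 1: a crude stand-in for a closed scene
R = np.full(n, 0.5)                      # reflectances: surface properties from the model
I_emit = np.array([1.0, 0.0, 0.0, 0.0])  # only patch 0 emits light

A = np.eye(n) - R[:, None] * F           # (Id - diag(R) F)
b = R * (F @ I_emit)                     # diag(R) F I_emit
L = np.linalg.solve(A, b)                # re-emitted light, one value per patch

B = F @ (I_emit + L)                     # received light follows directly once L is known
print("L =", L)
print("B =", B)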
Once L is known,
- B is easily calculated.
- The light field is easily calculated at point P
- LF(P, <z>, \lambda) = \sum_x L(x, <P-x>, \lambda) * \delta(<z>, <P-x>)
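A minimal sketch of this last step, under the assumption that each bit of surface is represented by a point patch with a known re-emitted value; the delta is approximated by an angular tolerance, and the nearest matching patch stands in, crudely, for the visible one. All names and tolerances are illustrative.

import numpy as np

# Sketch: evaluate LF(P, <z>) from per-patch re-emitted values L(x).
# <P-x> is read as the unit direction from patch x toward the point P;
# the delta is approximated by an angular tolerance, and the nearest
# matching patch is taken as a crude stand-in for the visible one.

def light_field_at(P, z, patch_positions, patch_L, cos_tol=0.999):
    z = z / np.linalg.norm(z)
    best_value, best_dist = 0.0, np.inf
    for x, L_x in zip(patch_positions, patch_L):
        to_P = P - x
        dist = np.linalg.norm(to_P)
        if dist > 0.0 and np.dot(to_P / dist, z) > cos_tol and dist < best_dist:
            best_value, best_dist = L_x, dist
    return best_value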
What's wrong with this picture?
Light Fields
This is a way to avoid even the need to re-project.
Let's turn our attention away from the surfaces of objects and onto the
volume between objects
At every point in this volume there is a light density
- for every possible direction
- for every visible wavelength
This quantity L(P, \theta, \lambda) is the light field. If we knew it we could
- evaluate it at the eye position
- at the angle heading for each pixel
- to get RGB for that pixel
The evaluation is, in fact, just a projective transformation of the light
field.
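A minimal sketch of that evaluation, assuming some callable light_field(P, direction) returning an RGB triple is available (for instance, one built along the lines of the sketch above); the pinhole camera and its parameters are hypothetical.

import numpy as np

def render_from_light_field(light_field, eye, forward, up, fov_deg, width, height):
    """Evaluate the light field at the eye, along the direction through each pixel."""
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    half = np.tan(np.radians(fov_deg) / 2.0)
    image = np.zeros((height, width, 3))
    for j in range(height):
        for i in range(width):
            # Pixel centre in normalized coordinates, then a world-space direction.
            u = (2.0 * (i + 0.5) / width - 1.0) * half * width / height
            v = (1.0 - 2.0 * (j + 0.5) / height) * half
            d = forward + u * right + v * true_up
            image[j, i] = light_field(eye, d / np.linalg.norm(d))   # RGB for that pixel
    return image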
How do we get the light field?
- by measurement
- by calculation
How is the light field used in 2007?
But tomorrow!!
Plenoptic Function
Think about what the viewer can do.
- The seriously handicapped viewer can
- not move in position
- not move the direction of gaze
Ray tracing is perfect.
- The mildly handicapped viewer can
- not move in position
- gaze in any direction
Ray trace onto a sphere surrounding the viewer and reproject from the
sphere to a view plane whenever the direction of gaze changes (see the
sketch at the end of this section).
- The unhandicapped viewer can
- move around
- gaze in any direction
Ray trace onto a sphere at each accessible point.
The third is the light field, also called the plenoptic function, and it
has to be recalculated every time something in the scene moves.
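A minimal sketch of the reprojection used for the mildly handicapped viewer, assuming the one-time ray trace onto the sphere has been stored as a latitude-longitude image; that storage format, and the pixel_direction helper, are assumptions made for illustration.

import numpy as np

def sample_sphere_image(sphere_img, d):
    """Look up the pre-traced sphere (stored as a lat-long image) in world direction d."""
    d = d / np.linalg.norm(d)
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))           # angle from the +z pole
    phi = np.arctan2(d[1], d[0]) % (2.0 * np.pi)          # azimuth in [0, 2*pi)
    h, w, _ = sphere_img.shape
    row = min(int(theta / np.pi * h), h - 1)
    col = min(int(phi / (2.0 * np.pi) * w), w - 1)
    return sphere_img[row, col]

def reproject_to_view_plane(sphere_img, pixel_direction, width, height):
    """Re-sample the sphere whenever the gaze changes; pixel_direction(i, j) gives the new view ray."""
    image = np.zeros((height, width, 3))
    for j in range(height):
        for i in range(width):
            image[j, i] = sample_sphere_image(sphere_img, pixel_direction(i, j))
    return image

Here pixel_direction(i, j) would be built exactly like the per-pixel directions in the light-field rendering sketch above.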
'Backdrop' Applications
Imagine making a game
- There is an area accessible to the players, and
- there is an area inaccessible to the players.
An easy backdrop
- Surround the accessible volume with a sphere (actually a hemisphere)
- Ray trace the scene outside the accessible volume onto the sphere
- Put the re-projected portion of the sphere into the frame buffer, with the
depth buffer set to infinity (see the sketch after this list)
- Where is the eye point?
- The centre of the sphere works for the mildly handicapped
viewer.
- What is missing for the unhandicapped viewer?
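A minimal sketch of the compositing step: the re-projected backdrop (for example, from the sphere lookup sketched earlier) seeds the frame buffer, the depth buffer starts at infinity, and ordinary depth-tested rendering of the accessible volume then overwrites the backdrop wherever there is real geometry. The fragment handling is deliberately simplified.

import numpy as np

def init_buffers(backdrop_rgb):
    """Frame buffer starts as the backdrop; depth buffer starts at infinity."""
    colour = backdrop_rgb.copy()
    depth = np.full(backdrop_rgb.shape[:2], np.inf)
    return colour, depth

def write_fragment(colour, depth, i, j, frag_rgb, frag_depth):
    """Ordinary depth test: scene geometry always beats the at-infinity backdrop."""
    if frag_depth < depth[j, i]:
        depth[j, i] = frag_depth
        colour[j, i] = frag_rgb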
A more difficult backdrop
Photography
Perhaps a window
Texture Mapping
- Basic
- Start with a 2D image: pixellated or procedural
- Map 2D image onto primitive using a 2D affine transformation
- Simple if the surface of the primitive is flat
- otherwise, ...
- Texture pixels normally do not match display pixels, so some
image processing may be needed.
- Backwards-map the ray's intersection point into the image to get the
surface properties (see the sketch at the end of this section)
- Normal Mapping (Bump mapping)
- Start with a difference surface, defined with respect to the surface of
the primitive
- Calculate the normals to the difference surface and map them onto
the surface of the primitive
- Use the mapped surface models for lighting
- No occlusion, shadows are wrong, silhouettes are wrong, nobody
notices!
- Solid Textures
- Solution to mapping texture onto curved surfaces
- Usually procedural
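A minimal sketch of the backward texture lookup and of the normal-mapping step, assuming the (u, v) coordinates of the ray's intersection point have already been produced by the backward map, and that the normal map stores tangent-space normals scaled into [0, 1]; the bilinear filtering is one form of the image processing mentioned above. All helper names are illustrative.

import numpy as np

def bilinear_sample(image, u, v):
    """Backward map: fetch the texel at (u, v) in [0, 1]^2 with bilinear filtering."""
    h, w = image.shape[:2]
    x = np.clip(u, 0.0, 1.0) * (w - 1)
    y = np.clip(v, 0.0, 1.0) * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot

def perturbed_normal(normal_map, u, v, tangent, bitangent, normal):
    """Normal mapping: fetch a tangent-space normal and rotate it into the surface frame."""
    n_ts = bilinear_sample(normal_map, u, v) * 2.0 - 1.0   # [0, 1] -> [-1, 1]
    n = n_ts[0] * tangent + n_ts[1] * bitangent + n_ts[2] * normal
    return n / np.linalg.norm(n)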