CS488 - Introduction to Computer Graphics - Lecture 26
Global Illumination
Comment on global illumination. If you are doing a walk-through, diffuse
illumination is view-independent, so you can calculate the illumination on
each polygon once, then re-render (re-project) the scene from different
viewpoints as the user moves around.
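A minimal sketch of the idea, with hypothetical names; it assumes diffuse,
view-independent illumination already computed per polygon:

    import numpy as np

    # Precompute once: view-independent (diffuse) illumination per polygon.
    # 'patch_radiosity' would come from a global-illumination pass.
    patch_vertices = [np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])]
    patch_radiosity = [np.array([0.8, 0.7, 0.6])]   # RGB, fixed for the walk-through

    def render(view_matrix):
        # Re-project the same lit geometry for a new viewpoint; the
        # illumination is NOT recomputed, only the projection changes.
        for verts, colour in zip(patch_vertices, patch_radiosity):
            homo = np.c_[verts, np.ones(len(verts))]    # homogeneous coords
            eye_space = (view_matrix @ homo.T).T
            print(eye_space[:, :3], colour)             # rasterize here

    # As the user walks, only view_matrix changes between frames.
    render(np.eye(4))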
Radiosity
Calculating illumination
Each small bit of surface in the scene
- receives some amount of light (possibly none)
- from other bits of surface: \sum_bits (light emitted in the
  direction of this bit) * (fraction not occluded)
- B(y, <y-x>, \lambda) = \sum_x [ I(x, <y-x>, \lambda)
  + L(x, <y-x>, \lambda) ] * F(x, y) dx dy
- emits some amount of light (possibly none)
- re-emits some amount of light (possibly none)
- \sum_directions (received light from <z>) * (BRDF from <z> to <y-x>)
- L(x, <y-x>, \lambda) = \sum_<z> B(x, <z>, \lambda) * R(<z>, <y-x>, \lambda)
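A sketch of the geometric term, using the standard point-to-point
form-factor kernel cos(theta_x) cos(theta_y) / (pi r^2); the occlusion test
is left as a stub, an assumption rather than part of this code:

    import numpy as np

    def form_factor(x, nx, y, ny, dA_y, unoccluded=1.0):
        # Differential form factor F(x, y) dA_y: the fraction of light
        # leaving the bit of surface at x that arrives at the bit at y.
        # Kernel: cos(theta_x) * cos(theta_y) / (pi * r^2), scaled by
        # the fraction of the path that is not occluded.
        d = y - x
        r2 = float(d @ d)
        d = d / np.sqrt(r2)
        cos_x = max(nx @ d, 0.0)        # angle at the emitting bit
        cos_y = max(-(ny @ d), 0.0)     # angle at the receiving bit
        return unoccluded * cos_x * cos_y / (np.pi * r2) * dA_y

    # Two parallel patches facing each other, one unit apart:
    print(form_factor(np.zeros(3), np.array([0., 0., 1.]),
                      np.array([0., 0., 1.]), np.array([0., 0., -1.]),
                      dA_y=0.01))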
Solve the resulting equations.
- F(x, y) dx dy is known from the geometry
- I(x, <z>, \lambda) and R(<z-in>, <z-out>, \lambda)
are surface properties in the model
- B(x, <z>, \lambda) and L(x, <z>, \lambda) are unknown.
- Substitute the expression for B into the equation for L.
- The result is a set of linear equations that can be solved for L
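In the diffuse (radiosity) case, at a single wavelength, the substitution
gives an n x n linear system. A toy sketch with made-up numbers; a real F
would come from the geometry as in the form-factor sketch above:

    import numpy as np

    n = 3
    F = np.full((n, n), 1.0 / n)          # toy form factors
    np.fill_diagonal(F, 0.0)
    emit = np.array([1.0, 0.0, 0.0])      # I: one patch is a light source
    rho = np.array([0.5, 0.8, 0.3])       # R: diffuse reflectance

    # Substituting B = F (emit + L) into L = rho * B gives
    #   (Id - diag(rho) F) L = diag(rho) F emit
    A = np.eye(n) - np.diag(rho) @ F
    L = np.linalg.solve(A, np.diag(rho) @ (F @ emit))
    B = F @ (emit + L)    # once L is known, B is easily calculated
    print("L =", L, "  B =", B)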
Once L is known,
- B is easily calculated.
- The light field is easily calculated at point P
- LF(P, <z>, \lambda) = \sum_x L(x, <P-x>, \lambda) * \delta(<z>, <P-x>)
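Operationally the delta function just selects the one surface point visible
from P in direction <z>, i.e. a ray cast. A sketch with stand-in scene
functions; cast_ray and L here are assumptions, not given code:

    import numpy as np

    def light_field(P, z, cast_ray, L):
        # LF(P, <z>, lambda): the delta picks out the single surface
        # point x with <P - x> = <z>, i.e. the nearest hit of a ray
        # cast from P in direction -z.
        x = cast_ray(P, -z)             # nearest surface point, or None
        if x is None:
            return np.zeros(3)          # no surface in that direction
        d = (P - x) / np.linalg.norm(P - x)
        return L(x, d)

    # Toy usage: a one-point 'scene' and a constant radiance function.
    cast = lambda origin, direction: np.zeros(3)
    radiance = lambda x, d: np.array([0.8, 0.7, 0.6])
    print(light_field(np.array([0., 0., 2.]), np.array([0., 0., 1.]),
                      cast, radiance))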
The Light Field
Plenoptic Function
Think about what the viewer can do.
- The seriously handicapped viewer can
- not move in position
- not move the direction of gaze
Ray tracing is perfect.
- The mildly handicapped viewer can
- not move in position
- gaze in any direction
Ray trace onto a sphere surrounding the viewer and reproject from the
sphere to a view plane whenever the direction of gaze changes (see the
sketch after this list).
- The unhandicapped viewer can
- move around
- gaze in any direction
Ray trace onto a sphere at each accessible point.
The third is the light field, also called the plenoptic function, and it
has to be recalculated every time something in the scene moves.
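A sketch of the re-projection for the mildly handicapped viewer, assuming
the sphere is stored as a latitude-longitude image (that parameterization
is an assumption; any sphere map would do):

    import numpy as np

    def reproject(sphere_img, gaze, width, height, fov=np.pi / 2):
        # Re-project a precomputed spherical image to a view plane for
        # a new gaze rotation: no rays are re-traced, each pixel just
        # looks up the sphere in its viewing direction.
        h, w = sphere_img.shape[:2]
        f = 0.5 * width / np.tan(fov / 2)
        out = np.zeros((height, width, 3))
        for j in range(height):
            for i in range(width):
                d = np.array([i - width / 2, j - height / 2, f])
                d = gaze @ (d / np.linalg.norm(d))
                theta = np.arccos(np.clip(d[1], -1.0, 1.0))    # latitude
                phi = np.arctan2(d[2], d[0]) % (2 * np.pi)     # longitude
                out[j, i] = sphere_img[min(int(theta / np.pi * h), h - 1),
                                       min(int(phi / (2 * np.pi) * w), w - 1)]
        return out

    # Toy usage: an 8x16 lat-long sphere image, identity gaze.
    print(reproject(np.random.rand(8, 16, 3), np.eye(3), 4, 4).shape)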
Filling Space with Light
Let's turn our attention away from the surfaces of objects and onto the
volume between objects.
At every point in this volume there is a light density
- for every possible direction
- for every visible wavelength
This quantity LF(P, <z>, \lambda) is the light field. If we knew it,
we could
- evaluate it at the eye position
- at the angle heading for each pixel
- to get RGB for that pixel
The evaluation is, in fact, just a projective transformation of the light
field.
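A sketch of that evaluation, hypothetical throughout: LF is any function
LF(P, <z>) -> RGB, for instance the light_field sketch above:

    import numpy as np

    def render_from_light_field(LF, eye, width, height, f=1.0):
        # One light-field evaluation per pixel: at the eye position,
        # in the direction heading for that pixel.
        img = np.zeros((height, width, 3))
        for j in range(height):
            for i in range(width):
                z = np.array([width / 2 - i, height / 2 - j, -f])
                img[j, i] = LF(eye, z / np.linalg.norm(z))
        return img

    # Toy usage: a light field that encodes arrival direction as colour.
    print(render_from_light_field(lambda P, z: np.abs(z), np.zeros(3), 2, 2))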
How do we get the light field?
- by measurement
- by calculation
- Radiosity is the obvious method
How is the light field used in 2009?
- routine applications for backdrops
- Think about a window in a dark room
- Light passes in only one direction
- What's wrong with treating a window like a 2D scene on the wall?
- Easy to do by texture mapping (see the sketch after this list)
- How would we get the necessary data?
- calculation
- measurement
- a remote-controlled digital camera
- still the problems of storage and reconstruction
- yesterday's excitement
But tomorrow!!
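What's wrong with the flat texture is view dependence: the colour seen
through a window point must change with viewing direction (parallax), and
a 2D texture cannot represent that. A hypothetical comparison:

    import numpy as np

    def window_as_texture(texture, u, v, view_dir):
        # Flat texture: view_dir is ignored, so there is no parallax
        # as the viewer moves about the room.
        h, w = texture.shape[:2]
        return texture[int(v * (h - 1)), int(u * (w - 1))]

    def window_as_light_field(LF, u, v, view_dir):
        # Light field: radiance through the window point (u, v) depends
        # on direction, so the view through the window shifts correctly.
        return LF(u, v, view_dir)

    tex = np.random.rand(4, 4, 3)
    lf = lambda u, v, d: np.abs(d)          # stand-in light field
    for d in (np.array([0., 0., 1.]), np.array([0.6, 0., 0.8])):
        print(window_as_texture(tex, 0.5, 0.5, d),
              window_as_light_field(lf, 0.5, 0.5, d))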
`Backdrop' Applications
Imagine making a game or a movie
- There is an area accessible to the players (actors, camera), and
- there is an area inaccessible to the players (actors, camera).
An easy backdrop
- Surround the accessible volume with a sphere (actually a hemisphere)
- Ray trace the scene outside the accessible volume onto the sphere
- Put the re-projected portion of the sphere into the frame buffer, with
the depth buffer set to infinity
- Where is the eye point?
- The centre of the sphere works for the mildly handicapped
viewer.
- What is missing for the unhandicapped viewer?
- How do you make certain that artifacts are not visible?
- For a normal backdrop, three volumes
- The smallest one for user position
- A surrounding one that is 3D-modelled
- The remainder, which is done as a normal backdrop, and moves
with the user
- For a plenoptic backdrop, two volumes
- One for user motion
- The remainder, a plenoptic backdrop, which doesn't move with the
user
- Sizes are determined perceptually (see the sketch after this list)
- threshold of perceptibility of motion parallax
- threshold of perceptibility for object rotation
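A worked version of the sizing argument; the threshold value below is an
assumption for illustration. Lateral motion of b metres shifts an object at
distance d by roughly b/d radians relative to infinity, so the backdrop is
safe once b/d falls below the parallax threshold:

    import numpy as np

    def min_backdrop_distance(motion_radius_m, threshold_rad):
        # Distance beyond which motion parallax from user movement is
        # imperceptible, so the backdrop need not respond to position.
        return motion_radius_m / threshold_rad

    b = 1.0                        # radius of the accessible volume (m)
    theta = np.deg2rad(0.1)        # assumed parallax threshold
    print(f"backdrop can start at about {min_backdrop_distance(b, theta):.0f} m")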