CS488 - Introduction to Computer Graphics - Lecture 9
Public Service Announcements
- Assignment 2 due 4 June, one week from today.
Perspective Projection
Mouse Interface
Model-View-Controller
Model
The application logic
- does not interact directly with the user
- receives input from controller
- updates its data structures
- provides output to view
View
Presentation of the model to the user
- receives a scene from the model
- shared data synchronized using handles
- puts the scene onto the output devices
- visual, audio, tactile, etc
- translates from world to device coordinates
Controller
Provision of (more or less) interpreted input to the model
- gets raw input from user actions on input devices
- i.e., in screen coordinates
- has to communicate the input in language the model understands
- e.g., gets screen coordinates from input device
sends the `name' of a scene element to the model
Complication
Usually the view knows where in screen coordinates it put each scene
element
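The split above can be sketched in a few lines. The class and method names, and the rectangle-based layout, are illustrative inventions, not from any particular toolkit; note how the controller resolves the complication by asking the view which element sits at the clicked screen coordinates.

```python
# Minimal MVC sketch; names and the rectangle layout are made up.
class Model:
    def __init__(self):
        self.selected = set()
    def select(self, name):          # input arrives from the controller
        self.selected.add(name)

class View:
    """Knows where, in screen coordinates, it put each scene element."""
    def __init__(self):
        self.layout = {}             # name -> (x, y, w, h) screen rectangle
    def place(self, name, rect):
        self.layout[name] = rect
    def element_at(self, x, y):
        for name, (rx, ry, w, h) in self.layout.items():
            if rx <= x < rx + w and ry <= y < ry + h:
                return name
        return None

class Controller:
    """Translates raw screen coordinates into the model's language."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def click(self, x, y):
        name = self.view.element_at(x, y)   # screen coords -> element name
        if name is not None:
            self.model.select(name)

m, v = Model(), View()
c = Controller(m, v)
v.place("tree1", (10, 10, 50, 50))
c.click(20, 20)                      # selects "tree1" in the model
```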
Picking
For assignment 3 you need to be able to select graphical objects with the
mouse
The simple principle
- Mouse gives (x, y).
- Render the pixel at (x, y), keeping track of the polygon it comes from.
- Associate the polygon with an object.
For the assignment let GL do it for you.
- Associate names (unsigned integers) with objects that are drawn
- Get back a hit stack, with primitives `near' where the mouse
clicked
- It's up to you how you handle the hit stack.
See the notes.
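Handling the hit stack can be sketched as plain data. The record layout below, a depth range plus a stack of names, is an assumption in the spirit of GL selection-mode hits, not the exact format; one common policy is to pick the hit nearest the viewer.

```python
# Hypothetical hit records: (z_min, z_max, name_stack).  The field
# layout is an assumption, not GL's actual hit-record encoding.
def nearest_hit(hits):
    """Return the name stack of the hit closest to the viewer."""
    if not hits:
        return None
    return min(hits, key=lambda h: h[0])[2]

hits = [
    (0.80, 0.85, [1, 4]),   # group 1, object 4
    (0.30, 0.40, [1, 7]),   # group 1, object 7 (nearer to the eye)
]
```

How you use the name stack (nearest hit, topmost name, whole path) is, as the notes say, up to you.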
Rotation
For assignment 3 you need to be able to rotate with the mouse
The virtual trackball
- vertical motion rotates about x
- horizontal motion rotates about y
- circular motion outside the trackball rotates about z
You need to modify the sample code.
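The usual construction projects the mouse point onto a sphere and rotates about the axis perpendicular to two successive projected points. A sketch, assuming that construction; `to_sphere` and `drag_rotation` are made-up names, not from the sample code:

```python
import math

def to_sphere(x, y, r):
    """Map a mouse point (relative to the trackball centre) onto a
    sphere of radius r; points outside the trackball land on its rim."""
    d2 = x * x + y * y
    if d2 <= r * r:
        return (x, y, math.sqrt(r * r - d2))
    s = r / math.sqrt(d2)
    return (x * s, y * s, 0.0)

def drag_rotation(p0, p1, r):
    """Unit axis and angle (radians) for a drag from p0 to p1."""
    a = to_sphere(p0[0], p0[1], r)
    b = to_sphere(p1[0], p1[1], r)
    axis = (a[1] * b[2] - a[2] * b[1],      # cross product a x b
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    n = math.sqrt(sum(c * c for c in axis))
    if n == 0.0:
        return None, 0.0                     # no motion, no rotation
    dot = sum(x * y for x, y in zip(a, b)) / (r * r)
    angle = math.acos(max(-1.0, min(1.0, dot)))
    return tuple(c / n for c in axis), angle
```

A purely horizontal drag yields a rotation about y, and a vertical drag one about x, matching the behaviour listed above.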
Hierarchical Models
What we have
How to render any polygon anywhere
- Put the polygon where you want it to be in world coordinates
- Transform to view coordinates
- Perspective transform
- Clip in normalized device coordinates.
- Scan convert each polygon in any order, using the z-buffer to remove
  occluded pixels
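The last step is why order does not matter: at each pixel the z-buffer keeps only the nearest fragment. A toy sketch of one pixel, with made-up depths and colours:

```python
# Toy z-buffer for a single pixel: fragments arrive in arbitrary order
# and the nearest one wins.  Depths and colours are illustrative only.
def zbuffer_resolve(fragments, far=1.0):
    depth, colour = far, None
    for z, c in fragments:           # (depth, colour) pairs, any order
        if z < depth:                # nearer than what is stored so far
            depth, colour = z, c
    return colour

# The blue polygon is drawn last but lies behind the red one.
pixel = [(0.4, "red"), (0.7, "blue")]
```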
What we want to have
Objects made of polygons that we can feed into the rendering pipeline.
- Making the objects is called modelling.
- Here we discuss the data structures and algorithms associated with
hierarchical modelling
- Hierarchical is a synonym for `divide and conquer'.
Argument by example
- A Forest consists of many trees, each at a different location, scale
and orientation
- Use only two or three different trees, each defined with respect to
a tree-centred coordinate system (TCS)
- With respect to the world coordinate system (WCS) define, for each
  tree, a matrix that expresses the relationship between the TCS and
  the WCS.
- Each Tree has a Trunk (in a TkCS) and a Root (in an RCS)
- Trunks and roots come in various versions
- Choose one trunk and one root, making sure that they are
compatible.
- Define matrices that define RCS and TkCS with respect to the
TCS
- The matrices may include scaling.
- Each Trunk has several branches, each with a BCS
- Branches come in various versions
- Choose branches of compatible versions.
- Define a BCS/TkCS matrix for each.
- The matrices may include scaling.
- Each branch has many twigs, each having a TgCS
- Choose a few twigs from the various versions of twigs
- Define a TgCS/BCS matrix for each
- The matrices may include scaling
- Each twig has several leaves, LCS
- Leaves come in a variety of shapes
- Choose some leaves that have compatible shapes.
- Define the LCS/TgCS of each
Aside. It seems to me that we have, rather glibly, required a lot of
choosing and defining.
- We defined one polygon mesh for each version of a tree element: 5(types
of element) x 10(versions per type) = 50 meshes to model.
- We choose a set of each type from the versions: 10(elements per set) x
10(choices per element) = 100
- We define a matrix for each element: 1(trunk) x 1(root) x 10(branches)
x 10(twigs) x 10(leaves) = 1000
This should seem like a lot of work, because it is. Remember this when I
tell you that your project has too much modelling.
This seems like work a computer could do better than a human. Much
research has been done, with -- usually -- disappointing results. We have not
yet found good algorithms that put in the right amounts of randomness and
order. The human visual system seems to be very finely tuned to expect a
correct balance of the two.
How do we render a leaf?
- Transform points from LCS to TgCS (MLTg)
- Transform points from TgCS to BCS (MTgB)
- Transform points from BCS to TkCS (MBTk)
- Transform points from TkCS to TCS (MTkT)
- Transform points from TCS to WCS (MTW)
- Transform points to view coordinates (MWV)
- Perspective Transform points to the image plane (MVIp)
Make up the matrix M = MVIp * MWV * MTW * MTkT * MBTk * MTgB * MLTg1
- and use it on each point (probably polygon vertex) in the leaf
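The idea of composing the whole chain once and reusing M on every vertex can be sketched as follows. The 4x4 matrices here are stand-in translations, not real modelling or perspective transforms:

```python
# Compose a chain of 4x4 matrices once, then apply the product to
# homogeneous points.  The translations are stand-ins for the real
# M_VIp ... M_LTg chain.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(M, p):
    return tuple(sum(M[i][k] * p[k] for k in range(4)) for i in range(4))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

chain = [translation(1, 0, 0),   # stand-in for the outermost matrix
         translation(0, 2, 0),
         translation(0, 0, 3)]   # stand-in for the leaf-to-twig matrix
M = chain[0]
for T in chain[1:]:
    M = matmul(M, T)             # compose once ...

vertex = (0, 0, 0, 1)            # ... then reuse M on every leaf vertex
```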
How do we render the second leaf?
- Multiply again: M2 = MVIp * MWV * MTW * MTkT * MBTk * MTgB * MLTg2
Too much work.
- Do less work: M2 = M * (MLTg1)^-1 * MLTg2
Accumulating round-off error.
- Keep around
- MVIp * MWV * MTW * MTkT * MBTk * MTgB,
- MVIp * MWV * MTW * MTkT * MBTk,
- MVIp * MWV * MTW * MTkT,
- MVIp * MWV * MTW,
- MVIp * MWV
We like this approach because we can do it with a stack.
Scene Graph
Render a scene using a matrix stack.
Render a forest
proc forest( )
unitMatrix( )
multMatrix(MVIp)
multMatrix(MWV)
pushMatrix( )
multMatrix(MTW1)
tree1( )
popMatrix( )
pushMatrix( )
multMatrix(MTW2)
tree2( )
popMatrix( )
etc.
Render trees
proc tree1( )
pushMatrix( )
multMatrix(MTTk11)
trunk1( )
popMatrix( )
pushMatrix( )
multMatrix(MTR12)
root2( )
popMatrix( )
proc tree2( )
pushMatrix( )
multMatrix(MTTk21)
trunk1( )
popMatrix( )
pushMatrix( )
multMatrix(MTR22)
root2( )
...
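The push/mult/pop discipline used in these procedures can be sketched with an explicit stack. For brevity this sketch uses plain numbers in place of 4x4 matrices, so composition is ordinary multiplication; the interface names follow the pseudocode above:

```python
# Minimal matrix stack; scalars stand in for 4x4 matrices.
class MatrixStack:
    def __init__(self):
        self.stack = [1.0]                  # unitMatrix: identity on top
    def top(self):
        return self.stack[-1]
    def pushMatrix(self):
        self.stack.append(self.stack[-1])   # duplicate the top
    def popMatrix(self):
        self.stack.pop()
    def multMatrix(self, m):
        self.stack[-1] *= m                 # compose onto the top

ms = MatrixStack()
ms.multMatrix(2.0)                   # stand-in for MVIp * MWV
ms.pushMatrix()
ms.multMatrix(3.0)                   # stand-in for MTW1; draw tree1 here
tree1_matrix = ms.top()
ms.popMatrix()                       # back to the forest level for tree2
```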
We don't always want to write a program, so we encapsulate the program as
data.
Traverse a DAG
traverse( root )
proc traverse( node ) {
    pushMatrix( )
    multMatrix( node.transform )
    if ( drawable( node ) ) {
        draw( node )
    }
    for each child of node {
        traverse( child )
    }
    popMatrix( )
    return
}
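A runnable sketch of this traversal, with the matrix accumulation made explicit. `Node` and the scalar "matrices" are toy stand-ins; passing the accumulated matrix down the recursion plays the role of the push, and returning plays the role of the pop:

```python
# Toy scene-graph traversal; scalars stand in for 4x4 matrices.
class Node:
    def __init__(self, transform=1.0, drawable=False, name=""):
        self.transform, self.drawable, self.name = transform, drawable, name
        self.children = []

drawn = []

def traverse(node, matrix=1.0):
    matrix *= node.transform         # "push": compose this node's transform
    if node.drawable:
        drawn.append((node.name, matrix))
    for child in node.children:
        traverse(child, matrix)      # each child starts from our matrix
    # returning implicitly "pops": the caller's matrix is unchanged

tree = Node(2.0)                     # interior transform node
trunk = Node(3.0, drawable=True, name="trunk")
branch = Node(5.0, drawable=True, name="branch")
tree.children.append(trunk)
trunk.children.append(branch)
traverse(tree)
```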
Build a DAG
- This is done by a modelling tool, or program
scene = gr.transform( )
tree1 = gr.transform( )
gr.add_child( tree1, scene )
gr.set_transform( tree1, gr.translation(...)*gr.rotation(...)*...)
...
root = gr.transform( )
rootshape = gr.cylinder( )
gr.add_child( root, tree1 )
gr.set_transform( root, gr.scaling(...)*... )
gr.add_child( rootshape, root )
gr.set_material( rootshape, rough_bark )
trunk = gr.transform( )
gr.add_child( trunk, tree1 )
gr.set_transform( trunk, gr.scaling(...)*... )
trunkshape = gr.cylinder( )
gr.add_child( trunkshape, trunk )
gr.set_material( trunkshape, rough_bark )
// The code below is repeated for each branch
branch = gr.transform( )
gr.add_child( branch, trunk )
gr.set_transform( branch, gr... )
branchshape = gr.cylinder( )
gr.add_child( branchshape, branch )
gr.set_material( branchshape, medium_bark )
twig = gr.transform( )
...
which generates the DAG:
forest
|
tree1------------------------------tree2----------tree3--...
| | |
trunk1--root1 trunk2--root2
| |
branch11--branch12--branch13--... branch21--...
| |
twig111--twig112--twig113--... twig211--...
| |
leaf1111--leaf1112--leaf1113--... leaf2111--...
Colour
What is needed for colour?
- An eye.
- A source of illumination.
- A surface.
How is colour created?
- Source of illumination emits light (photons of differing
wavelength).
- Surface modifies light.
- Eye compares surfaces and notices different modifications.
How do we represent colour?
- As some kind of sum of photons?
- As a distribution of photons (over wavelength)?
- As a ratio of distributions of photons?
To the rescue: colour matching, which lets us represent a colour by
three numbers.
But,
- Only approximately correct
- but to within 1-2% for most humans,
- only describes matching, not appearance
- Doesn't describe non-additive colour mixture
A more precise description requires specifying the illumination as well