CS798 Assignment 2: Real-time NPR with 3D models
Due date: March 14th, 2008
At the core of many real-time NPR applications is the problem of
detecting and rendering silhouettes of 3D models. Numerous
approaches have been attempted, and they have varying degrees
of success depending on context. You'll implement a "simple"
silhouette technique derived from a couple of sources.
The most important paper to read is the
WYSIWYG NPR paper
from SIGGRAPH 2002. Pay particular attention to Section 4.3 (Stroke
Visibility) and Section 5.4 (Silhouettes). It might also help to
read some of the references they mention in those sections,
such as the Real-Time NPR paper and the
Silhouettes paper. I also recommend the linked introduction to
silhouettes; your implementation will be based on its Section 3.3.
Most recently, Kaplan's
HQI paper provides a concise summary of these techniques, and some ideas
for speeding them up with additional functionality.
You must then construct an interactive viewer for 3D objects
that renders them in silhouette style. You should support at
least a default mode where the only lines drawn are visible
silhouette lines. At the bottom of this page, I'll provide
a link to a skeleton implementation you can download as a base
for your implementation.
You're required to implement only a small piece of the functionality
of the WYSIWYG NPR paper. Here are the core features you need.
I'm offering a choice in
how you implement silhouettes. The text in blue explains the
alternative approach you're allowed to take.
- You must use their technique for deciding which mesh
triangles contain silhouette edges. You do not
have to use their temporally-coherent, probabilistic
method; it's okay to iterate over all triangles every
frame. You do not need to detect and discard
"back-facing silhouette edges"; they won't affect the
output too much. You do not need to link
individual segments into long curves; it's okay to
process each triangle separately.
Alternatively, you are permitted to find
silhouette edges as a subset of mesh edges, by checking
for edges that border front faces and back faces. If you choose to
find edges this way, you should use the edge buffer technique
described in this paper. Note that this method is a bit easier
than the one above, so I'll ask for a bit more complexity
below. Ideally, of course, you'd implement both and be
able to toggle between them in the interface...
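The mesh-edge alternative above can be sketched in a few lines. The following is a minimal Python sketch (not part of the skeleton code; all function names are my own): build a map from each edge to its adjacent faces, classify faces as front- or back-facing relative to the camera, and keep edges that border one of each.

```python
from collections import defaultdict

def face_normal(verts, face):
    """Unnormalized outward normal of a triangle via the cross product."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_front_facing(verts, face, cam):
    """A face is front-facing if its normal points toward the camera."""
    n = face_normal(verts, face)
    px, py, pz = verts[face[0]]          # any point on the face will do
    view = (px - cam[0], py - cam[1], pz - cam[2])
    return n[0] * view[0] + n[1] * view[1] + n[2] * view[2] < 0

def find_silhouette_edges(verts, faces, cam):
    """Silhouette edges border exactly one front face and one back face."""
    edge_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for a, b in ((face[0], face[1]), (face[1], face[2]), (face[2], face[0])):
            edge_faces[frozenset((a, b))].append(fi)
    silhouette = []
    for edge, fs in edge_faces.items():
        if len(fs) == 2:                 # ignore boundary edges
            f0, f1 = (is_front_facing(verts, faces[f], cam) for f in fs)
            if f0 != f1:
                silhouette.append(tuple(sorted(edge)))
    return silhouette
```

Iterating over all edges every frame like this is the brute-force version; the edge buffer technique makes the same test faster by updating flags per face.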
- You must then determine visibility of silhouette edges.
You are allowed to do this at the granularity of whole
edges; that is, you can make a decision to draw an entire
edge or none of it. To determine visibility, you need
to render the "ID image" that they describe. It's
possible to render a simpler version of the ID image
for our purposes -- mesh triangles can be drawn in the
background colour, and each silhouette edge can then be drawn
with a colour that identifies the triangle that generated
the given edge. Make sure that things like lighting and
antialiasing are disabled!
If you choose to use mesh edges as
silhouette edges, you should measure visibility pixel-by-pixel,
and draw only those portions of silhouette edges that are visible.
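Whichever variant you choose, the ID image needs a reversible mapping between triangle indices and colours. One possible sketch (my own, not from the paper) packs an index into the 24 bits of an RGB triple, reserving black for the background; this only works if lighting, antialiasing, and dithering are off so colours read back exactly.

```python
BACKGROUND = (0, 0, 0)   # reserved: "no silhouette edge at this pixel"

def id_to_rgb(tri_index):
    """Pack a triangle index into 24 bits of RGB; black is the background."""
    i = tri_index + 1            # shift so index 0 isn't the background colour
    assert i < (1 << 24), "more triangles than a 24-bit ID image can encode"
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def rgb_to_id(rgb):
    """Inverse of id_to_rgb; returns None for background pixels."""
    i = (rgb[0] << 16) | (rgb[1] << 8) | rgb[2]
    return None if i == 0 else i - 1
```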
- At this point, you should have a list of silhouette
edges that should be drawn. You must draw the edges in
image space with a user-controlled thickness.
In other words, you must project the endpoints of line
segments to screen coordinates and render each segment with
constant on-screen thickness. A good balance between
simplicity and attractiveness is to render each segment
in a flat colour as a rectangle with a semicircle attached
to each end.
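The rectangle part of that capsule shape is simple 2D geometry: offset both projected endpoints by the perpendicular of the segment direction, scaled to half the desired thickness. A hypothetical sketch (the function name is my own):

```python
import math

def segment_quad(p0, p1, thickness):
    """Corners of the screen-space rectangle for a thick 2D segment.

    Semicircles of radius thickness/2 centred at p0 and p1 complete
    the capsule shape.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        return None                      # degenerate segment: draw just a disc
    # unit perpendicular to the segment, scaled to half the thickness
    nx = -dy / length * thickness / 2
    ny = dx / length * thickness / 2
    return [(p0[0] + nx, p0[1] + ny),
            (p1[0] + nx, p1[1] + ny),
            (p1[0] - nx, p1[1] - ny),
            (p0[0] - nx, p0[1] - ny)]
```

The four corners can then be drawn as a quad (with the depth test off) along with a small triangle fan at each endpoint for the semicircular caps.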
- Your implementation must be reasonably fast.
Now, lots of factors will affect the speed of your program
(your CPU, your graphics card, the complexity of the model,
and of course the quality of your implementation), and
"reasonably fast" is poorly defined at best. I won't attempt
to nail down a precise threshold, but on the machines in
the undergrad graphics labs you should have no trouble
breaking 15FPS for simple models like the cow and the
triceratops (included in the source distribution).
Next, you must implement at least one nontrivial extension.
This part of the assignment is open-ended, and there are
many possible extensions. In this assignment, there are both
qualitative extensions (that affect the aesthetics of the output)
and quantitative extensions (that affect performance or correctness).
What follows is a short list of suggestions for extensions. You are
free (indeed, encouraged) to dream up other ideas. If you're unsure
about your idea, or need more guidance, come talk to me.
This part of the assignment is not intended to be overwhelming; it's
just a way to get you thinking about what ideas might follow on from
the paper. Don't feel you have to wear your fingers down to nubs
trying to implement your extension. A proof-of-concept will suffice.
- Improved performance. You'll need to demonstrate that your
changes had a noticeable effect on performance, possibly via
before-and-after timing measurements.
- Implement the probabilistic technique mentioned in the
paper. Every frame, you need to search for silhouette
edges among those triangles that had them last time,
plus a small, randomly-chosen set of additional triangles.
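The bookkeeping for that search is small. A hypothetical sketch (names are my own; the predicate `has_silhouette` stands in for whatever per-triangle test your implementation uses):

```python
import random

def silhouette_candidates(prev_hits, num_triangles, sample_size, rng=random):
    """Triangles to test this frame: last frame's hits plus a random sample."""
    candidates = set(prev_hits)
    candidates.update(rng.randrange(num_triangles) for _ in range(sample_size))
    return candidates

def update_silhouettes(prev_hits, num_triangles, sample_size, has_silhouette,
                       rng=random):
    """One frame of the probabilistic search; returns this frame's hit set."""
    return {t for t in silhouette_candidates(prev_hits, num_triangles,
                                             sample_size, rng)
            if has_silhouette(t)}
```

Because silhouettes move coherently from frame to frame, retesting last frame's hits catches most silhouette triangles, and the random sample picks up newly appearing ones over a few frames.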
- Implement part of the algorithm on the GPU. There are
more and more ways to do this every day as GPUs become
more sophisticated. One potential starting point
are the Hardware Determined
Edge Features of McGuire and Hughes (NPAR 2004).
- Improved correctness. The algorithm as given produces many
false positives (silhouette edges that are drawn when they
shouldn't be) and false negatives (edges that should be there
but aren't). Careful tuning and hacking are
required to balance the algorithm and obtain the correct set
of silhouette edges. If you work on getting very precise
silhouettes, please be prepared to demonstrate how and why
your improvements work.
- An analytical solution. It would be very interesting to move
the whole computation back into the CPU and use advanced
computational geometry algorithms to achieve comparable performance.
Judicious use of arrangements of line segments could take the place
of the discretized ID image and give far higher precision.
- Improved aesthetics. The paper clearly demonstrates the
wide range of aesthetic styles that can be achieved using
silhouettes and little else.
- Paint the interior of the model with a toon shader.
- Link the disconnected silhouette segments into long
sequences of control points. Then render these sequences
as spline curves. Experiment with adding character to
the curves such as wiggle and variable thickness.
- Add texture mapping. In order for
this to be effective, you'll probably need to connect
the segments up into strokes.
- Implement a paper texture, as described in Section 4.4
of the WYSIWYG NPR paper.
- Allow the user to paint decal and/or crease strokes onto the
model, as described in Sections 5.1 and 5.2. The
visibility algorithm applies equally well to these strokes.
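For the toon-shader suggestion above, the core idea is to quantize the diffuse lighting term into a few flat bands instead of shading smoothly. A minimal sketch, with band values and thresholds chosen arbitrarily for illustration:

```python
def toon_shade(n_dot_l, bands=(0.25, 0.5, 1.0), thresholds=(0.3, 0.7)):
    """Quantize the diffuse term n.l into a few flat intensity bands.

    bands must have one more entry than thresholds; values below
    thresholds[i] fall into bands[i], and anything above the last
    threshold gets the brightest band.
    """
    d = max(0.0, n_dot_l)        # clamp back-facing light to the darkest band
    for t, b in zip(thresholds, bands):
        if d < t:
            return b
    return bands[-1]
```

In practice this is often done on the GPU by indexing a small 1D texture with n.l, but the per-vertex CPU version above gives the same banded look.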
What to submit
You need to produce a short write-up describing your implementation
and showcasing the images you created. Your write-up can either
be a PDF document or a web page. Your submission should not contain
more than about three pages of text, though you're welcome
to make it longer by including lots of pictures.
I would prefer for you to make your submission available on the web
and mail me a URL by the deadline. If you would prefer not to do
that, mail me the PDF or an archive of the web page as an attachment.
You are free to structure your submission as you desire, but it
should at least include the following (you're welcome to include
other comments and observations):
- Describe your implementation. What set of languages, tools,
and libraries did you use? What is the interface? Include
one or two screenshots showing your interface in action.
- Give some performance numbers. How many frames per second
are you able to get for the models provided in the source
code and the models you experimented with? What hardware
did you use to get these numbers?
- If there are aspects of the paper that you didn't get working,
list them. If applicable, explain how you would need to modify
your program to complete the implementation.
- Describe your extension. Explain why you predicted your
extension would enhance the algorithm's output or behaviour.
Briefly explain how you had to modify the core algorithm to
accommodate this extension. Comment on how successful you feel the
extension turned out to be.
- Include some samples. This assignment is one part of what
would be a larger NPR application, and as such is not intended to
produce finished pieces of art on its own. Nevertheless, you
should include drawings created by your system with at least
two different models. If you implemented an aesthetic
extension, produce at least one "finished" drawing that uses it.
- At least one result must clearly demonstrate the
effect of your extension (if your extension cannot be demonstrated
visually, you must find some other way to prove that it's working).
If possible, give a side-by-side comparison with and without
the extension running, highlighting its effect.
By default, I'm not going to look at your source code. But I reserve
the right to request it as part of marking if it sounds from your
description like there's something worth taking a look at.
I'm likely to book live demonstrations for this assignment; more
details will follow.
You're free to construct your implementation in whatever way you like,
as long as it implements the technique described above. This
assignment really doesn't rely on anything apart from a 3D graphics
library. OpenGL is the obvious choice, though more adventurous
students may attempt Direct3D. You'll need some way to read 3D
triangle meshes. If you're working from scratch, I recommend the very
straightforward OBJ file format (for which some sample code can also be
found in the CS488 skeleton code). An even better choice in C++ would
be to start with Szymon Rusinkiewicz's trimesh
library for working with meshes.
In 2004 I provided a skeleton program based on Python and PyOpenGL (the
linked tarball also includes some sample OBJ meshes).
It's doubtful that old sample will still run without a lot of
effort, but you might be able to harvest some ideas from the source code.
If I produce a new skeleton program, I'll post it here.
See the Brown
Mesh Set for many standard test models in computer graphics.
There are some non-obvious bits of OpenGL that you need to be
aware of in order to get good results:
- In a system like this one, it can be very hard to know
what your program is doing. I strongly recommend that
you set up your drawing function so that you can view
intermediate results (like the ID image) interactively.
Consider having a "draw mode" selector in the user
interface so you can switch back and forth.
- When building the ID image, you can't simply draw the
triangles and then draw the silhouette edges. In this
simple approach, silhouette edges will sometimes drop
underneath the triangles they lie on (sometimes called
"depth fighting"). Use glPolygonOffset
to push the model slightly backwards. Then set the
offset to zero when drawing edges.
- You'll need to use
glReadPixels to read
chunks of the ID image back into memory. This is a
costly operation. Don't do it more times than you need to.
One approach is to read only those parts that are near
potential silhouette edges. Another is to read the whole
frame buffer once into memory.
As the paper suggests, you can scan-convert segments using, say,
Bresenham's algorithm, and check the neighbourhood
around every pixel (you'll find many pseudocode implementations
of Bresenham's via a simple web search).
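For reference, one standard integer formulation of Bresenham's algorithm looks like this (a generic sketch, not tied to the skeleton code); each returned pixel is where you'd check the ID image neighbourhood:

```python
def bresenham(x0, y0, x1, y1):
    """Integer pixels on the segment (x0,y0)-(x1,y1), endpoints included.

    Standard error-accumulation form that handles all octants.
    """
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    pixels = []
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                 # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                 # step in y
            err += dx
            y0 += sy
    return pixels
```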
- In your draw method, you'll need the camera position.
Assume that the model is drawn in world coordinates.
If M is the OpenGL modelview matrix, then M
transforms world coordinates into camera coordinates.
The camera is at position (0,0,0) in camera coordinates,
and so the camera position in world coordinates will
be M⁻¹(0,0,0). You can use
glGetDoublev to get M
from OpenGL, and manipulate it using a
Matrix4x4 class. Just remember that
OpenGL stores its matrices in column-major order, the
transposes of the usual written convention.
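That camera-extraction step can be done without a full matrix inverse if the modelview is a rigid transform (rotation plus translation, no scaling), which is my assumption in this sketch; the function name and layout are my own:

```python
def camera_position(modelview_flat):
    """World-space camera position from the 16 doubles returned by
    glGetDoublev(GL_MODELVIEW_MATRIX).

    OpenGL stores matrices column-major, so element (row i, col j) of
    the conventional matrix M lives at flat index 4*j + i. For a rigid
    modelview M = [R | t], the camera's world position is -R^T t.
    """
    m = modelview_flat
    R = [[m[4 * j + i] for j in range(3)] for i in range(3)]  # rotation rows
    t = [m[12], m[13], m[14]]                                 # translation
    # (-R^T t)_k = -sum_i R[i][k] * t[i]
    return tuple(-sum(R[i][k] * t[i] for i in range(3)) for k in range(3))
```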