CS798 Assignment 1: Painterly rendering
Due date: January 23rd, 2004
Painterly rendering algorithms create an artistic rendering
from a source image (usually a photograph). You are to implement
one such algorithm, namely the one described by Aaron Hertzmann in
"Painterly Rendering with Curved Brush Strokes of Multiple Sizes",
from the 1998 SIGGRAPH conference. A sample rendering from the
paper is shown above. The algorithm covers a canvas in layers of
strokes of decreasing radius, applying strokes where the canvas
differs sufficiently from the reference image. Long strokes can
be generated that follow curves of nearly constant colour in the
image. You should start by reading the paper.
You'll notice that the paper contains pretty good pseudocode
for the core algorithms -- the pseudocode translates readily into
a real program.
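To give a feel for how directly the pseudocode translates, here is a sketch of the paper's top-level loop in Python. The helper names (`gaussian_blur`, `paint_layer`) and the use of NumPy are my choices, not the paper's; `paint_layer` is left as a callback you would implement.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Naive separable Gaussian blur for an H x W x C float array.
    (A real implementation would call a library routine instead.)"""
    if sigma <= 0:
        return img.astype(float)
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x * x / (2.0 * sigma * sigma))
    kernel /= kernel.sum()
    out = img.astype(float)
    for axis in (0, 1):  # blur rows, then columns
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

def paint(source, brush_radii, paint_layer, f_sigma=0.5):
    """Paint layers with successively smaller brushes, as in the paper.
    paint_layer(canvas, reference, radius) applies one layer of strokes
    to the canvas in place."""
    canvas = np.zeros_like(source, dtype=float)
    for radius in sorted(brush_radii, reverse=True):
        # Each layer works from a reference blurred in proportion
        # to its brush radius.
        reference = gaussian_blur(source, f_sigma * radius)
        paint_layer(canvas, reference, radius)
    return canvas
```

The layer loop is really all there is at the top level; the substance lives in paintLayer and makeSplineStroke.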
The next step is to construct an implementation of the algorithm
(see below for some notes regarding the implementation). You
are required to implement the following aspects of the paper.
- The canvas is painted in a sequence of layers with successively
smaller brush radii.
- Each layer is painted by checking for colour difference
between the current canvas and a blurred reference image
(as in the
paintLayer procedure in the paper).
- The brush strokes are generated as in the makeSplineStroke
procedure, i.e. they follow a vector field defined by the
perpendicular of the image gradient of the blurred reference image.
- Strokes are rendered as spline curves with rounded ends.
They don't need to be antialiased (though it's worthwhile).
- At least the following parameters are available as described in the paper:
- Approximation threshold T
- Sequence of brush sizes R1,...,Rn
- Curvature filter fc
- Blur factor fσ
- Minimum and maximum stroke lengths
- Grid size fg
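To make the paintLayer requirement concrete, here is a hedged Python sketch of its grid-scan-and-threshold logic. The names and the NumPy representation are my choices; a full implementation would grow each seed point into a spline stroke via makeSplineStroke rather than just collecting seeds.

```python
import numpy as np

def paint_layer(canvas, reference, radius, T=25.0, f_g=1.0):
    """Sketch of the paper's paintLayer: scan cells of size f_g * radius,
    and wherever the average canvas/reference colour difference exceeds
    threshold T, seed a stroke at the cell's worst pixel. Returns the
    seed points in random order."""
    H, W, _ = canvas.shape
    # Pointwise colour difference: Euclidean distance in RGB.
    diff = np.sqrt(((canvas.astype(float) -
                     reference.astype(float)) ** 2).sum(axis=2))
    grid = max(1, int(f_g * radius))
    seeds = []
    for y0 in range(0, H, grid):
        for x0 in range(0, W, grid):
            cell = diff[y0:y0 + grid, x0:x0 + grid]
            if cell.mean() > T:
                # Seed the stroke at the point of largest error.
                dy, dx = np.unravel_index(cell.argmax(), cell.shape)
                seeds.append((x0 + dx, y0 + dy))
    np.random.shuffle(seeds)  # the paper paints strokes in random order
    return seeds
```

Early layers (large radius, coarse grid) place few big strokes; later layers refine only where the canvas still disagrees with the reference.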
Next, you must implement at least one nontrivial extension to your
algorithm. This part of the assignment is open-ended, and there are
many possible extensions. Ideally, a "non-trivial" extension is one
that produces a noticeable, qualitative change to the drawings produced
by your system. You should be able to show side-by-side images with
and without your extension and convince a stranger that your extension
is doing something.
What follows is a short list of suggestions for extensions. You are
free (indeed, encouraged) to dream up other ideas. If you're unsure
about your idea, or need more guidance, come talk to me.
This part of the assignment is not intended to be overwhelming; it's
just a way to get you thinking about what ideas might follow on from
the paper. Don't feel you have to wear your fingers down to nubs
trying to implement your extension. A proof-of-concept will suffice.
- Improve performance by moving some of the computation to the
GPU. This extension doesn't produce a qualitative difference
in the output, so you'll probably have to demonstrate it live.
Don't do this unless (a) you're handy with graphics hardware,
and (b) the performance improvement is dramatic.
- Prettier strokes. The strokes in the paper are solid coloured
curves. Consider adding in some texture, shape, or surface.
Figure 8 of "Artistic Silhouettes: A Hybrid Approach"
gives a good demonstration of possible stroke enhancements.
Another possibility would be to implement Hertzmann's
Fast Paint Texture. Note that these sorts of extensions
probably require you to use OpenGL to render your canvas.
- Smarter strokes. Other papers have attempted to place strokes
more intelligently than simply by following image gradients.
See the algorithm for automatic painterly rendering based on local
source image approximation from NPAR 2000, and Painterly Rendering
Using Computer Vision Techniques from NPAR 2002.
As a more minor enhancement, Litwinowicz
discards gradient values that are too small and uses thin-plate
splines to interpolate them from neighbouring good values.
- Animation. This has to be more than just running the algorithm
on every frame in some video clip. There has to be some kind
of frame-to-frame coherence of strokes. See the papers on this
topic by Hertzmann and Perlin. If you have access to a video
capture card, the latter paper can be implemented fast enough
to run it in real time on live video, which is more exciting
than batch processing a sequence of frames.
- Salience. The algorithm doesn't "understand" the image it's
painting; it considers importance and detail
to be the same thing when they usually aren't. Allow the
user to paint a "detail map" that tells the algorithm where
finer strokes (or at least finer tolerances) are needed.
This idea is mentioned in Hertzmann's
Paint by Relaxation.
He tells me that this approach works reasonably well, but doesn't
produce results that are terribly exciting. He gives details
on the idea on page 37 of his thesis (page 51 of the PDF file).
- Edges. Find a way to extract interesting edges from the
image and paint those edges in black on top of the brush
strokes to better delineate features in the rendering.
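As one concrete starting point for the edge-painting suggestion, a plain Sobel edge map is easy to compute. This is only a sketch; the threshold value is an arbitrary choice of mine, and a real version would likely thin or link the edges before drawing them in black over the strokes.

```python
import numpy as np

def sobel_edges(gray, threshold=100.0):
    """Return a boolean mask of strong edges in a greyscale image,
    using 3x3 Sobel kernels: pixels whose gradient magnitude exceeds
    `threshold` are marked. Borders wrap around (np.roll), which a
    careful implementation would handle differently."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    for dy in range(3):
        for dx in range(3):
            # shifted[y, x] == gray[y + dy - 1, x + dx - 1]
            shifted = np.roll(np.roll(gray, 1 - dy, axis=0), 1 - dx, axis=1)
            gx += kx[dy, dx] * shifted
            gy += ky[dy, dx] * shifted
    return np.hypot(gx, gy) > threshold
```

The same gx/gy arrays double as the image gradient you already need for stroke directions, so this extension can reuse machinery from the core algorithm.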
The final step is to produce painterly renderings using your program.
You can use any photographs you like; I recommend
Philip Greenspun's collection
and Freefoto. What's important
is that your renderings should clearly demonstrate the features you
implemented. At least one image should be rendered a few times with
different parameters (see Section 3.1 of the paper) to produce different
results.
What to submit
You need to produce a short write-up describing your implementation
and showcasing the paintings you created. Your write-up can either
be a PDF document or a web page. Your submission should not contain
more than about three pages of text, though you're welcome
to make it longer by including lots of pictures.
I would prefer for you to make your submission available on the web
and mail me a URL by the deadline. If you would prefer not to do
that, mail me the PDF or an archive of the web page as an attachment.
You are free to structure your submission as you desire, but it should
at least include the following (you're welcome to include other
comments and observations):
- Describe your implementation. What set of languages, tools,
and libraries did you use? What is the interface? If you
created an interactive user interface, include screen shots.
- If there are aspects of the paper that you didn't get working,
list them. If applicable, explain how you would need to modify
your program to complete the implementation.
- Describe your extension. Explain why you predicted your
extension would enhance the algorithm's output. Briefly
explain how you had to modify the core algorithm to accommodate
this extension. Comment on how successful you feel the
extension was.
- Include sample output. Always include the source image along
with the painterly rendering. Use at least three different
source images. At least one image must be rendered with several
parameter settings to show the different artistic effects that
can be achieved.
- At least one result must clearly demonstrate the
effect of your extension (if your extension cannot be demonstrated
visually, you must find some other way to prove that it's working).
If possible, give a side-by-side comparison with and without
the extension running, highlighting its effect.
By default, I'm not going to look at your source code. But I reserve
the right to request it as part of marking if it sounds from your
description like there's something worth taking a look at.
I also reserve the right to request a live demonstration; this
could be important if you create an especially nice interface or
a realtime version of the algorithm.
You're free to construct your implementation in whatever way you
like, as long as it can produce the desired final renderings.
But for the purposes of this sort of work, not all languages
and libraries are created equal. There are a few things you'll need
to be able to do with images.
- Load an image from a file and extract its pixel data
- Apply Gaussian blurs and Sobel edge detection (to compute
gradients) to images
- Draw curved strokes on an image and read back the pixel data
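As a sanity check that a toolchain can do all three, here is a minimal sketch using NumPy plus Pillow, the maintained fork of the Python Imaging Library. `Image.new` stands in for `Image.open` on a real photograph so the snippet runs anywhere; the stroke geometry and colours are arbitrary.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFilter

# 1. Load an image and extract its pixel data.
# (Image.new stands in for Image.open("photo.jpg") here.)
img = Image.new("RGB", (64, 64), (200, 120, 40))
pixels = np.asarray(img, dtype=float)         # shape (H, W, 3)

# 2. Gaussian blur: Pillow has this built in. Sobel gradients can be
# computed from the blurred pixel array with small convolution kernels.
blurred = img.filter(ImageFilter.GaussianBlur(radius=4))

# 3. Draw a curved stroke (approximated as joined line segments) with
# rounded ends, then read the pixel data back.
canvas = Image.new("RGB", (64, 64), "white")
draw = ImageDraw.Draw(canvas)
colour, r = (30, 30, 160), 3
points = [(5, 50), (20, 30), (40, 35), (58, 10)]
for a, b in zip(points, points[1:]):
    draw.line([a, b], fill=colour, width=2 * r + 1)
for (x, y) in points:                          # round the ends and joints
    draw.ellipse([x - r, y - r, x + r, y + r], fill=colour)
result = np.asarray(canvas)
```

If a candidate language or library makes any of these three steps painful, that's a strong hint to pick a different one.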
You're encouraged to find implementations of these algorithms out
in the world and incorporate them into your program. You are only
responsible for the core algorithms described in the paper. If you're
not sure about whether it's okay to use some piece of code, ask me.
You don't have to follow the details of the paper exactly; what's
important is to obtain and render a similar set of strokes. So, for
example, you don't need to use the z-buffer to randomize stroke
order; it suffices to store up a sequence of strokes in memory
and execute them in random order. Indeed, you don't need to use
OpenGL at all.
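In other words, stroke drawing can simply be deferred. A tiny sketch of that idea (the stroke representation here is arbitrary, and `render_stroke` is a hypothetical drawing routine):

```python
import random

# Accumulate strokes during paintLayer instead of drawing immediately;
# each stroke here is just (colour, list of control points).
strokes = [((255, 0, 0), [(0, 0), (5, 5)]),
           ((0, 200, 0), [(2, 8), (9, 3)]),
           ((0, 0, 255), [(4, 4), (7, 1)])]

random.shuffle(strokes)       # replaces the paper's z-buffer trick
for colour, path in strokes:
    # render_stroke(canvas, colour, path) would go here
    pass
```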
My implementation is written in a style similar to the assignments
in CS488 -- a Python interface (for flexibility) with a C++ back end
(for speed). The Python
Imaging Library makes loading and saving images a snap. PIL_usm
is a Python module that provides Gaussian blurring to the PIL.
In my C++ code, I render strokes to an image (stored as a character
array) using the excellent libart
library. If you wanted to turn this sort of implementation into
an interactive OpenGL interface, you might also find the
PyOpenGL library useful.
Certain important operations in PIL, PIL_usm, and libart
are non-obvious and require hard-won knowledge to get working.
I've tried to summarize the technical aspects that could help
you in two files: painthelp.cpp
and paint.py. You can use these
files as a skeleton for your own implementation, or steal ideas
from them as necessary.
Aaron Hertzmann has also been kind enough to donate some code
to the assignment. Mainly, he's providing his Stroke class,
which can render coloured strokes using OpenGL triangle strips.
Feel free to borrow from the following files: