CS789, Spring 2005
Lecture notes, Week 2
Models of Tasks and Users
Models
Why do we use models?
- put the model in place of the real thing and calculate
Two approaches to validating models
- verify model components against accepted truth and against
experiments.
- put the model in the system and compare results to actual results
Models come in two varieties
- too abstract
- too concrete
How do you find the correct balance?
Task Models
What is a task?
- Something the user wants to do, such as
- play a game
- browse the internet
- do his or her income tax
Interfaces generally support collections of tasks. What are the tasks
supported by a bank machine?
What is a task model?
- Something that breaks the task into component parts
This can be done along several different dimensions
- time
- space
- level of analysis
- what else?
Task models usually try to simplify the task a user is trying to do in one
way or another. Two ways are common.
- Conceptual models - different task descriptions based on different
levels of abstraction, (compare to levels of knowledge last lecture)
- Componential models - break tasks into separate actions, all on the
same level of abstraction. Tasks can normally be broken down in two ways.
- Into streams or chains - channels of activity that go on at the
same time and combine to accomplish the task. Example: steering
wheel, brake, and accelerator streams in driving. Few examples of
multi-stream tasks in computer-mediated interfaces - why?
- Into links - "atomic" actions that follow one another to accomplish
the task.
Conceptual models
Characteristics. Try to capture the way it's necessary to act in
order to get the task done.
Example - GOMS. An acronym, of course. (Sketched in code below.)
- G - goals
- O - operators - basic operations
- M - methods - alternate ways of accomplishing the same goal
- S - selection rules - rules for choosing among the methods
Uses
- constructing scenarios that include details on all the levels
- ensuring observability
- presented to users to aid learning
- possibly, a formalism in which interfaces can be described
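A minimal sketch, in Python, of how a GOMS description might be
represented. The task, the methods, and the selection rule are all
invented for illustration; GOMS itself does not prescribe this notation.

    # Illustrative GOMS description of an invented task.
    goal = "delete a file"

    # Operators: the basic operations available to the user.
    operators = {"point", "click", "type", "press-key"}

    # Methods: alternate operator sequences that accomplish the goal.
    methods = {
        "drag-to-trash": ["point", "click", "point", "click"],
        "shell-command": ["type", "press-key"],
    }
    assert all(op in operators for seq in methods.values() for op in seq)

    # Selection rule: pick a method from context.
    def select(context):
        # e.g., prefer the shell when a terminal is already open
        if context.get("terminal_open"):
            return "shell-command"
        return "drag-to-trash"

    print(goal, "->", select({"terminal_open": True}))  # shell-command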
Componential models
Characteristics. At a low level - usually, the operational level -
identify behavioural atoms that have to be chained together to get the job
done.
Example - Keystroke model.
Uses
- computing costs of scenarios
- examining assumptions about optimization
- possibly, a formalism we can use for proving theorems about
interfaces
Task models generally assume a user with fixed skills, capable of finding an
optimal course of action from among a variety of alternatives. There is an
important subject -- cost-benefit analysis -- which defines ways of selecting
optimal courses of action. This discipline tries to formalize the process.
Looking at it shows
- some important underlying assumptions we implicitly make when we talk
in terms of costs and benefits
- some considerations that are too easy to overlook when thinking about
costs and benefits
Cost-benefit trade-offs
First the basic idea, then the conceptual background, then the measurement
problems
The basic idea
Cost-benefit is a calculus: a method of substituting quantitative
calculation for intuition in judging alternatives.
- More than one method to do something; which is best? Routing a
highway, answering a letter, correcting spelling.
- Find the costs, subtract the benefits, and minimize. (A small sketch
in code follows this list.)
- This is something we do all the time, and usually quite well.
- Humans are good at finding an optimal solution when searching over
continuous functions, e.g., how much peanut butter to put in a
sandwich,
- but not good at
- optimizing across discrete alternatives, e.g., choosing the flavour
of an ice cream cone,
- optimizing when extreme probabilities are involved, like buying
insurance or lottery tickets,
- optimizing when it's necessary to consider many dimensions at
once.
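A minimal sketch of the calculus, assuming every cost and benefit has
already been converted to a single unit (seconds here). All figures are
invented; the point is only the form of the computation.

    # Choose among discrete alternatives by minimizing net cost.
    # All figures are in seconds and invented for illustration.
    alternatives = {
        "run the spelling checker": {"cost": 30.0, "benefit": 120.0},
        "fix typos by hand":        {"cost": 90.0, "benefit": 120.0},
        "ignore the typos":         {"cost": 0.0,  "benefit": 0.0},
    }

    def net_cost(name):
        a = alternatives[name]
        return a["cost"] - a["benefit"]

    best = min(alternatives, key=net_cost)
    print(best, net_cost(best))  # run the spelling checker -90.0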
An interesting generalization of the "interface problem"
Let's think about different scales of analysis.
- How does your perceptual/motor learning select receptor/muscle
interactions?
- How do you select the combination of features to accomplish a task?
- How do you select which program to use?
- How does the designer select features for a program?
- How does a company select what type of program to manufacture?
- How does an industry decide what type of technology to develop?
Can you see any common patterns that occur at different scales?
Cost Measurement
To optimize, users gather data. Where users can't optimize, designers
gather data on behalf of users.
Measurement
- It is essential that costs and benefits be computed quantitatively.
Why?
- Aggregation requires a single unit of measurement into which all costs
and benefits can be translated:
- time to perform an action is usual;
- which has a consequence: things that aren't easily convertible are
easily overlooked or inaccurately estimated. Examples:
- subjective response: "feels nice", or "makes me feel good", or
"feeling of frustration"
- dollars: adding a feature may increase the dollar cost because
it necessitates system upgrading, software or hardware
- self-esteem: "I like being a Unix guru."
Why is measurement difficult?
- Externalities: effects that apply to other uses, other users, other
usages. Examples
- learning
- for doing the same operation in the future
- for doing other operations: can be positive or negative
(interference)
- creating useful objects like macros
- automated methods for present and future use
- use by self or use by others
- complementarities: an action can change the value of another
action, either positively or negatively
- add to PATH or define an alias?
- Remember that externalities are created for other parts of life.
Users have life-objectives unrelated to computers
- N.B. Most arguments in favour of consistency, congruence, user
interface toolkits, UIMXs, etc, etc, etc, depend on the creation of
externalities.
- Discount rates
- How much is the user willing to invest in making the future
easier?
- The answer varies from person to person, and from occasion to
occasion
- Counter-intuitively, high discount rates actually make cost-benefit
calculations simpler: heavily discounted future savings contribute
almost nothing, so only immediate costs need comparing. (See the
sketch after this list.)
- Opportunity costs
- What can you do with the time you save?
- Example. Does it matter that the compiler is slow if you can answer
mail (read news, surf the web) while it is running?
- Users vary, but
- users become more similar to one another as they learn
so that
- highly practiced users are likely to be more predictable than
novices
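To make the discount-rate point concrete, here is a minimal sketch, with
invented numbers, of deciding whether learning a macro pays off. Future
savings are discounted day by day; at a high discount rate they vanish,
and the decision collapses to comparing immediate costs.

    # Is 600 seconds of learning now worth 5 seconds saved per use later?
    def present_value(saving, uses_per_day, days, daily_discount):
        # Discounted sum of the future time savings.
        return sum(saving * uses_per_day * (1 - daily_discount) ** d
                   for d in range(1, days + 1))

    learn_cost = 600.0   # seconds invested now
    saving = 5.0         # seconds saved per future use
    for rate in (0.0, 0.05, 0.5):
        pv = present_value(saving, uses_per_day=20, days=30,
                           daily_discount=rate)
        verdict = "worth learning" if pv > learn_cost else "not worth it"
        print(f"discount {rate:.2f}: future savings {pv:6.1f}s -> {verdict}")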
A common "solution" to user variation is to throw in everything,
including the kitchen sink. This has positive and negative consequences.
Negative:
- implementation and documentation cost
- increased time just to learn about the existence of features
("Without learning anything" is a self-contradiction.)
- increased user confusion
- users diverge as they learn
Positive:
- users can choose what's best for them as individuals
- users can switch from one way to another of doing the same thing to
ease boredom
- users can remain individuals
A typical componential model: the keystroke model
- Predict the time required to perform an interface operation
- in terms of elementary operations
- used to compare usability
- easily overwhelmed by complexity
- Assumptions
- time is what matters
- errors unimportant
- components can be combined by addition (additive factors)
- individual components are independent and predictable
- Levels of decomposition
- large tasks from unit tasks: unit task ~ one minute
- unit tasks from task acquisition plus task execution
- execution from system tasks
- system task from "keystroke" or "point with mouse" or "home with
hands" or "mental preparation" or "wait for computer" or ...
- Examples (worked in the sketch after this list)
- typing put<cr>: four keystrokes
- selecting "put" from a static menu:
- home to mouse
- point with mouse
- mouse buttonpress
- home to keyboard
- Evaluation (Should I believe it?)
- evaluation must be empirical: some sort of statistics
- test additive factors assumptions: how many are added post
hoc
- test reasonability of factors
- Applications
- benchmark results for user interfaces: but where do the natural
methods come from?
- parametric analysis: which method wins? for which users?
- application categorization: keyboard or mouse dominated?
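The two examples above can be costed mechanically. A minimal sketch: the
operator timings are rough values in the spirit of Card, Moran, and
Newell, not the only published ones, and mental-preparation (M)
operators are left out for simplicity.

    # Keystroke-level operator times, in seconds (rough textbook values).
    T = {
        "K": 0.2,    # keystroke (a practiced typist)
        "P": 1.1,    # point with mouse
        "B": 0.1,    # mouse button press
        "H": 0.4,    # home hands between keyboard and mouse
        "M": 1.35,   # mental preparation (unused here)
    }

    def klm_time(ops):
        return sum(T[op] for op in ops)

    # Typing put<cr>: four keystrokes.
    print("type put<cr>:", klm_time("KKKK"), "s")   # ~0.8 s
    # Selecting "put" from a static menu: home to mouse, point,
    # button press, home back to keyboard.
    print("menu select: ", klm_time("HPBH"), "s")   # ~2.0 s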
Speed-accuracy trade-off
- The faster you go the more errors you make
- Extreme strategies:
- as slow as necessary to avoid errors
- as fast as you can regardless of errors
- Which intermediate strategy is best?
- depends on costs and benefits
- example: positioning the cursor using the mouse, which depends on
what you're doing just before and just after (see the sketch below)
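A minimal sketch of picking an intermediate strategy. The error model
(error probability falling linearly as the movement slows) is invented;
the point is only that the optimum moves with the cost of recovery.

    # Expected completion time = movement time + P(error) * recovery cost.
    def expected_time(move_time, recovery_cost):
        p_error = max(0.0, 0.9 - 0.5 * move_time)   # assumed, not measured
        return move_time + p_error * recovery_cost

    for recovery in (0.5, 5.0):
        best = min((expected_time(t / 10, recovery), t / 10)
                   for t in range(1, 21))
        print(f"recovery {recovery}s: move in {best[1]:.1f}s, "
              f"expected {best[0]:.2f}s")
    # Cheap recovery favours fast, sloppy moves; expensive recovery
    # favours slow, careful ones.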
Should errors be avoided?
- Speed-accuracy trade-off says ??
- What is the cost of an error?
- Error cost varies interestingly with
- error frequency
- cost of recovering from error
- Learning uses errors in important ways
- trial and error
- exploration
User models
User model buzzwords
- Ergonomics
- engineering specification of physical characteristics
- focus on static characteristics
- operational knowledge
- Human factors
- engineering specification of mental/physical capabilities
- focus on dynamic characteristics
- operational and rule knowledge
- User models
- componential models of how humans do tasks
- focus on tactical characteristics
- rule and procedure knowledge, taking operational knowledge into
account
- Cognitive models
- how users think and learn
- focus on strategic characteristics
- problem-solving
Models of input
- Attentive/pre-attentive
- divide input into streams
- attend to one of the streams
- streams claim attention by content and context
- Parallel/serial
- monitor many streams
- focus on one stream at a time
- conjunction is defined on streams
- How are qualities associated into objects?
- objects defined spatially
- conjunction is fundamental to objects
- process a single object at a time
Models of output
- Motor programs
- Assemblies of individual components
- pipelining
- execution is gradually automated, e.g. walking
- Techniques of control
- ballistic, open loop, e.g., saccades
- feedback, e.g. tracking with a mouse
Quantifying the models
The "model human processor"
- mechanically predictable human
- actions dominated by low level processing
- N. B. strongly conditioned by stimulus-response experimentation,
something like a Handbook of Human Perception and
Performance
- Input timings (combined in the sketch below)
- 70 msec: minimal cognitive unit
- 100 msec: temporal integration time
- 230 msec: perception with motor control involved
- Output
- 70 msec: minimal motor unit
- 240 msec: minimal cycle time
- Reasons for skepticism
- We can't quantify anything complex enough to be interesting.
- We don't know enough to disambiguate classes of action
- We can't separate high from low level effects a priori.
- But, analysis like this does specify limits.
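A minimal sketch of what the timings license: composing them by plain
addition, which is the model's own assumption. Mapping the stages onto
the numbers quoted above is an interpretation, not something the model
human processor fixes uniquely.

    # Model-human-processor timings, in msec (quoted above).
    PERCEIVE = 100   # temporal integration of the stimulus
    COGNIZE  =  70   # minimal cognitive unit
    ACT      =  70   # minimal motor unit

    # Simple reaction (see a light, press a button): one cycle of each.
    print("simple reaction:", PERCEIVE + COGNIZE + ACT, "msec")  # 240

    # A choice reaction needs at least one extra cognitive cycle.
    print("choice reaction:", PERCEIVE + 2 * COGNIZE + ACT, "msec")  # 310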