CS789, Spring 2005

Lecture notes, Week 1

History of User Interfaces

The idea behind this history: suppose that we raise the threshold for importance high enough that only one important new concept has occurred in each of the five decades since the computer was invented. What are they? How have they changed user interfaces?

But first we need to define some terms.

UI, UIMS

Individual applications have user interfaces (UIs). A user interface can be instantiated or managed by a user interface management system (UIMS). Making these can be considered an engineering discipline: designing useful artifacts.

HCI, CHI

A UI should fit its users well. To make this happen the design process requires knowledge about how computers and humans interact, which is created by a scientific process that investigates humans & computers as they interact, seeking universal laws that describe the interaction.


0. The Computer

The computer executes a sequence of instructions, manipulating its internal state, consuming input and producing output.

To do so it requires

Providing this were the following interface innovations

N.B. At this point the development of computers bifurcated!

Branch one: the engineering branch

Branch two: the computer as programmable object

Single programmers are able to create these superb artifacts out of the most malleable substance yet discovered.

N. B. The branches will be seen to be rejoining one another toward the end of computer evolution.


1. The Compiler

The compiler is used to assemble program fragments into a program.

Using it effectively requires

In providing this we have the following problems to solve

They are solved by the following interface innovations


2. The Operating System

Operating systems provide the ability to manage on-line, multiple (independent) computations (multi-programming).

Using them effectively requires

In doing this we have the following problems to solve

They are solved by the following interface innovations

N. B. Changes associated with the invention of operating systems influence how the compiler is used. (E.g., the teletype makes possible the screen editor.) The recursive nature of computation means that examining a program (the listing) or changing it (the card-deck) can now be done by running a program under the supervision of the operating system. What's important here is that interface innovation is cumulative.


2a. The Database

Databases provide wide access to schema-structured data. This is sometimes called client-server computing. Each individual record of data is interpreted by means of a schema that is global to the entire system.
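To make "schema-structured" concrete, here is a minimal sketch (in Python, with an invented schema and invented records, purely for illustration): each record is a flat tuple of values, and the one global schema tells every client how to read it.

    # Illustrative only: a made-up "employees" schema shared by every client.
    # The schema is global; a bare record is meaningless without it.
    SCHEMA = ("emp_id", "name", "department", "salary")

    records = [
        (101, "A. Turing", "Research", 52000),
        (102, "G. Hopper", "Compilers", 55000),
    ]

    def as_dict(record, schema=SCHEMA):
        """Interpret a bare record through the global schema."""
        return dict(zip(schema, record))

    for r in records:
        print(as_dict(r))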

Using them effectively requires

In doing this we have the following problems to solve

They are solved by the following interface innovations


4. Word processors, spreadsheets, &c. "Personal productivity tools"

These are tools for manipulating documents that are semi-structured. The program determines part of the structure; the user determines the rest. The first part of the structure is global (to the program); the second part is local (to the document).

Using them effectively requires

In doing this we have the following problems to solve

They are solved by the following interface innovations


4. Network-wide applications

These are applications, like web-browsing, that require wide-area access to semi-structured data. This data is very similar to the information stored in the human brain; to use it effectively it is necessary to compete successfully against human memory.

Using them effectively requires

In doing this we have the following problems to solve

The interface innovations we expect to support this are


CS689, Winter 2001, Lecture notes, Week 1

Design Methodology

Designing an interactive program

Buzzword: User Centred Design

  1. Marketing: identifies opportunity (opportunity = {user, task})
  2. UCD group: determines what the interface will be (issues design document)
  3. System architect: creates the software architecture (issues requirement specification)
  4. Implementation team: implements something that fulfills the requirement specification
  5. Testing team: makes sure that the implementation actually fulfills the requirement specification, possible iteration through step 4.
  6. Usability group: ensures that the product fulfills the design document, possible iteration through step 2.
  7. Sales group: tests to find out if what has been produced is actually worth buying, possible iteration through step 1.
It seems that steps 2 and 6 are the domain of user interface designers/testers. What do they amount to in practice? It also seems that the people who carry out steps 3 to 5 have to know enough about interface design and testing to avoid subverting steps 2 and 6.

How does this procedure fail when we try to use it in the real world?

User interface design

Things that can go on during the design of an interface

  1. Inspiration: take a walk, think about it, have a great idea. What do you think about to help you have the great idea? Think of this as a collection of things that you will assemble into the interface.
  2. Functional design methods. Explicitly list the functions that are performed by the system for which you are designing the interface. Create a set of interacting components capable of performing these functions; give each component a counterpart in the interface.
  3. Formal design methods. Map the interface onto a formal structure like a state machine, prove theorems about it (a small sketch follows this list).
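For example (a hypothetical sketch, not a method prescribed in these notes), a small "save file" dialog can be modelled as a finite state machine, and a simple property of the interface -- every state is reachable, so no part of the dialog is dead -- can be checked mechanically. The states and events below are invented for the example.

    # Illustrative only: a hypothetical "save file" dialog as a state machine.
    TRANSITIONS = {
        ("Editing",    "save")  : "NamePrompt",
        ("NamePrompt", "ok")    : "Saved",
        ("NamePrompt", "cancel"): "Editing",
        ("Saved",      "edit")  : "Editing",
    }

    def reachable(start="Editing"):
        """All states the user can reach from the start state."""
        seen, frontier = {start}, [start]
        while frontier:
            state = frontier.pop()
            for (src, _event), dst in TRANSITIONS.items():
                if src == state and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

    # A small "theorem" about the interface: no dead states.
    assert reachable() == {"Editing", "NamePrompt", "Saved"}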
As the interface design starts to take shape the designer is likely to want to show it to users. There are a variety of ways to create concrete instances of the interface for early user testing.
  1. Interface toolkits based on scripting, such as Tcl/Tk (see the sketch after this list).
  2. Rapid prototyping environments, such as Macromind director. Make something that can be observed and tested.
  3. Wizard of Oz techniques. A human plays the part of the computer, following scripts produced from an evolving design document.
  4. Paper and marker mockups. Implement the interface using sketches, have the designer manipulate them in mock interface interactions.
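As an illustration of the first option: the sketch below uses tkinter, Python's binding to the Tcl/Tk toolkit mentioned above, to produce a throwaway prototype in a few lines. The "search box" it builds is invented for the example; the point is only to have something a user can be watched interacting with, not a product.

    # Illustrative only: a minimal throwaway prototype using tkinter,
    # Python's binding to the Tcl/Tk toolkit.
    import tkinter as tk

    root = tk.Tk()
    root.title("Prototype: search box")

    entry = tk.Entry(root, width=30)
    entry.pack(padx=10, pady=5)

    result = tk.Label(root, text="(results appear here)")
    result.pack(padx=10, pady=5)

    def on_search():
        # No real back end -- we only want to observe the interaction.
        result.config(text=f"You searched for: {entry.get()!r}")

    tk.Button(root, text="Search", command=on_search).pack(pady=5)

    root.mainloop()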
As the design starts to take shape we try to avoid obvious blunders. The main way of doing so is by comparing interface component properties against empirically-derived rules. These come from ergonomics, human factors and engineering psychology. There are a whole lot of books on human perceptual/cognitive/motor abilities on your shelves. Consider each action that a user might perform in the interface and check the implementation against human characteristics.
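One well-known example of such an empirically-derived rule (not discussed further in these notes, but typical of what those handbooks contain) is Fitts's law, which predicts pointing time from target distance and width. The sketch below applies it to a hypothetical toolbar button; the coefficients are placeholders, not measured values.

    import math

    def fitts_time(distance, width, a=0.1, b=0.15):
        """Predicted pointing time (seconds) under Fitts's law.

        a and b are device- and user-dependent constants; the defaults
        here are placeholders, not measured data.
        """
        return a + b * math.log2(distance / width + 1)

    # Check a hypothetical toolbar button: 12 px wide, 600 px from the cursor.
    print(f"predicted acquisition time: {fitts_time(600, 12):.2f} s")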

In fact only inspiration is cheap and easy enough for most interface design. More elaborate design methods are useful for resolving problems and disagreements. A designer with good inspiration must be able to put themselves into the place of a range of users using only their imagination. (That's the way scenarios get created and elaborated.) What would you do in order to develop that ability in yourself?

User interface testing

Some methods are more useful in the design phase; others in the usability testing phase.
  1. Try it out. Does it do the right things? Does it crash or lock up?
  2. Objective performance tests: psychological experimentation
  3. Videotaped (or audiotaped) system usage: anthropological observation
  4. Questionnaires, self-report: sociological surveying
  5. Cognitive walkthroughs: engineering design practice
Except for the first alternative these are all very costly, and have to be aimed at the problem points in the design. What do you have to do in order to be good at the first one? (Note that every programmer who writes even a single line of interface code has to do it.)

When does this design method break? And why?

These methodologies are, as it turns out, idealizations of what happens in practice. I can identify two very important deficiencies:
  1. Designs change (specification slip), varying from 30% to 200%. Many interface decisions end up being made by programmers; the last thing that is really tested is the design document.
  2. Users change as they use an interface. The interface will be used in one way by a new user (the decision to purchase), and a different way by an experienced user (the decision to continue using). This is human-computer co-evolution.
To understand the first point we have to think about why designs change. The reasons designs change all feed back on one another. Examples?

To understand the second point we have to think a little bit about how users change.


Learning

What kinds of things do users know?

They know

  1. facts, "Ottawa is the capital of Canada", knowing that
  2. procedures, putting a spoon into your mouth without hitting your lips, knowing how
How do we learn these different types of knowledge?

"Knowing how" can be divided into four different categories

  1. Operational, perceptual/motor capabilities: how to move fingers and aim them at keys
  2. Rule-based, motor programs triggered by categorical perception: to end input hit ESC (to start command hit ESC)
  3. Cognitive procedures, sequences of rules that solve stereotyped problems: to look at every ifdef in vi, type /ifdef, look at what you got, then type / repeatedly, watching for reappearance of the first occurrence.
  4. Problem analysis and solution, creation of new cognitive procedures: to find the bug check all the ifdefs
How do we learn each one of these?

Here's another way of classifying things that users know

  1. Recognition: Which of "Exit" or "Quit" exits from WordPerfect?
  2. Recall: "lc" gives the contents of the current directory at Waterloo
For recall we can generate the knowledge from nothing; for recognition we can recognize the knowledge when we see it. How do we learn each one of these?

Ways of changing knowledge = "learning"

Techniques of learning
  1. Trial-and-error: putting on a nut, riding a bicycle
  2. Practice = rehearsal: using a hammer
  3. How-to-do it descriptions: cookbook recipes
  4. Modification of examples: making html pages
All these take time. How much time? What incentives are we (users) normally given to spend the time?

And an orthogonal classification

Styles of learning
  1. Search, result-motivated: goals, how to set the time on the clock radio in a car
  2. Browsing, exploration: curiosity, by-product, where does that tunnel go?
What are the rewards that normally induce us (users) to undertake one or another of these learning paths? Search is possible when you already know a lot; exploration is the only thing that's possible when you know very little. But, exploration pays off more as you learn more. Why?

And another orthogonal classification

Sources of learning
  1. Manuals, help pages: recipes, principles, examples.
  2. Experimentation: empirical information, observations.
  3. Other users: solutions, analogies, models.
As you enlarge the community of users to which you belong the last source becomes more and more important. Note the effect of positive feedback.

Note. Testing the solution is a normal termination step in many learning processes. How many times should a solution be tested? What determines how many times a particular piece of learning should be tested?

