We work towards goals, from simple to very complex, known and unknown, throughout our lives. As we aim for our goals we navigate a rich natural, social and artificial environment. Within these environments we sense cues and act on them. And when we act at the most basic level in the world, we are often responding to affordances we pick up in our environment.
The term affordance refers to perception. As we use our senses to navigate the physical landscape, we perceive in the world potential for action. We see steps and perceive moving up, we hear the volume of voices change from soft to loud and perceive objects moving closer in space, and we feel a lever give to the weight of our hand and infer pushing down. This theory has gained much attention within interface design, for providing intuitive ways for people to manipulate abstract objects on-screen is a great challenge.
Physical objects and tools provide both real and perceived affordances. A toaster's lever affords pushing down, and we perceive that pushing it down sets the machine in motion. Computers as they exist today offer many affordances as well – buttons of various sizes to be pressed, objects dragged by brushing fingers across pads or screens, a mouse that moves, clicks and scrolls. However, most often we use these objects to manipulate abstractions on a two-dimensional screen, and these abstract objects offer no true physical affordances.
When manipulating objects on a computer screen, we work with perceived affordances, and often these perceived affordances have some grounding in the physical world. We see a graphical rendition of a raised object and can perceive it as something to push down – like a button in the real world. In his 1991 article “Technology Affordances,” William Gaver discusses sequential and nested perceptible affordances. A perceived sequential affordance is one perceived affordance that leads to another – his example being that “grabbing” an on-screen object leads to “dragging” it to, say, the graphical representation of a waste bin. Perceived sequential affordances deal with time and logical order, while nested perceived affordances deal with the layout and grouping of objects on-screen. Only the appearance of an unfinished block of text and a scrollbar together suggests that we can scroll down to reveal more text.
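Gaver's grab-then-drag sequence can be sketched as a small state machine – a hypothetical illustration of the idea that each perceived action reveals the next, not a formalism from his article. The class and state names here are invented for the example.

```python
# Toy model of a sequential affordance: grabbing an on-screen
# object affords dragging it, and dragging affords dropping it
# onto a target such as a waste bin.

class DragInteraction:
    def __init__(self):
        self.state = "idle"  # nothing grabbed yet

    def grab(self):
        # Grabbing is only perceivable/possible from the idle state.
        if self.state == "idle":
            self.state = "grabbed"
        return self.state

    def drag(self):
        # Dragging only becomes available once the object is grabbed.
        if self.state == "grabbed":
            self.state = "dragging"
        return self.state

    def drop_on(self, target):
        # Dropping completes the sequence; dropping on the waste bin
        # deletes the object and returns the interaction to idle.
        if self.state == "dragging" and target == "waste bin":
            self.state = "idle"
            return "deleted"
        return self.state
```

The point of the sketch is simply that the affordances unfold in order over time: `drag` does nothing until `grab` has happened, just as the perception of dragging only arises after the object is grabbed.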
Gaver clearly states he is referring to perceived affordances (rather than an affordance proper, a term coined by perceptual psychologist J. J. Gibson that refers to physical objects) when discussing interfaces. However, in an article written eight years later, Don Norman counters this usage of the term affordance. Like Gaver, he insists upon the term perceived affordance; however, he feels that most often we are not dealing with perceived affordances on-screen. Instead, we get visual feedback that uses cultural conventions (a shared symbolic language) that is learned rather than simply perceived. These conventions act as constraints in designing an interface, and can only truly be tested by observing people using interfaces in the real world.
Manipulating abstract objects on a 2-D surface (from pencil and paper to the computer screen) is a powerful cognitive tool, and we will continue to deal with interfaces whatever terms we use to describe their design. But I feel we respond most readily to physical affordances, and as technology evolves I think we'll see more physical manipulation of data and more embedded technology. We exist and act towards our goals in the physical world, and the more motion and senses we use towards those goals, the more sophisticated our actions can become. While we will most likely always have to deal with interfaces, the closer we can embed technology into our acting in the world, the more intuitive – like a true perceived affordance – our technology will be.