THE HISTORY AND DEVELOPMENT OF THE GRAPHICAL USER INTERFACE
AN INDEPENDENT STUDY PROJECT (ISP) SUBMITTED TO THE FACULTY OF THE SCHOOL OF ARTS AND SCIENCES
OF COLUMBIA PACIFIC UNIVERSITY IN CANDIDACY FOR THE DEGREE OF
BACHELOR OF SCIENCE
BY RAYMOND JOSEPH WILSON
DECLARATION OF AUTHENTICITY
I declare that all material presented to Columbia Pacific University is my own work, or fully and specifically acknowledged wherever adapted from other sources. I understand that, if at any time it is shown that I have significantly misrepresented material presented to the University, any degree or credits awarded to me on the basis of that material may be revoked.
___________________________________ ____________
S. Pal Asija — Faculty Mentor — __________________ ________
Dr. Peter Pick — Dean, A&S — __________________ ________
Table of Contents
Abstract
Introduction
Graphics: The Language of the Mind
Mnemonics and Memory Enhancement
The Three Mentalities: Kinesthetic, Visual and Symbolic
The Psychology of Graphical User Interface Design
The History of Graphical User Interface Development
Goals and Principles Driving Graphical User Interface Designs
Thoughts on Future Graphical User Interface Concepts
Conclusions
BIBLIOGRAPHY
History & Development of the GUI by Raymond J. Wilson
The History and Development of the Graphical User Interface, by Raymond J. Wilson (B.S., Arts & Sciences; mentor: S. Pal Asija, J.D.)
This paper presents a discussion of Graphical User Interfaces (GUIs) from several
points of view. It discusses the powerful role played by graphic images in human
communication, thought processes, and memory. The important role of cognitive
psychology in GUI design is shown and the psychological factors which must be
considered in design of GUIs are listed. A detailed discussion of the history of GUI
development is presented along with discussion of the goals and principles of GUI which
have evolved over the years. The author concludes with some thoughts on the impact of
future developments in computer technology on GUI design.
Introduction
The purpose of this paper is to explore the history and development of the graphical user
interface (GUI, pronounced “gooey”) from several perspectives. It covers the graphical
nature of human thought, the role mnemonics play in aiding memory retention, the
various mentalities that exist in all of us, their interactions, and their relationship to GUI
design. The power of graphic images to make deep impressions on the minds of viewers
has long been recognized (1, 2, 3). The very language of the mind is often thought of as
graphical in nature (3, 4). Words cannot communicate ideas to people as quickly as
graphic images can, and are not as easily remembered (1, 2, 17, 18). GUIs have brought
about substantial gains in user productivity due to their intuitive mode of operation (19).
This paper also presents the contributions of GUI visionaries such as Ivan Sutherland,
Douglas Engelbart, Alan Kay, and others. As GUI development has evolved, a common
set of design goals and principles has been assembled gradually (12, 13, 16, 17, 19, 20).
The simpler and less obtrusive a GUI is, the greater its value to potential users (12).
Additionally, modern day graphical interface components and their advantages and
disadvantages are discussed, as well as the factors that make a graphical human-user
interface natural and inviting. The modern GUI has been
developed to tap into the powerful visual processing ability of the mind (9-11, 13, 19, 20,
23). Studies in the field of cognitive psychology have shown that GUIs can actually be
designed to stimulate learning on the part of users (9-11). Vital to GUI design is the
importance of considering the psychological characteristics of the specific group of users
for whom it is being designed (19). The modern GUI has a long and interesting history of
development (9, 11, 20). This paper concludes with a discussion of the future of
computer interface designs and presents concepts such as embodied virtuality and
virtual reality. The quest for the perfect GUI is a long way from complete (12, 19). As
computer technology continues to advance, entirely new GUI paradigms will
undoubtedly become necessary (19, 24).
GUIs provide a powerful way to manipulate and control computers and have placed this
power into the hands of the millions of computer users. Microsoft’s Windows (an
extremely popular GUI first released in 1985) has come to be used on 85% of personal
computers sold today. Apple’s Macintosh (a GUI equipped computer released in 1984)
continues to have a significantly smaller (10% market share) but extremely loyal
following. The recent release of Microsoft's Windows 95 (code-named Chicago during
development) was attended by much pomp and circumstance.
NEW YORK - With free midnight coffee and doughnuts in Denver and free morning newspapers in London, with special colored lights on the Empire State Building and a blitz of one-time, before-dawn sale prices across the country, the moguls of Microsoft let loose last night with the splashiest, most frenzied, most expensive introduction of a computer product in the industry's history. . . .

Microsoft said it had spent more than $200 million on the campaign to push Windows 95, and analysts believe the operating system may bring in $700 million to $1.45 billion to the company, based in Redmond, Wash.1
The reason for this is not just that people want to possess the latest and greatest
software technology, but also the control and usefulness provided by the GUI.
Today, savvy software companies gladly shell out $200 million, just to advertise a new
GUI. Back in the days when Dr. Alan Kay’s Learning Research Group (LRG) was
1. Carey Goldberg of The New York Times, "Windows on the World: Microsoft Goes Grand, Global, Gaga," The Denver Post, 24 August 1995.
developing interface concepts that make today’s GUIs so popular, he had to justify his
development work to short-sighted planners every step of the way.
Right around this time we were involved in another conflict with Xerox management, in particular with Don Pendery the head "planner". He really didn't understand what we were talking about and instead was interested in "trends" and "what was the future going to be like" and how could Xerox "defend against it". I got so upset I said to him, "Look. The best way to predict the future is to invent it. Don't worry about what all those other people might do, this is the century in which almost any clear vision can be made!" He remained unconvinced, and that led to the famous "Pendery Papers for PARC Planning Purposes", a collection of essays on various aspects of the future. Mine proposed a version of the notebook as a "Display Transducer", and Jim Mitchell's was entitled "NLS on a Minicomputer".2 (italics added)
Computer science pioneers such as Ivan Sutherland, Douglas Engelbart, and Xerox
PARC’s Alan Kay invented a future that has endured for twenty years. Xerox PARC is
very involved in inventing the next generation of computers and user interface
technology. Concepts spawned at Xerox PARC, such as "ubiquitous computing" and
"embodied virtuality," will undoubtedly introduce the fourth generation of computers in
the very near future: computers invisibly woven into the fabric of our everyday
living, learning, working, and playing.
Let us now begin our consideration of the history and development of the graphical user
interface by considering the topic of “graphics” itself.
Graphics: The Language of the Mind
The fact that the mind is able to comprehend graphical images more quickly than
symbolic letters and numbers is certainly not a new concept. Aristotle (384-322 BC) was
able to put into words one of the guiding principles of GUI design.2

2. Alan C. Kay, "The Early History of Smalltalk," ACM SIGPLAN Notices 28, no. 3 (March 1993).
Spoken words are the symbols of mental experience and written words are the symbols of spoken words. Just as all men have not the same writing, so all men have not the same speech sounds, but the mental experiences, which these directly symbolize, are the same for all, as also are those things of which our experiences are the images.3 (italics added)
In essence, Aristotle was saying that the language of graphic images is universal. If you
show a man the word “printer” in a language not understood by him, the idea of a printer
will not be communicated to him. However, if you show the same man a reasonable
likeness of a printer (assuming he has seen a printer before), he will recognize it
immediately. Presenting users with graphic images of things reduces the time to interpret
their meaning and aids in memory retention (both long-term and short-term). Edward
Booth-Clibborn and Daniele Baroni comment on the power of graphic images—in
conjunction with the context in which they occur—and their ability to rapidly transfer
ideas into the minds of people. Their comments regarding the tendency of the human
mind to think metaphorically echo Aristotle’s words.
In looking at a sign we can, for analytic purposes, say that it has three aspects. It has form (how it appears), it has objective definition (it is a cross or a dagger), and finally it has significance (a meaning in the particular context in which it appears). A written X, for instance, may have an identical form (it is also called a cross); it may mean an implied rejection (if it is used to mark an answer in an exam paper); or it may mean an acceptance (if it is by a candidate's name on a voting form). In each of these examples the same sign takes on meaning through its relation to the context in which it appears: an accepted conventional system of meanings in which the sign finds its place. Fundamentally, a sign is a way of modifying the world in which we live. It is part of a system of ordering reality into meaningful units which are all related through a larger system of ideas about the world.
. . . It appears to be characteristic of the human mind that it thinks in terms of analogies or metaphors. Thus, some objects are more readily understood in terms of others. . . .

It is out of this metaphorical aspect of human imagination that the symbol is born. The bare mark is elaborated further into a series of meanings. A line of dried clay on a rock represents the moon. The moon (Selene) is the sister of the sun (Helios). In a myth the moon transgressed divine moral law and therefore stands for profane sexuality. So the pattern of meaning spreads out from the clay mark on the rock to stand for a whole series of interrelated cosmic forces. The act of drawing calls on these forces and harnesses their power.

3. Aristotle, "On Interpretation," in Great Books of the Western World, ed. Mortimer J. Adler (Chicago: Encyclopedia Britannica, Inc., 1990), vol. 7, 25.
As a word finds its meaning in the context of a sentence, so the symbol finds its meaning in the context of other symbols with which it is associated. The simplest signs have so much meaning encoded in them that a basic symbolic expression (such as a cross or a swastika) is enough to trigger an elaborate train of associations and significances.4
Man’s capacity to process complex language, both written and spoken, has been of great
benefit. Without it, the scientific accomplishments of our age would have been an
impossibility. The use of numbers and letters is extremely important in order to
communicate complex thoughts from one mind to another, but within one’s mind the
language of graphics plays a very important part. Let’s explore why this is significant in
regard to graphical user interfaces. The computer is essentially a tool, and just as we use
tools as extensions of our bodies to help us perform physical feats we couldn’t do
otherwise, so the computer is a tool for the extension of our mind. GUIs attempt to
reduce the complexity involved in using the computer so that the work the computer can
help us perform becomes the focus and not the interface. A good graphical interface
fades into the background because it requires so little mental effort to manipulate. A
graphical interface is the equivalent of a mental lever. It can help us manipulate great
quantities of information with very little effort.
Booth-Clibborn and Baroni comment on the insight that was provided by symbolic (symbolic in
the graphical sense) thought in the case of chemist August Kekule von Stradonitz.
Additionally, they believe that the graphical nature of symbolic (pictorial) thought is less
constraining than the rational alpha-numerics of written language.
4 ? Edward Booth-Clibborn and Daniele Baroni, The Language of Graphics (New York: Harry N. Abrams, Inc., Publishers, 1980), pg. 9.
Page 6
History & Development of the GUI by Raymond J. Wilson
Therefore, when looking at symbols (marks, signs, or pictographs) it is important to remember that they are a product of a symbolic mode of thought. It is a different way of communicating experience from the logical or numerical modes of expression. It is, in a sense, the antithetical mode. For whereas language and numbers define and separate, symbols are suggestive of the similarities, the common identity of phenomena.

It is not that one mode of expression is superior to another. Even in pure science the symbolic mode is important in providing the sudden flashes of insight and the resolution of seeming opposites that offer a new dimension of understanding. Jung cites the celebrated example of the nineteenth-century German chemist August Kekule von Stradonitz. Having tried in vain, by applied reason, to deduce the structure of benzene, Kekule was confronted in a dream by the vision of Ouroboros, the mythical serpent that continually grows by feeding on its own tail, thus maintaining a constant size. Returning to his problem with this symbol, Kekule was able to demonstrate that the structure of benzene is a closed carbon ring. The intuitive image, apprehended in a dream, had supplied him with the solution his rational mind could not reach.

Although it was a common belief in the eighteenth and nineteenth centuries that rationality had superseded the more primitive perceptions of symbolism, Freud and Jung were able to show that symbolism (the collapsing of one meaning into another) is a constant part of daily life. Rationality, for all its valuable deductive power, seems to impose a strain on the human mind which the symbolic mode can release. The way forward is, as it was for the men of the Renaissance, to achieve a balance between rationality and symbolism, to become aware that there is a fundamental contradiction in all that we experience, and that everything is at once truly unique and at the same time simply one facet of an undifferentiated unity.5
The idea that symbolic graphical images and their context help us integrate the
experiences of daily life into a whole is an important concept which we will consider
when we talk about the importance of mnemonics and memory.
Many thinkers of the past have concluded that ideas themselves are actually graphic
images in the brain. People “see” ideas as images in the context of their experiences.
There has even been talk among brain researchers that the brain works on a holographic
principle—thoughts being "displayed" within the brain. René Descartes, whose ideas
shaped scientific thought for centuries (and continue to do so), eloquently explained this
concept of "ideas as mental imagery" in his "Objections with Replies."
5. Ibid., 23.
In reference to the third Meditation-concerning God-some of these (thoughts of man) are, so to speak, images of things, and to these alone is the title "idea" properly applied; examples are my thought of a man, or of a Chimera, of Heavens, of an Angel, or [even] of God. When I think of a man, I recognize an idea, or image, with figure and colour as its constituents; and concerning this I can raise the question whether or not it is the likeness of a man. So it is also when I think of the heavens. When I think of the chimera, I recognize an idea or image, being able at the same time to doubt whether or not it is the likeness of an animal, which, though it does not exist, may yet exist or has at some other time existed. (italics added)6
Here the meaning assigned to the term idea is merely that of images depicted in the corporeal imagination; and, that being agreed on, it is easy for my critic to prove that there is no proper idea of Angel or of God. But I have everywhere, from time to time, and principally in this place, shown that I take the term idea to stand for whatever the mind directly perceives; and so when I will or when I fear, since at the same time I perceive that I will and fear, that very volition and apprehension are ranked among my ideas.7 (italics added)
We have seen how graphic images may be considered a universal language. Even though
persons may speak different languages and not understand each other’s speech, the
observed experiences of people are the same. We have also discovered that graphic
images can evoke powerful thought associations and memories that words alone cannot.
Graphic images, in conjunction with their context, form the basis of our experience.
Humans actually think in terms of graphic images. By leveraging the powerful and
highly integrated graphic processing ability of the human mind, GUI designers can place
more computing power into the hands of users. Properly designed GUIs can open up the
power of the computer by being transparent to users and allowing them to focus on the
work being done instead of on the computer.
Mnemonics and Memory Enhancement
6. René Descartes, "The Third Set of Objections with the Author's Replies," in Great Books of the Western World, ed. Mortimer J. Adler (Chicago: Encyclopedia Britannica, Inc., 1990), vol. 28, 363.
7. Ibid., 363.
Another powerful characteristic of GUIs is their use of icons, which act as mnemonic
aids. Graphic image processing is learned at a very early stage of human development
and is highly integrated into the human thinking process. It takes much less mental
energy to recognize a picture of an item or scene than it does to recognize and decode a
word or paragraph describing the same item or scene. Additionally, a picture can evoke a
chain of thought much more readily than words alone. In today's common graphics
market a picture is valued at one thousand words ... not a bad return. There is a memory
technique for memorizing lists of items which is highly imaginative. One associates a
very visually memorable graphic image, called a peg, with each item of the list and joins
it with the graphic image of the next item in the list. Here is an example of how someone
might memorize the following list:
· Ice-cream cone
· Hair brush
· Pickles
· Cookies
· Envelopes
I saw a giant ice-cream cone walking down the street; he was brushing his hair with a
large hairbrush that looked just like a bright green pickle. He was dumping giant
cookies out of an envelope; it was incredible!
This memorization technique borrows resources from the highly integrated visual
processing functions of the brain and does not rely on simply memorizing a list of
words.
Well designed GUIs do the same thing. You will notice that in most of today’s
popular interfaces the same icons are used to represent similar items across programs,
platforms, even across products by different manufacturers. Similarity among icons
helps a person remember what does what, without wasting time, experiencing
anxiety (which is very counterproductive), or having to ask, "I'm using
system X; now what do they use to invoke the print function again?"
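The consistency principle described above can be expressed as a simple design sketch. The following Python fragment is purely illustrative (all names, icon files, and functions here are hypothetical, not drawn from any real GUI toolkit): a single shared registry maps abstract actions to icon images, so every application that consults it shows the user the same mnemonic picture for the same action.

```python
# Hypothetical sketch: a shared icon registry keeps the same mnemonic
# image associated with the same action across applications, so users
# never have to relearn "what-does-what" when they switch programs.

# One canonical mapping from abstract actions to icon image names.
SHARED_ICONS = {
    "print": "printer.png",
    "save": "floppy-disk.png",
    "undo": "curved-arrow-left.png",
    "bold": "letter-b.png",
}

def toolbar_for(actions):
    """Build a list of (action, icon) pairs for one application's toolbar.

    Any application calling this sees the identical icon for a given
    action; unknown actions are simply skipped.
    """
    return [(a, SHARED_ICONS[a]) for a in actions if a in SHARED_ICONS]

# Two different "applications" request their toolbars:
editor_toolbar = toolbar_for(["save", "undo", "bold"])
viewer_toolbar = toolbar_for(["print", "save"])

# Both applications present the same image for "save".
assert dict(editor_toolbar)["save"] == dict(viewer_toolbar)["save"]
```

The point of the sketch is the single source of truth: because both toolbars are derived from one registry, the "save" icon cannot drift apart between programs, which is exactly the cross-program similarity the text credits with reducing user anxiety.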
When we look into the early history of philosophy we see that the great thinkers of
the past recognized that a likeness of a presentation had great power to guide a person
through his memory in order to remember related things. This is related to the fact
that our mental knowledge base is dominated by a mass of graphically symbolic
information which is related by context. It is instructive to consider Aristotle’s
thoughts on memory and remembering and their relation to GUI design. Notice the
expressions: movements and starting point, in relation to remembering—and to
remembering something one has forgotten.
Mnemonic exercises aim at preserving one's memory of something by repeatedly reminding him of it; which implies nothing else [on the learner's part] than the frequent contemplation of something [viz. the 'mnemonic', whatever it may be] as a likeness, and not as out of relation. . . .As regards the question, therefore, what memory or remembering is, it has now been shown that it is the state of a presentation, related as a likeness to that of which it is a presentation;8 (italics added)
Recollecting differs also in this respect from relearning, that one who recollects will be able, somehow, to move, solely by his own effort, to the term next after the starting-point. When one cannot do this of himself, but only by external assistance, he no longer remembers [i.e. he has totally forgotten, and therefore of course cannot recollect]. It often happens that, though a person cannot recollect at the moment, yet by seeking he can do so, and discovers what he seeks. This he succeeds in doing by setting up many movements, until finally he excites one of a kind which will have for its sequel [the fact he wishes to recollect]. For remembering [which is the condicio sine qua non of recollecting] is the existence, potentially, in the mind of a movement capable of stimulating it to the desired movement, and this, as has been said, in such a way that the person should be moved [prompted to recollection] from within himself, i.e. in consequence of movements wholly contained within himself . . ..
8. Aristotle, "On Memory and Reminiscence," in Great Books of the Western World, ed. Mortimer J. Adler (Chicago: Encyclopedia Britannica, Inc., 1990), vol. 7, 692.
But one must get hold of a starting-point. This explains why it is that persons are supposed to recollect sometimes by starting from mnemonic loci. The cause is that they pass swiftly in thought from one point to another, e.g. from milk to white, from white to mist, and thence to moist, from which one remembers Autumn [the 'season of mists'], if this be the season he is trying to recollect . . .. It seems true in general that the middle point also among all things is a good mnemonic starting-point from which to reach any of them. For if one does not recollect before, he will do so when he has come to this, or, if not, nothing can help him; . . .9
Visual mnemonic icons are memory starting points, i.e. they place a picture directly
into the mind and start the person rolling down the road to associating the iconic
image with previously learned information—they help you remember. It makes no
sense to slow computer users down by making them have to remember starting points
for themselves—GUIs show the user the starting point so that they can quickly get on
the road to performing the desired task or function. How many nails would get driven
if each time a carpenter wanted to use his hammer, he had to consciously think to
himself, "Now ... what part of the hammer do I hold again?" Remembering how to
run a make, load or save files, build a software version, turn on bold fonts, undo
an action, or change a paragraph's justification is a waste of mental energy.
Mnemonic icons used in GUIs show you pictures that guide you into doing these
things without having to think about it. This is one of the major advantages provided
by a well designed GUI.
We will be introduced to several GUI pioneers in the “History of the GUI” section of
this paper, but one name in particular stood out in my research: Dr. Alan Kay. Let's
consider the contribution made to the look and feel of present-day GUIs by Dr. Alan
Kay and his associates in the Learning Research Group at Xerox PARC.
9. Ibid., 693.
The Three Mentalities: Kinesthetic, Visual and Symbolic
Something on the order of an epiphany was experienced by Dr. Alan Kay back in the
late 1960s. It was at that time that the idea of “the computer as media” dawned on
him. The “computer as media” idea contrasted with the conventional “computer as
car” metaphor which intimated that very young children should be kept from
computers. Dr. Kay felt that if computers could be made usable by children, that
benefits would certainly accrue to adults. Research on the subject of learning
(especially learning by children) led Dr. Kay to a meeting with Seymour Papert (a
noted computer scientist), who was developing LOGO, a computer language that could
be learned by children. Papert himself had done research into the
writings of Jean Piaget, a noted cognitive psychologist who specialized in study of
the learning process in children. This meeting spurred Kay to do further research into
cognitive psychology and eventually to examine the work of Jerome Bruner, who had
furthered the studies of Jean Piaget. What did Alan Kay discover that influenced him
so profoundly?
How had Papert learned about the nature of children's thought? From Jean Piaget, the doyen of European cognitive psychologists . . .. One of his most important contributions is the idea that children go through several distinctive intellectual stages as they develop from birth to maturity. Much can be accomplished if the nature of the stages is heeded and much grief to the child can be caused if the stages are ignored. Piaget noticed a kinesthetic stage, a visual stage, and a symbolic stage. An example is that children in the visual stage, when shown a squat glass of water poured into a tall thin one, will say there is more water in the tall thin one even though the pouring was done in front of their eyes.
. . . I discovered Jerome Bruner's Towards a Theory of Instruction [1966]. He had repeated and verified many of Piaget's results, and in the process came up with a different and much more powerful way to interpret what Piaget had seen. For example, in the water-pouring experiment, after the child asserted there was more water in the tall thin glass, Bruner covered it up with a card
and asked again. This time the child said, "There must be the same because where would the water go?" When Bruner took away the card to again reveal the tall thin glass, the child immediately changed back to saying there was more water.
When the cardboard was again interposed the child changed yet again. It was as though one set of processes was doing the reasoning when the child could see the situation, and another set was employed when the child could not. Bruner's interpretation of experiments like these is one of the most important foundations for human-related design. Our mentalium seems to be made up of multiple separate mentalities with very different characteristics. They reason differently, have different skills, and often are in conflict. Bruner identified a separate mentality with each of Piaget's stages: he called them enactive, iconic, and symbolic. While not ignoring the existence of other mentalities, he concentrated on these three to come up with what are still some of the strongest ideas for creating learning-rich environments.10
The concept of the computer as media finally crystallized in Dr. Kay’s mind. A
computer, viewed as media, must be accessible on all three of these mental learning
levels (kinesthetic, visual and symbolic)—just as accessible as a sheet of paper and a
pencil. Dr. Kay felt that this type of interface, by its very nature, would promote
computer literacy. Not only would it encourage the use of computers by children, but
it would allow all users to grow from novice to expert as their ability to deal with the
computer in more symbolic terms developed. Thus the slogan coined by Dr. Kay:
Doing with Images makes Symbols
The slogan also implies, as did Bruner, that one should start with and be grounded in the concrete "Doing with Images," and be carried into the more abstract "makes Symbols." All the ingredients were already around. We were ready to notice what the theoretical frameworks from other fields of Bruner, Gallwey, and others were trying to tell us. What is surprising to me is just how long it took to put it all together. After Xerox PARC provided the opportunity to turn these ideas into reality, it still took our group about five years and experiments with hundreds of users to come up with the first practical design that was in accord with Bruner's model and really worked.

DOING           mouse            enactive    know where you are, manipulate
with IMAGES     icons, windows   iconic      recognize, compare, configure, concrete
makes SYMBOLS   Smalltalk        symbolic    tie together long chains of reasoning, abstract 11

10. Alan C. Kay, "User Interface: A Personal View," in The Art of Human-Computer Interface Design, ed. Brenda Laurel (Reading, MA: Addison-Wesley, 1990), 194-195.
The pioneering work of Dr. Alan Kay and the Learning Research Group of Xerox
PARC continues to be the foundation of much of human-computer interface design to
this day. There have certainly been improvements in the look and feel of the mouse,
the graphics display, keyboards, and windowing software, but contributions to the
field as significant as those made by Kay and company have not been seen in twenty
years or more. We’ll discuss more details of this in the “History of the GUI” section.
We will now discuss a very important aspect of GUI design, the psychology of the
user. What psychological differences exist among users that must be taken into
consideration when designing a graphical user interface?
The Psychology of Graphical User Interface Design
Everyone on earth is just a little bit different; not everyone works the same way, feels the same way about computers, or shares the same cognitive style. Some people are able to absorb more information, and do it faster, than others. Some people are very logical; others are very intuitive. What psychological factors must be considered when designing a useful user interface? Ben Shneiderman is a human-computer interaction expert and a professor of computer science at the University of Maryland. Professor Shneiderman has observed thousands of computer users over the
11. Ibid., pp. 196-197.
years. Shneiderman shares his observations regarding the importance of
considering the psychology of potential users when designing a human-computer
interface in his book Designing the User Interface: Strategies for Effective Human-
Computer Interaction.
Some people dislike or are made anxious by computers; others are attracted to or are eager to use computers. Often, members of these divergent groups disapprove or are suspicious of members of the other community. Even people who enjoy using computers may have very different preferences for interaction styles, pace of interaction, graphics versus tabular presentations, dense versus sparse data presentation, step-by-step work versus all-at-once work, and so on. These differences are important. A clear understanding of personality and cognitive styles can be helpful in designing systems for a specific community of users.12
It would be nice if we could categorize all users into ... say ... one or two groups and have a button or menu item that says "Press here to customize this software to your psychological profile A or B," but unfortunately that is not the case. Professor Shneiderman's research has shown that at least four dichotomous dimensions exist (giving sixteen separate mentalities). Additionally, numerous methods of gathering user metrics have been established, and work in this area is ongoing.
Unfortunately, there is no simple taxonomy of user personality types. An increasingly popular technique is to use the Myers-Briggs Type Indicator (MBTI) (Shneiderman, 1980), which is based on Carl Jung's theories of personality types. Jung conjectured that there were four dichotomies:
· Extroversion versus introversion: Extroverts focus on external stimuli and like variety and action, whereas introverts prefer familiar patterns, rely on their inner ideas, and work alone contentedly.
· Sensing versus intuition: Sensing types are attracted to established routines, are good at precise work, and enjoy applying known skills, whereas intuitive types like solving new problems and discovering new relations, but dislike taking time for precision.
12. Ben Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 2nd ed. (Reading, MA: Addison-Wesley Publishing Company, 1992), p. 25.
· Perceptive versus judging: Perceptive types like to learn about new situations, but may have trouble making decisions, whereas judging types like to make a careful plan, and will seek to carry through the plan even if new facts change the goal.
· Feeling versus thinking: Feeling types are aware of other people's feelings, seek to please others, and relate well to most people, whereas thinking types are unemotional, may treat people impersonally, and like to put things in logical order.
The theory behind the MBTI provides portraits of the relationships between professions and personality types and between people of different personality types. It has been applied to testing user communities and to provide guidance to designers.
Many hundreds of psychological scales have been developed, including risk taking versus risk avoidance; internal versus external locus of control; reflective versus impulsive behavior; convergent versus divergent thinking; high versus low anxiety, tolerance for stress, tolerance for ambiguity, motivation, or compulsiveness; field dependence versus independence; assertive versus passive personality; and left- versus right-brain orientation. As designers explore computer applications for home, education, art, music, and entertainment, they will greatly benefit from paying greater attention to personality types.13
The point to remember from all of this is that you must consider the psychology of
the potential user(s) of the interface you plan to design. By considering their
personality, temperament, work habits, likes, dislikes, and field of expertise you will
be in a much better position to provide an interface that will seem inviting and useful
to the customer.
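The four dichotomies quoted above combine multiplicatively: two choices on each of four independent axes yield sixteen distinct type codes. A minimal sketch of that arithmetic (the enumeration below is my own illustration, not code from Shneiderman; the letter codes follow the common MBTI convention):

```python
from itertools import product

# Jung's four dichotomies as quoted above; each contributes one
# letter to a four-letter type code such as "ESTJ" or "INFP".
DICHOTOMIES = [
    ("E", "I"),  # Extroversion vs. introversion
    ("S", "N"),  # Sensing vs. intuition
    ("T", "F"),  # Thinking vs. feeling
    ("J", "P"),  # Judging vs. perceptive
]

def mbti_types():
    """Enumerate every combination of the four dichotomies."""
    return ["".join(combo) for combo in product(*DICHOTOMIES)]

types = mbti_types()
print(len(types))  # 2 ** 4 = 16 distinct personality types
```

Even this toy enumeration makes the design point: a "customize to your profile" button would need sixteen variants for these four axes alone, before any of the hundreds of other scales Shneiderman lists are considered.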
The History of Graphical User Interface Development
Back in the dawn of computer design, machines developed by John Mauchly and
Presper Eckert had an interface that only a mother (or in this case a father) could
love. Pictures of their brainchild, ENIAC, found in encyclopedias, show a room full
of giant, electron-tube-filled, black boxes with hundreds of patch cords running from
13. Ibid., pp. 25-26.
box to box. The computer genie had emerged from the lamp, but, alas, making your
wishes known was next to impossible. The power of the computer had been
discovered, but it was like a wild bronco that still needed "bustin'"; you had to be a real computer cowboy to tell it which way to go!
ENIAC was built by J. Presper ECKERT and John W. MAUCHLY at the Moore School of Electrical Engineering of the University of Pennsylvania under contract to the U.S. War Department. Designed for the calculation of ballistic trajectory tables, it officially became operational in February 1946 and was used successfully for about 9 years.
. . . Physically, ENIAC was large and awkward. It was designed for a special kind of calculation and did not include modern programming features such as stored, modifiable programming words.14 (italics added)
ENIAC was programmed via patch cords; it lacked the now familiar mouse,
keyboard, sound card, fax/modem, and full color CRT display. To say that ENIAC
was large and awkward is like saying that a one thousand pound man is “a little
husky.” ENIAC was not—as the saying goes—“user friendly”—it was a mean and
nasty electronic behemoth. It was as far away from the “computer as media” concept
as you could get. This I say not to detract from the technological leap forward that
ENIAC represented, but to contrast its user interface (actually the lack thereof) with
the modern GUI.
In an Apple Computer-sponsored Distinguished Lecture Series videotape entitled Doing With Images Makes Symbols: Communicating with Computers, Dr. Alan Kay covers many of the milestones of GUI history and development. Much of the following information is based on notes I took while watching the video (again and again). Other references will be footnoted. I contacted
14. The New Grolier Multimedia Encyclopedia, Release 6 (On-line Computer Systems, Inc., 1993), s.v. "ENIAC."
Dr. Kay via email and told him of my intentions to produce an academic paper on the
subject of GUI history and development. He kindly replied and sent some additional
reference materials which I have quoted from and have listed in the Bibliography.
The year 1962, sixteen years after ENIAC’s birth, saw the first attempt at a graphical
user interface. Ivan Sutherland, a pioneering computer scientist, is credited with
developing the first real GUI. It was called “Sketchpad” and it ran on a computer that
was so large, it had its own roof. Sketchpad’s host computer, called the TX-2,
contained 460 Kbytes of RAM (a great amount in 1962) and could execute 400,000
instructions per second. Sketchpad was an amazing achievement for its time and Ivan
Sutherland conceived and implemented many of the graphic drawing tools which are
taken for granted by users of interactive graphical packages today. In a movie, taken
in 1962 (included in Doing With Images Makes Symbols: Communicating with
Computers hosted by Dr. Alan Kay), Mr. Sutherland is seen, poised in front of his
computer’s display tube, using his creation. As he draws he explains some of the
features of Sketchpad.
· Use of the light-pen for adding elements to a drawing
· A virtual drawing page (1/3 mile on a side)
· A scalable "window" into the large drawing page
· Recognition of rough shapes entered via the lightpen by a user
· Master drawings and object-oriented instances of drawings and components
· Rotating and scaling of any object, including master objects
A user could draw a rough rectangle on the screen and ask the computer to “solve”
the rectangle. The computer would then change the rough sketch into a regular
rectangle. The user could sketch a rough arc, and the computer would solve the arc,
changing it into a chord with its ends at the user specified coordinates. Sketchpad’s
rules included "be parallel", "be perpendicular", "be mutually perpendicular", and
others. An interesting discovery made during the development of Sketchpad was that
the lightpen is not a good input device. During use of a lightpen—on a vertical
display surface—the blood runs out of your hand, leaving it numb, in a relatively
short period of time. Dr. Sutherland had invented Sketchpad as the subject of his doctoral thesis. In doing so, he single-handedly invented the first interactive graphics program, the first true object-oriented program, and the first non-procedural (object-oriented) programming language. When asked how he was able to do this in about a year's time, he replied, "I didn't know it was hard." Sketchpad used the
computer screen as a drawing sheet and the lightpen as a virtual pencil, but an array
of buttons attached to the side of the display were used to enter commands—most of
the graphical interface elements were yet to be developed. Interestingly, Ivan Sutherland's vision continued to advance: he and his students would later be found at Harvard developing a 3D "virtual reality" helmet in 1968.
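Sketchpad's "solving" of a rough rectangle, described above, is an early example of constraint satisfaction: the sketched corners are replaced by the nearest configuration in which the sides are exactly parallel and perpendicular. The sketch below illustrates the principle only; it is a hypothetical reconstruction, not Sutherland's actual algorithm:

```python
def solve_rectangle(corners):
    """Given four rough corner points (in any order), return an
    axis-aligned rectangle fitted to them: each side is placed at
    the mean of the two nearest rough x or y coordinates, so the
    "be parallel" and "be perpendicular" constraints hold exactly."""
    xs = sorted(p[0] for p in corners)
    ys = sorted(p[1] for p in corners)
    left   = (xs[0] + xs[1]) / 2
    right  = (xs[2] + xs[3]) / 2
    bottom = (ys[0] + ys[1]) / 2
    top    = (ys[2] + ys[3]) / 2
    # Corners returned in drawing order around the rectangle.
    return [(left, bottom), (right, bottom), (right, top), (left, top)]

rough = [(0.1, -0.2), (4.9, 0.1), (5.2, 3.1), (-0.1, 2.8)]
print(solve_rectangle(rough))
```

The same pattern generalizes to Sketchpad's arcs and chords: the user supplies an approximate shape, and the system snaps it to the closest shape satisfying the declared constraints.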
Another true computer science visionary was Douglas C. Engelbart. Regarding
Engelbart’s achievements and contributions to the field of graphical user interface
design, Ben Shneiderman writes the following.
It is hard to trace the first explicit description of windows . . ., although several sources credit Doug Engelbart with the invention of the mouse, windows, outlining, collaborative work, and hypertext as part of his pioneering NLS system at the Stanford Research Institute during the mid-1960s.15
15. Shneiderman, Designing the User Interface, p. 338.
Doug Engelbart headed the Augmented Human Intellect Research Center at the
Stanford Research Institute. Robert Warfield, a graduate student of computer science at Rice University, commented on Engelbart's contribution in a 1983 Byte magazine article which reported on then-current GUI developments.
How did all of this new technology come about? Much of the work can be attributed to Xerox PARC (Palo Alto Research Center) and its Learning Research Group (LRG). But the seeds of the technology can be traced farther back to Douglas Engelbart’s work on using computers to augment human intelligence . . .. It was Engelbart’s group that first invented the now-familiar mouse and incorporated multiple windows into the design of text editors.16 (italics added)
Engelbart realized that it was necessary to study workers in their office environment in order to provide computing tools that could, in his words, "augment their intelligence." His goal was to provide intellectual workers with a computer system that was "instantly responsive." He believed a system of this type would be extremely valuable (how right he was, as a trip through any engineering department will testify).
Another film clip (included in Doing With Images Makes Symbols: Communicating
with Computers hosted by Dr. Alan Kay) shows Engelbart, in 1968, in front of a San
Francisco audience demonstrating his futuristic NLS system. The system included
hierarchical menus, hyper-text17, word processing and graphical drawing capabilities.
Engelbart pioneered the use of the mouse in conjunction with a five-key pad, which
16. Robert W. Warfield, "The New Interface Technology: An Introduction to Windows and Mice," Byte, December 1983.
17. Hyper-text was actually conceived by Vannevar Bush, who was the White House science adviser in 1945. He foresaw the coming of what is now called "information overload" and conceived of a computerized hyper-text system in which the user could conveniently follow links of interest rather than be overwhelmed by tides of unrelated information. The idea had to wait, however, for technology to catch up and for Douglas Engelbart to implement it.
were used together to select words, characters, or paragraphs for editing. The
interface provided instant visual feedback of editing changes.
Graphical drawing interspersed with hyper-text was demonstrated in the form of a
map, showing Engelbart’s route home from work. As a hyper-text link was selected
via the mouse and five-key pad, its associated list would become visible. Engelbart's
vision extended to the realm of visual and audio collaboration with co-workers.
Engelbart demonstrated this in the film clip by conversing—in real time—with a co-
worker located in Menlo Park. The NLS system used part of the computer screen as a
video display of the person you were working with. You could see, and talk to, your
co-worker in the next room or on the next continent. The other person could see and
hear you, view your work, and a separate cursor permitted them to point to sections
of text and/or graphics. They could also be given access rights to change the screen
contents if desired. There are features of the vision Engelbart demonstrated in 1968 that are just becoming commercially viable today ... over twenty-five years later.
Engelbart’s mouse has received tremendous improvements over the years. His
original mouse was analog in design and needed the augmentation of the five key pad
to let the computer know whether the user was pointing to a letter, a word, or a
paragraph. Today’s typical mouse is capable of extremely accurate movement
resolution (pixel by pixel accuracy in a 1280 x 1024 resolution display context) but
computer users owe the convenience and efficiency of the mouse to Douglas C.
Engelbart.
Next, we meet Dr. Alan Kay, whose vision of the computer-as-media would develop
and eventually come to fruition at Xerox PARC. Kay’s first attempt at interface
design was done for a project that was essentially the world’s second personal
computer (the first was Wes Clark’s LINC, designed in 1962). The computer was
called the “Flex Machine” and the hardware was designed by Ed Cheadle, an
electronics engineer Kay was introduced to while attending graduate school at the
University of Utah. At that time he didn't have a clear vision of what a graphical user interface should be (in fact, Kay states this first UI repelled people), but he did
implement Sutherland’s virtual window concept, Cheadle’s clipping algorithm, and a
tiled (non-overlapping) window scheme for the Flex Machine.
At an ARPA meeting held at a ski lodge in Park City, Utah, Alan Kay heard a talk,
delivered by Marvin Minsky in which the ideas of Piaget and Papert were expounded.
The talk focused on the failure of traditional methods of educating the young and the
need to re-think them in light of twentieth century research into cognitive
psychology.
Another ski-lodge meeting happened in Park City later that spring The general topic was education and it was the first time I heard Marvin Minsky speak. He put forth a terrific diatribe against traditional educational methods, and from him I heard the ideas of Piaget and Papert for the first time. Marvin's talk was about how we think about complex situations and why schools are really bad places to learn these skills. He didn't have to make any claims about computers+kids to make his point It was clear that education and learning had to be rethought in the light of 20th century cognitive psychology and how good thinkers really think. Computingenters as a new representation system with new and useful metaphors for dealing with complexity, especially of systems.18
18. Kay, The Early History of Smalltalk.
These ideas were tucked away in Kay’s mind—eventually they would begin to react
with one another and Kay’s vision would come into focus. The next thing that
influenced Kay was a tour through the University of Illinois in 1968, where he saw
the first flat panel plasma display. At the time it was only a lump of neon-gas-filled
glass, capable of displaying dots of light, but it sent Kay’s mind racing. He was
already visualizing a larger version of the flat panel display with an integrated
version of the Flex Machine's electronics on its back. Kay went so far as to construct a cardboard model of a notebook-sized computer weighing less than two pounds, which he dubbed the Dynabook. The Dynabook would possess a high-resolution, flat-panel graphics display for GUI support, both stylus and keyboard input devices, a removable storage medium, and a wireless network connection. Unfortunately, the
flat panel display technology—not expected until the late eighties—was still too far
off to give the project serious consideration at the time, but the idea was indelibly
fixed in Kay’s mind.
Kay’s next influence was his introduction to GRAIL (GRAphical Input Language) at
the RAND corporation (also in 1968). The GRAIL language was an attempt to
produce a totally iconic computer language (it didn’t even need a keyboard). The
input device was the RAND tablet—the first flat device which recognized the
gestures of users, who "wrote" on it using a stylus.19 GRAIL users would input
rough flow diagram symbols on the RAND tablet, which the computer would
recognize and redraw symmetrically. Text input via the stylus was recognized and
reformatted to fit inside the selected flow diagram symbol. Symbols could be picked
19. The RAND tablet was invented by Tom Ellis; Gabe Groner developed the gesture recognition software.
up, moved, and resized (Macintosh window control was inspired by this) and
connections between diagrammatic elements could be broken with a natural “erasing”
gesture. The system was totally modeless; that is to say, the user did not have to enter a "mode" to erase, another "mode" to draw, and another "mode" to edit. It was designed to correctly handle user gestures in the context in which they occurred. Kay reports
that this amazing graphical user interface gave him the feeling of putting his hands
through the glass of the CRT and directly manipulating the data objects. GRAIL
helped him to see that his Flex Machine interface was “all wrong” but he couldn’t see
how he could stuff a GRAIL-like interface into the relatively tiny Flex Machine. The
beauty of the GRAIL interface took its place alongside the other ideas Kay was
mentally cataloging—spontaneous combustion was ready to occur.
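GRAIL's modeless principle, choosing what a gesture means from its context rather than from a global mode flag, can be sketched as a simple dispatch on the (gesture, target) pair. The gesture and target names below are hypothetical, chosen only to illustrate the idea:

```python
# Modeless handling in the spirit of GRAIL: the same stylus can draw,
# move, or erase depending on what the gesture touches; the user never
# switches into a "draw mode" or an "erase mode" first.
def handle_gesture(gesture, target):
    """Choose an action from the gesture and its context together."""
    actions = {
        ("scrub", "connection"): "erase connection",  # natural erasing gesture
        ("stroke", "canvas"): "draw new symbol",      # drawing on empty space
        ("drag", "symbol"): "move symbol",            # dragging an existing object
    }
    return actions.get((gesture, target), "ignore")

print(handle_gesture("scrub", "connection"))  # erase connection
```

Contrast this with a moded design, where the same scrubbing stroke would erase in "erase mode" but scribble ink in "draw mode"; the dispatch-on-context table is what makes the interaction feel direct.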
The final catalyst was supplied by a visit to Seymour Papert, Wally Feurzeig, Cynthia
Solomon, and other LOGO researchers working with children in the Lexington
schools. Actually seeing children using a computer—learning a computer language—
cemented the ideas Kay had been gathering into a clear vision. The computer should
be viewed as a medium with an interface as simple as crayons and paper. Kay began
designing a computer that children would be able to use and hoped that the ideas
generated along the way would “spill over” into the adult world. His vision of the
Dynabook was still very much alive, and his first project, called "Kiddi Komp," was
designed to be a platform that would allow him to develop the graphical user
interface that would eventually run on the Dynabook.
Many of today’s popular GUI elements can be traced to the work done on the Kiddi
Komp. Ben Laws developed the first font editor on it. Kiddi Komp's high-resolution, graphic, pixel-based display allowed a user to preview a document on the screen as it would appear when printed. Intricate on-screen sketching and drawing were possible; the display was designed to be viewed, without eye fatigue, for six to seven hours straight, so actual design work could now be done on a computer screen. Kiddi Komp
was followed up by the development of the Minicom in the summer of 1971, and
finally by Chuck Thacker’s Alto shortly thereafter.
The Alto was the first workstation and it led to the design of many other
workstations, like the Lisa and the Macintosh. The Alto (ca. 1973) used an improved
version of Engelbart’s mouse and a GUI that became the model for GUIs to come for
the next twenty years. Some of the GUI features developed on the Alto that we have
come to take for granted were: use of the mouse; graphic, pixel-based screens; pull-down menus; font editors and multiple-font displays; paint programs that turn the computer and mouse into virtual drafting paper or paint, palette, and canvas; WYSIWYG (What You See Is What You Get) word processors that put professional desktop publishing capabilities into a user's hands; multiple moveable and sizable windows; and, not to be forgotten, color monitors. Altos were brought into the Palo
Alto schools by Dr. Kay’s Learning Research Group in order to demonstrate that
children could quickly learn to use a computer effectively, when its interface was
simple and intuitive. Another important goal for Dr. Kay during these experiments
was to introduce the concept of computer literacy into the schools. Dr. Kay felt that
just as a person is not considered literate unless he or she can read and write—a
computer literate person should also be able to read and write in the context of the
computer. To do this, Dr. Kay introduced the students to a computer language he had
developed, based on Simula and Sketchpad, known as Smalltalk. Smalltalk was the
first widely used object-oriented language. Many of the Palo Alto students became very proficient in Smalltalk, which testifies to its natural and intuitive design and overall quality. Alan Kay's work sparked a whole new generation of GUI
products that stimulated the software development world.
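Smalltalk's central idea, that everything is an object which responds to messages, can be hinted at with a short sketch. It is written in Python for readability (real Smalltalk syntax differs), and the Turtle is only an illustrative stand-in for the kinds of graphics objects students programmed:

```python
import math

class Turtle:
    """A tiny message-responding drawing object, LOGO/Smalltalk style."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.trail = []  # points visited, i.e. the drawing

    def forward(self, distance):
        # Move along the current heading and record the new position.
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))
        self.trail.append((round(self.x, 6), round(self.y, 6)))
        return self  # returning self lets messages chain, like a cascade

    def turn(self, degrees):
        self.heading = (self.heading + degrees) % 360.0
        return self

pen = Turtle()
pen.forward(10).turn(90).forward(10)
print(pen.trail)  # [(10.0, 0.0), (10.0, 10.0)]
```

The point of the analogy is that the user program never manipulates pixels directly; it sends messages ("forward", "turn") to an object that knows how to respond, which is the discipline Smalltalk inherited from Simula and Sketchpad.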
A new breed of personal computer hardware and software is beginning to enter the marketplace. These systems will be both easier to use and more productive than their predecessors. People who are not computer experts will feel comfortable using these personal computers in their day-to-day work, and experienced users will make fewer errors.
This "new interface technology” encompasses developments in hardware and software that essentially reduce the number of things a user must remember in order to use a system effectively. On the hardware side, pointing devices such as mice, touchscreens, and high-resolution graphics displays simplify communication between the user and the system. The software offers integration, multiple windows, and commands issued by selection from menus using the pointing devices. The combination of these features ensures that the users can concentrate on how people work instead of on how computers work.
Examples of the new technology currently or soon to be on the market include hardware/software combinations such as Apple's Lisa and Hewlett-Packard's 150 and software such as Visicorp's Visi On and Microsoft's Windows.20
Among computer historians, Dr. Alan Kay's group is properly credited with putting the ideas of many of their predecessors (e.g., Sutherland's Sketchpad and Engelbart's mouse) to practical use and with inventing many of the now-common graphical interface elements.
How did all of this new technology come about? Much of the work can be attributed to Xerox PARC (Palo Alto Research Center) and its Learning Research Group (LRG). . . .
20. Warfield, "The New Interface Technology."
The work at Xerox PARC began in 1971, when Alan Kay founded the Learning Research Group and initiated a project called Dynabook. Dynabook was to have been a notebook-sized personal computer that anyone, even children, could use in day-to-day work and that everyone would want to use. The Xerox Alto personal computer was used to build a prototype Dynabook system.
Although hardware limitations prevented commercial production of Dynabook, many of the features of the new interface technology can be traced directly to the prototyping efforts behind it. One of the most important products to come out of the Dynabook project was the Smalltalk language. From these efforts followed many others that expanded the basic concepts to make them usable in a general computing environment, including newer versions of Smalltalk that introduced overlapping windows, the Xerox Star workstation, which introduced icons, and a number of similar projects conducted by the LISP community using personal computers that execute LISP as their machine language.21
Xerox attempted to leverage Kay's work with an ill-fated entry into the IBM-dominated business computer market ... the Xerox Star System. When asked which
names have the greatest computer user recognition: Macintosh, Windows or Star, it
would be the rare user who would say Star. However, it was the Star system that put
into practical use many of the ideas that later ended up making hundreds of millions
of dollars for Apple and Microsoft.
In April 1981, Xerox announced the 8010 Star Information System, a new personal computer designed for offices. Consisting of a processor, a large display, a keyboard, and a cursor-control device . . ., it is intended for business professionals who handle information.22
David Canfield Smith describes the Star from a systems viewpoint in a very
informative article entitled “Designing the Star User Interface” which appeared in
Byte magazine in April of 1982.
Star is a multifunction system combining document creation, data processing, and electronic filing, mailing, and printing. Document creation includes text editing and formatting, graphics editing, mathematical formula editing, and page layout. Data processing deals with homogeneous, relational databases
21. Ibid.
22. David Canfield Smith et al., "Designing the Star User Interface," Byte (April 1982): 242.
that can be sorted, filtered, and formatted under user control. Filing is an example of a network service utilizing the Ethernet local-area network. Files may be stored on a work station's disk, on a file server on the work station's network, or on a file server on a different network. Mailing permits users of work stations to communicate with one another. Printing utilizes laser-driven raster printers capable of printing both text and graphics.23
Smith goes on to describe the 1982 "Desktop" metaphor that remains a modern GUI standard.
The Desktop is the principal Star technique for realizing the physical-office metaphor. The icons on it are visible, concrete embodiments of the corresponding physical objects. Star users are encouraged to think of the objects on the Desktop in physical terms. Therefore, you can move the icons around to arrange your Desktop as you wish. (Messy Desktops are certainly possible, just as in real life.) Two icons cannot occupy the same space (a basic law of physics). Although moving a document to a Desktop resource such as a printer involves transferring the document icon to the same square as the printer icon, the printer immediately "absorbs" the document, queuing it for printing. You can leave documents on your Desktop indefinitely, just as on a real desk, or you can file them away in folders or file drawers. Our intention and hope is that users will intuit things to do with icons, and that those things will indeed be part of the system.
The previously discussed theory of graphics and the Aristotelian ideas pertaining to
memory enhancement all came into play in the Star system’s GUI philosophy.
A well-designed system makes everything relevant to a task visible on the screen. It doesn't hide things under CODE+key combinations or force you to remember conventions. That burdens your memory. During conscious thought, the brain utilizes several levels of memory, the most important being the "short-term memory." Many studies have analyzed the short-term memory and its role in thinking. Two conclusions stand out: (1) conscious thought deals with concepts in the short-term memory . . . and (2) the capacity of the short-term memory is limited . . .. When everything being dealt with in a computer system is visible, the display screen relieves the load on the short-term memory by acting as a sort of "visual cache." Thinking becomes easier and more productive. A well-designed computer system can actually improve the quality of your thinking . . .. In addition, visual communication is often more efficient than linear communication; a picture is worth a thousand words.
23. Ibid.
A subtle thing happens when everything is visible: the display becomes reality. The user model becomes identical with what is on the screen. Objects can be understood purely in terms of their visible characteristics. Actions can be understood in terms of their effects on the screen. This lets users conduct experiments to test, verify, and expand their understanding - the essence of experimental science.24
Another first for Alan Kay’s Alto was the WYSIWYG desktop publishing concept
which, Smith goes on to relate, Star borrowed.
WYSIWYG is a simplifying technique for document-creation systems. All composition is done on the screen. It eliminates the iterations that plague users of document compilers. You can examine the appearance of a page on the screen and make changes until it looks right. The printed page will look the same . . .. Anyone who has used a document compiler or post-processor knows how valuable WYSIWYG is. The first powerful WYSIWYG editor was Bravo, an experimental editor developed for Alto at the Xerox Palo Alto Research Center . . .. The text-editor aspects of Star were derived from Bravo.25
Star had a tremendous amount of capability for its day. Whether Star was overpriced
or under-marketed is difficult to say, but the personal computer market hardly blinked at it. Apple's Macintosh, and IBM's personal computers coupled with Microsoft's Windows, would become the darlings of the computer world in short order.
The next big development in graphical user interface was the Apple Macintosh. The
Macintosh team was greatly influenced (although they seem reluctant to publicize it) by the ideas of Bush, Engelbart, and, of course, Alan Kay. In December of 1979, the
Apple engineers who were working on the Mac paid a visit to Xerox PARC to see the
Alto. There is no doubt that what they saw influenced their graphical user interface
design. The Mac team made many improvements to the Alto’s graphical user
24. Ibid.
25. Ibid.
interface including: the attractive look of the screen, icon grabbing and dragging, the
improved color scheme, and icons which zoom as applications are opened. Mac’s
sales soared for several years but have continually tapered off since the advent of
Windows 3.0 and later versions of Windows. Apple is scrambling to boost its flagging sales with a new market strategy.
According to sources, Apple has ditched its ambitious five-year plan to increase the Mac OS market share to 25 percent from its current 10 percent. Instead, the company plans to swell profits by selling more products and services to its installed base of 16 million users worldwide and rely on Mac OS licensees to gradually increase Mac market share.
Previously, Apple relied on increased sales across the board to boost its share of the personal computer hardware market. But the company was barely detectable on the business market's radar screen. Indeed, despite sizable market share in publishing, higher-education and K-12 markets, Apple's share of the total PC market slipped to a record-low 8.5 percent last year, according to Dataquest Inc. of San Jose, Calif.26
This decline is generally attributed to Apple's reluctance to license its operating
system. Relatively inexpensive IBM PC clones began appearing in the latter half of
the 1980s, and computer enthusiasts flocked to buy them. Apple's Macintosh was, and
continues to be, a very pricey commodity, so Apple lost a large proportion of its market
share to PC clone buyers. Apple only recently (January 1995) decided to license
the Mac operating system to clone makers.
Since early May, Power Computing Corp. has succeeded in shipping thousands of clones, including the one used by Negoescu. Less than a half of 1 percent of them have been returned, spokesman Mike Rosenfelt said.
The shipments follow Apple's announcement in January that it would license its popular Macintosh line in hopes of boosting Mac's market share from a meager 11 percent in the United States and 7.5 percent worldwide.
26. Jon Swartz and Robert Hess, "Apple: Market Share No Longer Top Priority," MacWEEK NEWS, vol. 9, no. 27, 10 July 1995.
Three other companies - Radius, DayStar Digital and Pioneer - made similar licensing deals, although their clones are more expensive and aimed at niche markets.27
Only time will tell whether this strategy will woo back many PC users or whether it is "too
little, too late." The Mac's graphical user interface was much maligned by Microsoft,
which was pushing the DOS command-line user interface and text-based applications.
While Microsoft was involved in this anti-Mac campaign, it was planning its
own entry into the GUI market ... Windows 1.0.
Microsoft's anti-GUI campaign came back to haunt it when it first introduced
Windows: the DOS faithful were loath to commit the blasphemy of adopting a
WIMPy (Windows, Icons, Menus, Pointing device) interface. Adrian King describes
the situation in his book Inside Windows 95.
Windows 1.0 eventually shipped in late 1985. Describing the market's reaction as lukewarm is akin to describing Bill Gates as well off. I remember installing the first Windows Software Development Kit on an IBM PC XT and being at different moments impressed by its features and bewildered by its complexity. Looking back on it now, I can see that it was of course sheer madness for Microsoft to believe that Windows could succeed on the limited hardware available at the time.
But Microsoft wasn't about to give up. Through successive versions, Windows gradually got better and the hardware got faster and more capacious. In 1987 and 1988 I managed the project that produced Windows/386 and launched it on the first 386-based PC: the Compaq Deskpro. It was my favorite time at Microsoft, and the entire project team-all fifteen of us-were rather proud of Windows/386. In comparison to MS-DOS it still didn’t sell worth a darn. Even Steve Ballmer was beginning to think that OS/2 might be the right strategy.
But Microsoft didn't give up, and on May 22, 1990, Bill Gates introduced the latest and greatest release of Windows-version 3.0-to a rapt audience in New York City. Things were different this time. It was obvious to me in the theater that day that Windows was about to become a seven-year-old
27. Kerry Fehr-Snyder, "Macintosh No Longer A Loner Clones Do A Good Job Of Emulating Apple," The Arizona Republic, 28 August 1995, p. E1.
overnight success. And it did. Bill and Steve would probably try to convince you it was planned that way. Don't believe it. Whether the galaxies were finally in correct alignment, or a confluence of market factors finally came about, or sheer determination finally carried the day is no longer relevant-Windows was finally a hit.28
Anyone who could be productive with an IBM PC equipped with a DOS command-line
interface and the text-based applications available pre-Windows
became a super-user after switching to Microsoft Windows and
applications written specifically for it. I don't think anyone will dispute that
Sutherland, Engelbart, Kay, and the Macintosh team developed the GUI concepts we
are familiar with today ... but no one has been more successful at marketing and
profiting from them than Microsoft.
With sales of the current version of Windows topping a million copies a month by mid-1993, any new release of the product also needed to be totally reliable.29 (italics added)
With its latest release of Windows 95, Microsoft continues to be the undisputed
champion of GUI design and implementation. Say what you will about Bill Gates and
Microsoft, but they have succeeded in providing more computer users with a
consistent and reliable GUI than any of their competitors or predecessors.
As one researches the history and development of the GUI and retraces the path
of its invention, one can discern a distinct pattern of ideas. Specific goals and
principles of GUI design have taken shape over the years, and a consistent philosophy
28. Adrian King, Inside Windows 95 (Redmond, Washington: Microsoft Press, 1994), pp. xxi-xxii.
29. Ibid., p. 2.
that GUIs should be intuitively simple unfolds. Let us now consider the goals and
principles that presently drive GUI design.
Goals and Principles Driving Graphical User Interface Designs
If I were to put into words the main goal of the graphical user interface, it
would be to "get rid of the words." The more directly a graphical user interface can speak
to the mind, utilizing the mind's great pattern-recognition and graphic-processing
power, the more useful it is. The less a person notices the interface, the
better it is. The simpler the interface, the better it is. The work that must be done with
a computer should be the focus of the user's activity, and the more naturally this is
supported, the better the interface. One of the main problems with words is that they
mean different things to different people. Plato observed that names are
conventions arrived at by individuals and by groups; they by no means carry the
universal recognition of graphic presentations.
I have often talked over this matter, both with Cratylus and others, and cannot convince myself that there is any principle of correctness in names other than convention and agreement; any name which you give, in my opinion, is the right one, and if you change that and give another, the new name is as correct as the old-we frequently change the names of our slaves, and the newly-imposed name is as good as the old: for there is no name given to anything by nature; all is convention and habit of the users;-such is my view.30
As computers come to control more and more of the equipment involved in our daily lives,
there is less and less room for mistakes on the part of users. Surgical and diagnostic
equipment controlled by computers today has the capacity to harm patients
if mistakes in therapy are made; computerized x-ray and laser surgery equipment are
30. Plato, "Cratylus," in Great Books of the Western World, ed. Mortimer J. Adler (Chicago: Encyclopedia Britannica, Inc., 1990), vol. 6, p. 85.
some examples. In some instances, coping with the sheer volume of diagnostic data to be analyzed
depends on the graphical user interface being intuitive and simple. Nuclear magnetic
resonance imaging is a prime example.
A newer and more promising diagnostic tool is the nuclear magnetic resonance (NMR) scanner, a machine that resonates at the cell level, sending back great quantities of data that, again, are converted by the computer into images that clinicians can use for diagnostic purposes-it amounts to exploratory surgery without so much as a pinprick.31
The more easily a physician can rotate or invert images, isolate and examine details of
anomalies, and enlist associated expert decision-making tools, the greater the level of
care he or she can provide.
A well-thought-out graphical user interface should be able to guide novice users so
that they can use the system the first time they see it, as if they were intermediate
users. It should teach users how to become more proficient, enabling them to
move from the novice/intermediate level to expert use of the
application. It must allow users to explore its capabilities without fear of "breaking"
something; an interface with this quality instills in its users a spirit of
exploration. William Newman and Robert Sproull authored a landmark graphics
reference work entitled Principles of Interactive Computer Graphics. They comment
on the vital role an interface plays and describe the “moment of truth” when user
meets user-interface.
No single component of an interactive program is more unpredictable in performance than the user interface, i.e., the part of the program that determines how the user and the computer communicate. It is unfortunate that this should be so, for user-interface design has a particularly strong impact on
31. Pamela McCorduck, The Universal Machine: Confessions of a Technological Optimist (New York: McGraw-Hill Book Company, 1985), p. 141.
program acceptability as a whole. Our inability to predict user interface performance makes it particularly likely that users will react in unexpected ways when they first use the program. The biggest surprises often occur when the programmer sits down with his first user to explain the operation of the program:
Programmer: Now that you've drawn part of the circuit, you might want to change it in some way.
User: Yes, let's delete a component. How do we do that?
P: Point at the menu item labeled CD.
U: CD?
P: It stands for 'component delete.'
U: Ah. Well, here goes . . . hey, what happened?
P: You're in analysis mode: you must have selected AM instead of CD.
U: Funny, I was pointing at CD. How can I get out of analysis mode?
P: Just type control-Q.
U: [types C-O-N-T-R . . .]
P: No, hold down the control key and hit Q.
U: Sorry, silly of me . . . OK, I'll try for CD again.
P: Maybe aim a bit above the letters to avoid getting into analysis mode--no, not that much above--that's better.
U: Got it!
P: Now point to the component to delete it.
U: OK . . . nothing's happening; what am I doing wrong?
P: You're not doing anything wrong; you've deleted the component but the program hasn't removed it from the screen yet.
U: When will it be removed?
P: When you type control-J to redraw the picture.
U: I'll try it . . . there we are; but only part of the component was removed.
P: Sorry, I forgot: you have to delete each half of the component separately. Just point to CD again.
U: Very well . . . now what's happened?
P: You're in analysis mode again: type control-Q.
U: Control . . . where's that Q? There it is . . . hey, why is the screen blank all of a sudden?
P: You typed Q, not control-Q, so the program quit to the operating system. I'm really sorry, but we've lost everything and we'll have to start all over again.
U: [groans] Could we postpone that until next week?32
32. William M. Newman and Robert F. Sproull, Principles of Interactive Computer Graphics (New York: McGraw-Hill Book Company, 1979), pp. 443-444.
The point of the above exchange is clear: developers must give a great deal of
consideration to a program's user interface. It also makes little sense to begin
development without first polling potential users to ascertain what they feel is the most
natural way to model their work habits electronically. In the vast majority of cases, a
well-designed graphical user interface provides the user with increased work
efficiency and a greater sense of control.
In an article entitled "Designing the Star User Interface," which appeared in the April
1982 issue of Byte, David Canfield Smith et al. describe the design principles and
goals they lived by while developing the Star computer system. Their vision was to
build the most intuitive and powerful user interface possible. The Star system
drew heavily on the GUI work done by Dr. Alan Kay's Learning Research
Group.
Some types of concepts are inherently difficult for people to grasp. Without being too formal about it, our experience before and during the Star design led us to the following:
Easy            Hard
concrete        abstract
visible         invisible
copying         creating
choosing        filling in
recognizing     generating
editing         programming
interactive     batch

The characteristics on the left were incorporated into the Star user's conceptual model. The characteristics on the right we attempted to avoid.

Principles Used

The following main goals were pursued in designing the Star user interface:
· familiar user's conceptual model
· seeing and pointing versus remembering and typing
· what you see is what you get
· universal commands
· consistency
· simplicity
· modeless interaction
· user tailorability33
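The "universal commands" principle in the list above can be illustrated with a small sketch. The following is a toy Python model, not Star's actual implementation; the object classes and command names are hypothetical. It shows the core idea: one generic command set (here just Copy and Delete) applies uniformly to every kind of object, instead of each application defining its own verbs.

```python
# Toy sketch of Star's "universal commands" principle: one generic command
# set applies to every object type. Object classes here are hypothetical.
import copy

class Folder:
    """A container of arbitrary objects, like a Star folder."""
    def __init__(self):
        self.contents = []

class TextDocument:
    def __init__(self, text):
        self.text = text

class Picture:
    def __init__(self, pixels):
        self.pixels = pixels

def do_copy(obj, destination):
    """Universal Copy: duplicates any object into a destination folder."""
    duplicate = copy.deepcopy(obj)
    destination.contents.append(duplicate)
    return duplicate

def do_delete(obj, folder):
    """Universal Delete: removes any object, regardless of its type."""
    folder.contents.remove(obj)

desk = Folder()
memo = TextDocument("Q3 results")
chart = Picture([[0, 1], [1, 0]])
desk.contents.extend([memo, chart])

do_copy(memo, desk)    # the same command copies a document...
do_copy(chart, desk)   # ...or a picture
do_delete(memo, desk)  # ...and the same command deletes either kind
```

Because the commands never need to know what kind of object they act on, the user needs to learn only one gesture per command, not one per application.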
The Star designers were so concerned about getting the graphical user interface right that
they spent two years developing it before they ever wrote a line of product code. The
design of a conceptual model requires an in-depth study of potential users and their work
habits. Developers must get into the heads of users so that the finished interface acts as
a work amplifier whose commands are natural and intuitive in the user's frame of
reference.
We have learned from Star the importance of formulating the fundamental concepts (the user’s conceptual model) before software is written, rather than tacking on a user interface afterward. Xerox devoted about thirty work-years to the design of the Star user interface. It was designed before the functionality of the system was fully decided. It was even designed before the computer hardware was built. We worked for two years before we wrote a single line of actual product software. Jonathan Seybold put it this way, “Most system design efforts start with hardware specifications, follow this with a set of functional specifications for the software, then try to figure out a logical user interface and command structure. The Star project started the other way around: the paramount concern was to define a conceptual model of how the user would relate to the system. Hardware and software followed this.”34
The developers responsible for the latest release of Windows (Windows 95) were
concerned with a similar set of goals. In his book Inside Windows 95, Adrian King, one of
the architects of Windows 3.0 and 3.1, enumerates the questions that a good GUI designer
must constantly consider during the design process.
33. David Canfield Smith et al., "Designing the Star User Interface," Byte, April 1982.
34. Ibid.
Designers of these graphical interfaces worry constantly about a few very important characteristics, asking themselves whether the interface can be described in these ways:
· Consistent. Does the user always do the same thing in the same way? Does the user gain access to similar operations using the same keyboard or mouse inputs, guided by similar visual cues?
· Usable. Does the interface allow the user to do simple things simply and complex things within a reasonable number of operations? Forcing the user to go through awkward or obscure input sequences leads to frustration and ineffective use of the system.
· Learnable. Is every operation simple enough to be remembered easily? What the user learns by mastering one operation should be transferable to other operations.
· Intuitive. Is the interface so obvious that no training or documentation is necessary for the user to make full use of it? This aspect of a GUI is the holy grail for interface designers.
· Extensible. As hardware gets better or faster-for example, as common screen displays achieve higher resolution or new pointing devices appear-can the interface grow to accommodate them? Similarly, as new application categories become popular, does the user interface remain valid?
· Attractive. Does the screen look good? An ugly or overpopulated screen will deter the user and reduce the overall effectiveness of the interface.35
Adrian King spells out the conceptual details that were given serious consideration in
the design of Windows 95 to attract even more users and provide current users with
an even better GUI look and feel.
Many of the new user interface ideas for Windows 95 came from the visual design group at Microsoft. These are the people who define, refine, and improve the user interface for all of Microsoft's products. Over the last few years, Microsoft has used more and more visual design expertise on its projects, and Windows 95 is perhaps the first product in which the efforts of the visual design group have had a high level of impact on the appearance and operation of the product. Involved in more than pure visual design, the group works with the development team to define how a product is to respond to user actions. Their goal is to get all of Microsoft's products appearing and behaving in similar, obvious ways. If you know how to use one product, your
35. King, Inside Windows 95, p. 168.
learning time for another should be greatly reduced. Among other influences, the visual design group uses real people to test hypotheses about interface design-the input often coming from controlled usability testing. Does the user actually respond the way you think he or she should? If not, why not? One team goal for the revised interface in Windows 95 was to reduce the level of knowledge a novice needed in order to begin using the system. The usability tests helped validate whether the design innovations really did accomplish that goal.36
Again and again, simplicity, consistency, and intuitiveness are shown to be critical
characteristics of a successful GUI. Because the Windows 95 team incorporated these
ideals into their design efforts, Windows 95 users will enjoy the following benefits:
· More unified configuration and control of the system. The plethora of manager programs and other control functions is reduced.
· Improved consistency of the user interface. Similar functions look and feel the same.
· Improved visual details.37
As one last piece of evidence that a graphical user interface permitting direct
manipulation of objects is the best type of interface available, we present some
information from Ben Shneiderman's book Designing the User Interface: Strategies for
Effective Human-Computer Interaction. Shneiderman lists the advantages and
disadvantages of the five primary interface interaction styles in common use
today.
Advantages and disadvantages of the five primary interaction styles:

menu selection
  Advantages: shortens learning; reduces keystrokes; structures decision making; permits use of dialog-management tools; allows easy support of error handling
  Disadvantages: imposes danger of many menus; may slow frequent users; consumes screen space; requires rapid display rate

form fill-in
  Advantages: simplifies data entry; requires modest training; makes assistance convenient; permits use of form-management tools
  Disadvantages: consumes screen space

command language
  Advantages: is flexible; appeals to "power" users; supports user initiative; is convenient for creating user-defined macros
  Disadvantages: has poor error handling; requires substantial training and memorization

natural language
  Advantages: relieves burden of learning syntax
  Disadvantages: requires clarification dialog; may require more keystrokes; may not show context; is unpredictable

direct manipulation
  Advantages: presents task concepts visually; is easy to learn; is easy to retain; allows errors to be avoided; encourages exploration; permits high subjective satisfaction
  Disadvantages: may be hard to program; may require graphics display and pointing devices

Interaction Styles Table38

36. Ibid., p. 165.
37. Ibid., p. 160.
Looking at the Interaction Styles Table, we can readily see that the direct manipulation
style of interface provides the greatest user advantage. Without a doubt, GUIs are more
difficult to implement, but users are becoming more and more used to (some might say
38. Shneiderman, Designing the User Interface, p. 70.
addicted to) them. Mr. Shneiderman goes on to explain several of the advantages
provided by a direct manipulation interface (more commonly termed a GUI).
Direct manipulation: When a clever designer can create a visual representation of the world of action, the users' tasks can be greatly simplified because direct manipulation of the objects of interest is possible. Examples of such systems include display editors, LOTUS 1-2-3, air-traffic control systems, and video games. By pointing at visual representations of objects and actions, users can carry out tasks rapidly and observe the results immediately. Keyboard entry of commands or menu choices is replaced by cursor-motion devices to select from a visible set of objects and actions. Direct manipulation is appealing to novices, is easy to remember for intermittent users, and, with careful design, can be rapid for frequent users.39
He further elaborates.
The success of direct-manipulation interfaces is indicative of the power of using computers in a more visual or graphic manner. A picture is often cited to be worth a thousand words and, for some (but not all) tasks, it is clear that a visual presentation-such as a map or photograph-is dramatically easier to use than is a textual description or a spoken report. As computer speed and display resolution increase, scientific visualization and graphical interfaces are likely to have an expanding role. If a map of the United States is displayed, then it should be possible to point rapidly at one of 1000 cities to get tourist information. Of course, a foreigner who knows a city's name (for example, New Orleans), but not its location, may do better with a scrolling alphabetical list. Visual displays become even more attractive to provide orientation or context, to enable selection of regions, and to provide dynamic feedback for identifying changes (for example, a weather map). Scientific visualization has the power to make atomic, cosmic, or statistical worlds visible and comprehensible. These approaches to computing might be called "visual reality" in contrast with virtual reality.
Overall, the bandwidth of information presentation seems potentially higher in the visual domain than for media reaching any of the other senses. Users can scan, recognize, and remember images rapidly, and can detect changes in size, color, shape, movement, or texture. They can point to a single pixel, even in a megapixel display, and can drag one object to another to perform an action. User interfaces have been largely text-oriented, so it seems likely that, as visual approaches are explored, some new opportunities will emerge.40
39. Ibid., p. 72.
40. Ibid., pp. 422-423.
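Shneiderman's description of direct manipulation (pointing at a visual representation of an object and acting on it immediately) rests on two mechanisms that every GUI implements in some form: hit testing, which maps a pointer position to the object under it, and dragging, which moves the object itself. The following is a minimal Python sketch; the Icon class and its coordinate conventions are illustrative assumptions, not any particular toolkit's API.

```python
# Minimal model of direct manipulation: hit testing plus dragging.
# The Icon class and coordinates are illustrative, not a real toolkit API.
from dataclasses import dataclass

@dataclass
class Icon:
    name: str
    x: int          # top-left corner, in screen pixels
    y: int
    w: int = 32
    h: int = 32

    def contains(self, px: int, py: int) -> bool:
        """Hit test: is the pointer inside this icon's rectangle?"""
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def pick(icons, px, py):
    """Return the topmost icon under the pointer, or None.
    Later list entries draw on top, so search back to front."""
    for icon in reversed(icons):
        if icon.contains(px, py):
            return icon
    return None

def drag(icon, dx, dy):
    """Direct manipulation: the user moves the object itself;
    there is no command name to remember or type."""
    icon.x += dx
    icon.y += dy

desktop = [Icon("Report", 10, 10), Icon("Trash", 100, 10)]
grabbed = pick(desktop, 15, 20)   # button pressed over "Report"
if grabbed is not None:
    drag(grabbed, 50, 80)         # pointer moved, button released
```

Note how the user's entire "command" is a gesture over a visible object; the contrast with the CD/control-Q dialogue quoted earlier could hardly be sharper.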
We have examined the characteristics of a natural and intuitive graphical user interface
from a conceptual point of view. No matter what graphical control elements we
incorporate (e.g., pull-down menus, 3D buttons, icons, widgets, toolbars), it is vital to
follow the principles and goals discussed in this section if users are to obtain the greatest
benefit from our GUI designs. Visual representation of information allows the high-bandwidth
human visual system to be exploited for rapid communication of large
amounts of complex data. Direct manipulation interfaces (which is what most GUIs are)
provide the user with more advantages than any other
interface style.
Thus far, we have discussed a number of GUI design aspects: the GUI's history, its elements, and
the concepts that make it intuitive and natural to use. But what lies ahead? What
GUI concepts lie around the bend and just over the horizon? We will now consider our
last subject: "Thoughts on Future Graphical User Interface Concepts."
Thoughts on Future Graphical User Interface Concepts
It is extremely exciting to witness the continual development of the computer and its
inseparable companion, the GUI. Futurists visualize computers eventually becoming an
integral part of the life of every human on earth. Computer scientists look forward to the
day when computers will "think for themselves," using advanced neural networks and
artificial intelligence. Some computer scientists claim that computers will go
beyond merely thinking. Robert Jastrow has suggested that computers are evolving.
. . . human evolution is a nearly finished chapter in the history of life . . . We can expect that a new species will arise out of man, surpassing his achievements as he
has surpassed those of his predecessor, Homo erectus. . . . The new kind of intelligent life is more likely to be made of silicon.41
What type of user interface will they possess then? Computer speech recognition is in its
infancy; many feel that it is a long way off, but I think we'll see a true breakthrough in
the next decade or so. Gesture recognition by computers via optical, infrared, or radar
sensors is closer to becoming a technical reality thanks to the virtual reality
(VR) research taking place at several universities today. Before we get into these flights
of fancy, let's briefly look at the latest GUI from Microsoft, Windows 95, to see what sets
it apart from its predecessors. Adrian King introduces us to one of the biggest Windows
95 paradigm shifts—the shift away from the “application-centric” approach to the
“document-centric” concept—in his book Inside Windows 95.
The document-centric interface is the main theme of much of the conceptual work for OLE, Windows 95, and in the future, Cairo. The document-centric approach is derived from the object-oriented concepts that are now increasingly popular in the software industry. Unfortunately, object orientation has become an overused marketing term. There are real examples of its use, as in Next's NextStep system, but the proponents of many a system claim that theirs is an object-oriented approach without really implementing one. OLE and Windows 95 are major steps toward a full object-oriented system, although neither of them is complete in that regard. Microsoft intends that Cairo will be.
A document-centric approach means that the users concern themselves only with documents and not with programs and files. The system itself is responsible for maintaining the relationship between data of a particular format and the application that can manipulate the data. Putting the responsibility on the system ties in with the usability information that Microsoft has gathered from users of Windows. Many users, particularly those introduced to the PC via Windows, not MS-DOS, find it difficult to separate the concepts of programs and of files. To these users, the item of concern is the document they work on-whether it be a letter composed with a word processing application, or a chart of recent sales results prepared with a spreadsheet application. For many people, the application program and the file containing the specific data are conceptually indivisible.
The document-centric approach contrasts with the approach implemented in most systems today, including Windows 3.1. Today you use an application-
41. Robert Jastrow, Time, 20 February 1978, p. 59.
centric model. To carry out some operation-for example, redrawing a sales graph in light of the latest month's results-you must first run the appropriate application, then load the data file, then change the numbers, and then redraw the chart. If you want to include the chart in a report, you also have to know how to run the application that handles your report and then cut and paste the chart from its native application into the report file.
OLE introduced the concept of a compound document. With OLE, many different types of data can be held and edited within a single document. Editing one element of the document involves simply double-clicking on the object. The application appropriate for manipulating that type of data is loaded without any further action from the user. You see and work with only a single document but possibly several different application programs.
The Windows 95 shell provides a document-centric approach to the system. Everything that can be conceptualized as a document has been. Collections of documents form folders (just like file folders), and you can organize folders and documents just as you would organize them in a real filing cabinet.42
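The system-maintained relationship King describes, between a data format and the application that can manipulate it, can be sketched as a simple lookup table. This is a deliberately simplified Python illustration under stated assumptions: the format names and "applications" are hypothetical, and Windows 95's actual association mechanism is far richer than a single dictionary.

```python
# Toy sketch of a document-centric association table: the system, not the
# user, maps a document's format to the application that manipulates it.
# Format names and "applications" are hypothetical illustrations.

associations = {
    ".txt": "word_processor",
    ".xls": "spreadsheet",
    ".bmp": "paint_program",
}

def open_document(filename: str) -> str:
    """Double-clicking a document: look up its format and launch the
    associated application, with no further action from the user."""
    dot = filename.rfind(".")
    extension = filename[dot:] if dot != -1 else ""
    app = associations.get(extension)
    if app is None:
        return f"no association for {filename!r}; ask the user to choose"
    return f"launching {app} with {filename!r}"
```

Under this model the user never runs the spreadsheet program explicitly; opening the sales chart document launches the right application automatically, which is exactly the conceptual indivisibility of program and file that King describes.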
The document-centric approach taken by Windows 95 is a real advance toward making
the GUI even more intuitive and simple to use. Another nice feature is the addition of
multiple clipboards. Users can cut items from spreadsheets, word processors, or drawing
programs and simply set them in the background, ready to be picked up later and placed
where desired. With Windows 95, the GUI will fade yet further into the background,
letting the user concentrate on the work at hand. Windows 95 is fast becoming the "here
and now," so let's look further into the future.
We will consider two schools of thought on where the computer user interface will
go in the future. VR researchers are preparing to let users roam right into the computer to
manipulate 3D (and higher-dimensional) data objects and virtual controls by means of
positioning sensors, feedback gloves, stereo eyephones, and stereo sound. The VR user
will be totally immersed in the interface. On the other hand, we have the "ubiquitous"
school of computing. Ubiquitous is defined as "Being or seeming to be everywhere at the
42. King, Inside Windows 95, pp. 166-167.
same time."43 The ubiquitous school sees computers becoming as common as paint on
the wall, far more numerous than at present, and as unobtrusive as houseplants, yet capable
of wireless intercommunication and possessing an almost sentient quality. Let's consider the
possibilities offered by the VR interface first.
Ben Shneiderman (a modern GUI guru) feels that VR is a virtually unexplored territory
loaded with totally unimagined interface concepts.
Far above the office desktop, much beyond multimedia, and high above the hype of hypertext, the gurus and purveyors of virtuality are donning their EyePhones and DataGloves to explore the total-immersion experience. Whether soaring over Seattle, bending around bronchial tubes to find lung cancers, or grasping complex molecules, the cyberspace explorers (cybernauts, VR Voyagers, or Chipster Columbuses) are playing Lewis and Clark to a grand Montana of the mind. The imagery and personalities involved in virtual reality are often colorful, and journalists have enjoyed the fun as much as the developers and promoters . . .. More sober researchers have tried to present a balanced view by conveying enthusiasm while reporting on problems . . ..44
Much scientific work is under way to prepare the way for safely entering this unexplored
country. A safe long-term stereo visual display is still under development. Some serious
problems relating to extended use of VR goggles (or eye-phones) have already been
identified. If the eye-phones do not provide a virtual horizon (as the user would
experience without the eye-phones on) to help keep the eyes and inner ear
coordinated, new neural pathways can begin to develop to cope with the new VR
situation. These new pathways can dangerously come into play when the user is not in
VR and least expects it.
43. William Morris, ed., The American Heritage Dictionary of the English Language (Boston: Houghton Mifflin Company, 1980), s.v. "ubiquitous."
44. Shneiderman, Designing the User Interface, p. 222.
There are also fears that users will become addicted to the control VR interfaces will
undoubtedly provide. They will perhaps feel that the VR world is more to their liking
than the unpredictable and far less controllable real world.
VR requires a lot of gear. In addition to the visual display there are the head and hand
position-sensing devices, tactile feedback gloves, and sound devices. Work is in progress to develop systems which integrate these elements into a manageable physical package. Viable VR is not here yet, but when it arrives the VR interface will be
extremely graphic in nature. Presently, Microsoft’s Excel, for example, provides the
capability to take 3D graphs and view them from any angle by manipulating the graph
with the mouse. In VR, a user would hold the graph in his hands, expand it to room
filling proportions if desired and, with a simple gesture, manipulate the viewing angle.
With another gesture he could identify any area of the graph he wanted to cut out and
then examine it in greater detail. The smallest nuances or trends in very large amounts of
data would become readily apparent. Electronic conferencing with colleagues from
around the world could be done in VR, greatly reducing a company’s travel expenses.
Employee morale could even be enhanced ... everyone could have a virtual window, or
work in the virtual woods or in a virtual old farmhouse or wherever they like. The
educational potential of VR is limitless. For example, it would not be feasible to send a classroom full of students to Mars (although this may be desirable to teachers at times), but via VR, classrooms of students could experience the trip as though it were real.
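Whether done with a mouse in Excel or with a hand gesture in VR, manipulating the viewing angle of a 3D graph comes down to applying a rotation to the data points. Here is a minimal sketch of that underlying math, assuming nothing beyond the NumPy library (the function name and sample data are invented for illustration):

```python
import numpy as np

def rotate_z(points, angle_deg):
    """Rotate an (n, 3) array of 3D data points about the vertical axis,
    the operation behind dragging a 3D chart to a new viewing angle."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T  # apply the rotation to every point at once

# Three corner points of a toy 3D data set.
data = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

# "With a simple gesture, manipulate the viewing angle": a quarter turn.
turned = rotate_z(data, 90)
```

Chaining similar rotations about the other two axes gives the any-angle viewing described above; a VR gesture would simply feed the measured hand angle into the same computation.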
Timothy Ferris, author of The Mind’s Sky: Human Intelligence in a Cosmic Context had
this to say about his virtual trip to Mars:
I put on the helmet and found myself standing on a rocky promontory, looking down upon a jumble of cliffs and plateaus stained in unearthly shades of ochre, sand-yellow, and plum. Using the computer controls I could descend to the valley floor and wander for miles through the twists and turns of one of the solar system’s most imposing landscapes, or climb high into the pink sky and take in the wider view, studying inky bluffs that marched off toward distant peaks five hundred kilometers away. Another control altered the position of the sun; by turning this knob I could watch the canyon’s colors change from the hot reds of noon to a startlingly alien hue, somewhere between ash and gunmetal blue, at sunset.45
With this technology students would have the ability to experience traveling around the
world or around the solar system. These virtual experiences will have the potential to
touch humans deeper than any previously used communications medium. Those who
have experienced them have a profound sense of actually having been there. Note what
Timothy Ferris says about his Mars experience.
And despite its limitations, the experience of walking on Mars was vivid and immediate, not at all like seeing a movie or a photograph. The way I remember it, I was there.46
Will Windows 2001 include a VR interface? Only time will tell. One thing is certain: VR is definitely coming. GUI designers, put on your thinking caps ( ... what a great name for a VR GUI: "The Thinking Cap"). VR GUIs are going to take future computer users where no one has gone before. Last but by no means least, let's look at Xerox PARC's newest concept, "ubiquitous computing."
Xerox PARC is at it again; its "think tank" legacy is well deserved. Today its researchers are hammering out a new interface paradigm which they call "ubiquitous computing." The
45. Timothy Ferris, The Mind's Sky: Human Intelligence in a Cosmic Context (New York: Bantam Books, 1992), p. 49.
46. Ibid., p. 50.
idea behind ubiquitous computing, or "embodied virtuality," is to take computers and embed them in just about everything: Post-it-sized "tabs," note-paper-sized "pads," and chalkboard-sized "live boards." Good old workstations will continue to be necessary in the ubiquitous paradigm. The GUI will be an integral element of this concept, included at all levels. The "tab" is a small computer which could be used as an active badge, a key, a pointer, a gesture interpreter, etc. The "pad" (which includes a stylus and a few buttons) could be used as an electronic book, an electronic sketch pad, an electronic sheet of paper, etc. The "live board" could be used as a white-board, an electronic bulletin board, a large computer display, etc. In ubiquitous computing, all computers have the capacity to
communicate with one another and with any peripheral device. New GUI concepts will
have to be developed to extract the full potential of these integrated systems. At this time,
display technology to realize practical, reasonably priced, high resolution live boards as
large as blackboards does not exist—but Xerox PARC is working on it. Presently,
communications technology permitting hundreds of mobile computers per room to
communicate with one another does not exist— but Xerox PARC is working on it. Count
on it—invest in it—Xerox PARC is inventing the future again. Mark Weiser (head of the
Computer Science Laboratory at Xerox PARC) has a romantic vision of what the future
of ubiquitous computing is going to be. His imaginative vision puts me in mind of
another former Xerox PARCer—Dr. Alan Kay. To close this section let’s take a look,
through the eyes of Mark Weiser, at the future of ubiquitous computing.
Neither an explication of the principles of ubiquitous computing nor a list of the technologies involved really gives a sense of what it would be like to live in a world full of invisible widgets. Extrapolating from today's rudimentary fragments of embodied virtuality is like trying to predict the publication of Finnegans Wake shortly after having inscribed the first clay tablets. Nevertheless, the effort is probably worthwhile:
Sal awakens; she smells coffee. A few minutes ago her alarm clock, alerted by her restless rolling before waking, had quietly asked, "Coffee?" and she had mumbled, "Yes." "Yes" and "no" are the only words it knows.
Sal looks out her windows at her neighborhood. Sunlight and a fence are visible through one, and through others she sees electronic trails that have been kept for her of neighbors coming and going during the early morning. Privacy conventions and practical data rates prevent displaying video footage, but time markers and electronic tracks on the neighborhood map let Sal feel cozy in her street.
Glancing at the windows to her kids' rooms, she can see that they got up 15 and 20 minutes ago and are already in the kitchen. Noticing that she is up, they start making more noise.
At breakfast Sal reads the news. She still prefers the paper form, as do most people. She spots an interesting quote from a columnist in the business section. She wipes her pen over the newspaper's name, date, section and page number and then circles the quote. The pen sends a message to the paper, which transmits the quote to her office.
Electronic mail arrives from the company that made her garage door opener. She had lost the instruction manual and asked them for help. They have sent her a new manual and also something unexpected-a way to find the old one. According to the note, she can press a code into the opener and the missing manual will find itself. In the garage, she tracks a beeping noise to where the oil-stained manual had fallen behind some boxes. Sure enough, there is the tiny tab the manufacturer had affixed in the cover to try to avoid Email requests like her own.
On the way to work Sal glances in the foreview mirror to check the traffic. She spots a slowdown ahead and also notices on a side street the telltale green in the foreview of a food shop, and a new one at that. She decides to take the next exit and get a cup of coffee while avoiding the jam. Once Sal arrives at work, the foreview helps her find a parking spot quickly. As she walks into the building, the machines in her office prepare to log her in but do not complete the sequence until she actually enters her office. On her way, she stops by the offices of four or five colleagues to exchange greetings and news.
Sal glances out her windows: a gray day in Silicon Valley, 75 percent humidity and 40 percent chance of afternoon showers; meanwhile it has been a quiet morning at the East Coast office. Usually the activity indicator shows at least one spontaneous, urgent meeting by now. She chooses not to shift the window on the home office back three hours-too much chance of being caught by surprise. But she knows others who do, usually people who never get a call from the East but just want to feel involved.
The telltale by the door that Sal programmed her first day on the job is blinking: fresh coffee. She heads for the coffee machine.
Coming back to her office, Sal picks up a tab and "waves" it to her friend Joe in the design group, with whom she has a joint assignment. They are sharing a virtual office for a few weeks. The sharing can take many forms-in this case, the
two have given each other access to their location detectors and to each other's screen contents and location. Sal chooses to keep miniature versions of all Joe's tabs and pads in view and three-dimensionally correct in a little suite of tabs in the back corner of her desk. She can't see what anything says, but she feels more in touch with his work when noticing the displays change out of the corner of her eye, and she can easily enlarge anything if necessary.
A blank tab on Sal's desk beeps and displays the word "Joe" on it. She picks it up and gestures with it toward her live board. Joe wants to discuss a document with her, and now it shows up on the wall as she hears Joe's voice:
"I've been wrestling with this third paragraph all morning, and it still has the wrong tone. Would you mind reading it?"
Sitting back and reading the paragraph, Sal wants to point to a word. She gestures again with the "Joe" tab onto a nearby pad and then uses the stylus to circle the word she wants:
"I think it's this term 'ubiquitous.' It's just not in common enough use and makes the whole passage sound a little formal. Can we rephrase the sentence to get rid of it?"
"I'll try that. Say, by the way, Sal, did you ever hear from Mary Hausdorf ?"
"No. Who's that? " "You remember. She was at the meeting last week. She told me she was
going to get in touch with you." Sal doesn't remember Mary, but she does vaguely remember the meeting.
She quickly starts a search for meetings held during the past two weeks with more than six people not previously in meetings with her and finds the one. The attendees' names pop up, and she sees Mary.
As is common in meetings, Mary made some biographical information about herself available to the other attendees, and Sal sees some common background. She'll just send Mary a note and see what's up. Sal is glad Mary did not make the biography available only during the time of the meeting, as many people do....
In addition to showing some of the ways that computers can enter invisibly into people's lives, this scenario points up some of the social issues that embodied virtuality will engender. Perhaps key among them is privacy: hundreds of computers in every room, all capable of sensing people near them and linked by high-speed networks, have the potential to make totalitarianism up to now seem like sheerest anarchy. Just as a workstation on a local-area network can be programmed to intercept messages meant for others, a single rogue tab in a room could potentially record everything that happened there.
Even today the active badges and self-writing appointment diaries that offer all kinds of convenience could be a source of real harm in the wrong hands. Not only corporate superiors or underlings but also overzealous government officials and even marketing firms could make unpleasant use of the same information that makes invisible computers so convenient.
Fortunately, cryptographic techniques already exist to secure messages from one ubiquitous computer to another and to safeguard private information
stored in networked systems. If designed into systems from the outset, these techniques can ensure that private data do not become public. A well-implemented version of ubiquitous computing could even afford better privacy protection than exists today.47
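Weiser's closing observation, that cryptographic techniques already exist to secure messages between ubiquitous computers, can be illustrated with a short sketch using only Python's standard hmac and hashlib modules. The shared key and messages below are invented for illustration, and a real system would layer encryption and key exchange on top of this authentication step:

```python
import hashlib
import hmac

SHARED_KEY = b"tab-and-board-secret"  # hypothetical pre-shared device key

def seal(message):
    """Compute an authentication tag so a rogue tab cannot forge messages."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    """Receiver recomputes the tag; compare_digest resists timing attacks."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A tab announces fresh coffee to the live board by the door.
note = b"display: fresh coffee"
tag = seal(note)
```

A receiving device accepts the note only if verify(note, tag) is true, so a tampered or forged message is rejected, which is exactly the safeguard Weiser describes designing in from the outset.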
Conclusions
Communication by means of graphic symbols is far more effective than that achieved by
text alone. Graphics is a language that speaks directly to the mind—pictures literally are
worth thousands of words. The ability of the mind to readily recall graphic images is a
powerful aid in learning to use programs which incorporate GUIs. Thus, GUI equipped
programs are faster to learn and easier to use than programs which incorporate other
forms of user interface (menus, command line, natural language, etc.). The development
of the GUI to its present form was a process that took place over approximately 35 years,
starting in the early 1960s. Generally unknown is the fact that Xerox PARC played a
major role in developing many of today’s popular GUI elements. The extreme popularity
and success of modern GUIs is a tribute to their usefulness. Microsoft’s Windows has
grabbed center stage of the GUI market and shows no signs of slowing down.
GUIs have enabled millions of users to tap into the power of computers without having
to have degrees in computer science. GUI designers will be busy for a long time,
dreaming up GUI concepts capable of tapping into the power of virtual reality and of
new computing paradigms such as ubiquitous computing.
47. Mark Weiser, "The Computer for the 21st Century," Scientific American (September 1991), pp. 102-104.
BIBLIOGRAPHY
1. Aristotle. “On Interpretation.” In Great Books of the Western World, edited by Mortimer J. Adler, volume 7, 25-36. Chicago: Encyclopedia Britannica, Inc., 1990.
2. _______. “On Memory and Reminiscence.” In Great Books of the Western World, edited by Mortimer J. Adler, volume 7, 690-695. Chicago: Encyclopedia Britannica, Inc., 1990.
3. Booth-Clibborn, Edward and Daniele Baroni. The Language of Graphics. New York: Harry N. Abrams, Inc., Publishers, 1980.
4. Descartes, René, “The Third Set of Objections With The Author’s Replies.” In Great Books of the Western World, edited by Mortimer J. Adler, volume 28, 360-369. Chicago: Encyclopedia Britannica, Inc., 1990.
5. Fehr-Snyder, Kerry. "Macintosh No Longer a Loner: Clones Do a Good Job of Emulating Apple." The Arizona Republic, 28 August 1995, p. E1.
6. Ferris, Timothy. The Mind’s Sky: Human Intelligence in a Cosmic Context. New York: Bantam Books, 1992.
7. Goldberg, Carey of The New York Times. "Windows on the World: Microsoft Goes Grand, Global, Gaga." The Denver Post, 24 August 1995.
8. Jastrow, Robert. Time, February 20, 1978, p. 59.
9. Kay, Alan C. Doing With Images Makes Symbols: Communicating With Computers. Produced and Directed by University Video Communications sponsored by: Higher Education Marketing Group, Apple Computer Incorporated, 1988.
10. ________. "User Interface: A Personal View." In The Art of Human-Computer Interface Design, edited by Brenda Laurel, 194-197. Reading, MA: Addison-Wesley Publishing Company, 1990.
11. ________. "The Early History of Smalltalk." ACM SIGPLAN Notices 28, no. 3 (March 1993).
12. King, Adrian. Inside Windows 95. Redmond, Washington: Microsoft Press, 1994.
13. Lemmons, Phil. “An Interview: The Macintosh Design Team; The Making of Macintosh.” Byte (June 1987): 58.
14. McCorduck, Pamela. The Universal Machine: Confessions of a Technological Optimist. New York: McGraw-Hill Book Company, 1985.
15. Morris, William, ed. The American Heritage Dictionary of the English Language. Boston: Houghton Mifflin Company, 1980, s.v. "ubiquitous."
16. Newman, William M., and Robert F. Sproull. Principles of Interactive Computer Graphics. 2d ed. New York: McGraw-Hill Book Company, 1979.
17. Petzold, Charles. Programming Windows 3.1. Third Edition. Redmond, Washington: Microsoft Press, 1992.
18. Plato, “Cratylus.” In Great Books of the Western World, edited by Mortimer J. Adler, volume 6, 85-114. Chicago: Encyclopedia Britannica, Inc., 1990.
19. Shneiderman, Ben. Designing the User Interface: Strategies for Effective Human-Computer Interaction. 2d ed. Reading, MA: Addison-Wesley Publishing Company, 1992.
20. Smith, David Canfield, et al. "Designing the Star User Interface." Byte (April 1982): 242.
21. Swartz, Jon, and Robert Hess. "Apple: Market Share No Longer Top Priority." MacWEEK News 9, no. 27, 10 July 1995.
22. The New Grolier Multimedia Encyclopedia, Release 6. On-line Computer Systems, Inc. 1993.
23. Warfield, Robert. “The New Interface Technology; An Introduction to Windows and Mice.” Byte (December 1983): 218.
24. Weiser, Mark. "The Computer for the 21st Century." Scientific American, September 1991, 102-104.