
Futures 35 (2003) 779–786 www.elsevier.com/locate/futures

Artificial intelligence and the real world

Anne Jenkins∗

Department of Sociology & Social Policy, University of Durham, Durham DH1 3HR, UK

1. Introduction

The scale and significance of the potential consequences of AI make it an important futures concern, perhaps more so than other emerging technologies, particularly because AI is concerned with replicating and enhancing intelligence, and this concept, related as it is to consciousness, is at the heart of human identity. Added to this we have the uncertainty over what capabilities are being developed, and the real concern over not being able to control a new and separate train of evolution that we may be setting in motion. It is clear that the developments in AI are overdue for exploration and futures analysis.

The real question of artificial intelligence is not whether Bostrom [1] is right that machines will outsmart humans in the near future; I take issue with his position later on. The first question to ask of AI is ‘exactly what is being developed and why?’. Then a key concern becomes how this intelligence will interact with our world, by which I mean both how well this intelligence is designed for practical use and how well we will adapt to its adoption into our society.

2. Progress, predictions and definitions

There are currently two main branches of AI development, distinguished by their, for the moment at least, separate goals of building more intelligent machines and understanding human cognition. The possible far end-points of these development paths for AI are hyper-intelligent machines with no volition or consciousness; hyper-intelligent human–machine mergers that have consciousness; and a new entity, a truly conscious machine. However, the current goals of AI are about making intelligent artifacts (that do not have volition) rather than these autonomous agents, although of course this may change.

∗ Tel.: +44-0-191-386-5267. E-mail address: [email protected] (A. Jenkins).

0016-3287/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved. doi:10.1016/S0016-3287(03)00029-6

The premise implied by the title of Bostrom’s paper, ‘When machines outsmart humans’, i.e. that machines will have their own volition, self-awareness and ambition, is that fully intelligent autonomous beings with a ‘general purpose smartness’ are likely to be created successfully in the mid-term. However, in forming his prediction he has confused progress towards the immediate and plausible goals of AI with progress towards the more ambitious aims of creating autonomous, hyper-intelligent beings. The developments he discusses are not only generations of development away; they are likely to result only from a completely separate path of development based on a very different approach to intelligence. That said, this leap to the more ambitious goals of AI is useful futures speculation, although it doesn’t warrant a description as probable, especially not on his relatively short time-scale. After all, many AI developers doubt the possibility of a fully autonomous and conscious AI [2,3].

Bostrom’s predictions are based on the sum of extrapolated likely developments in hardware, software and input–output mechanisms. This reflects a woefully limited and uncritical approach to defining intelligence. For example, he claims that computer–environment interaction is a trivial problem, easily solved with robotic limbs and video cameras. This is by far his most astounding claim. It assumes that this level of experience (signal processing) is somehow equivalent to the human level of experience, complete with our cultural and social frameworks and the role played by emotion in perception and cognition. Although the definition of intelligence is a complex problem, if we leave it aside, as Bostrom does, we disguise the real and difficult question of what we are trying to build (i.e. what capabilities we are designing): the scope of the artificial intelligence. That is a crucial issue.
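To make concrete what this sum of extrapolations amounts to, here is a minimal sketch of the arithmetic that underpins predictions of this kind. All of the figures (starting capability, target, doubling period) are illustrative assumptions for this sketch, not values taken from Bostrom’s paper:

```python
import math

# A Moore's-law style extrapolation of hardware capability, the kind of
# reasoning on which such hardware predictions rest. All figures are
# illustrative assumptions, not values from Bostrom's paper.
current_ops = 1e12    # assumed capability of c. 2003 hardware (ops/s)
target_ops = 1e16     # assumed "human-level" target (see Section 2.1)
doubling_years = 1.5  # assumed doubling period for hardware capability

doublings = math.log2(target_ops / current_ops)  # doublings needed
years = doublings * doubling_years               # time those doublings take
print(f"{doublings:.1f} doublings needed, roughly {years:.0f} years")
```

The objection raised above is that however plausible this arithmetic looks, it measures only processing capacity; it says nothing about whether the resulting machine could perceive or act with anything like human intelligence.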

AI developers have taken a pragmatic approach and are working with parts of what we call intelligence (the ones we can model). This will help us to understand some of our cognitive processes better, and to build better machines, but it is a wildly reductionist, technocratic, and computational view of intelligence. It seems unlikely that this will ever lead to a replication of the multifaceted, human-type intelligence without vast improvements in our definition and fundamental understanding of the related physical, perceptual, social, and emotional facets of intelligence.

2.1. Measuring intelligence: balancing hogs

“This intelligence-testing business reminds me of the way they used to weigh hogs in Texas. They would somehow tie the hog to a plank and search around till they found a stone that would balance the weight of the hog and put that on the other end of the plank. Then they’d guess the weight of the stone.” John Dewey, 1890.

AI developers widely concentrate on human-level intelligence at the expense of human-type intelligence. Thus not only do we have a definition problem, we also have a measurement problem. To AI developers, and to Bostrom, levels of intelligence are expressions of processing power and speed of calculations (like millions of instructions per second). They have estimated human-level intelligence by estimating the number of neurons in the brain and the number of interactions between them, and they then equate the number of instructions in the brain with those in machines as a match of intelligence level. I seriously doubt that such a limited expression of intelligence will serve us well in the business of considering or developing artificial intelligence.
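The arithmetic behind this neuron-counting style of estimate is easy to state explicitly. The following is a minimal sketch using round figures of the kind commonly cited in this literature; the neuron count, connections per neuron, and firing rate are illustrative assumptions, not measurements or figures from Bostrom’s paper:

```python
# Back-of-the-envelope estimate of the brain's "processing power", in the
# style criticised above: treat every synaptic signal as one instruction.
# All figures are rough, illustrative assumptions.
neurons = 1e11              # assumed number of neurons in the human brain
synapses_per_neuron = 1e3   # assumed average connections per neuron
firing_rate_hz = 1e2        # assumed signals per connection per second

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"Estimated 'brain capacity': {ops_per_second:.0e} ops/s")  # ~1e16
```

On this accounting, a machine matching roughly 10^16 operations per second would count as having ‘human-level intelligence’; the complaint here is that such a figure measures a hardware budget, not intelligence.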

The major elements missing from Bostrom’s description of AI are the role of emotion, the social nature of intelligence and the need for this intelligence to operate within a philosophical framework. Equally importantly, Bostrom’s picture of the work in AI is also acultural and non-experiential. He doesn’t acknowledge the issue of how to accommodate the different ways of knowing at the heart of different cultures, and what that means for replicating ways of thinking and learning. For example, will an AI tolerate uncertainty over which of numerous approaches and answers should be considered preferable? Overlooking the role of emotion in perception, interaction and decision-making ignores the experiential nature of learning and intelligence, and this raises doubts as to whether AI with ‘general purpose smartness’ can be developed at all with a computational approach. This kind of AI work stems from a culturally and epistemologically impoverished approach to intelligence.

3. Intelligence is as intelligence does: acting in the real world

It is clear that we need a better definition of intelligence, one that allows AI to act successfully in the real world. Existing in a physical world requires physical acts. In defining intelligence, we have to include intelligent actions. A machine would have to act intelligently rather than just possess intelligence.

John McCarthy, AI pioneer, acknowledges this and stresses the need for a philosophical framework in developing a successful AI. “If a computer programme is to behave intelligently in the real world, it must be provided with some kind of framework into which to fit particular facts it is told or discovers. This amounts to at least a fragment of some kind of philosophy, however naïve.” [4].

3.1. Feeling, thinking

“Thoughts are the shadows of our feelings—always darker, emptier and simpler.” Nietzsche, The Gay Science, 1882–87.
“A really intelligent man feels what other men only know.” Montesquieu, 1736.

One crucial point in considering whether artificial (and evolving) intelligences of human type will come into being is how an artificial intelligence perceives the world and the experience it has of it. Our human intelligence is absolutely linked to our experience of the world. This comprises physical experience of the physical world, mental/cognitive experience of the cognitive world of thoughts and ideas, and the interaction between the two (see Popper’s description of World 3 and the Mind–Body problem [5]).


Picard’s convincing work on affective computing takes account of the role of emotion in perception and cognition. She shows us that emotions play a critical role in rational decision-making, perception, human interaction and in human intelligence. “The role of emotion in human cognition is essential; emotions are not a luxury”, she states [3].

Our perception, the nature of our physical experience of the real world, will then be very different from that of machines. This means evolved machine intelligence will be very different from human intelligence, precisely because our feedback consists of physical, emotional and cognitive aspects, whereas computer intelligence and feedback is confined to cognitive (and particularly computational) aspects. The fact is that if AIs don’t have an emotional aspect to learning, their learning will be different, and so their knowledge, decisions and actions will be different. What we must consider is whether this knowledge will be as applicable to the ‘real’ world.

The considerable work being done on robotics, sensors, and image processing does not address the role of emotion in perception. Even if some human–machine mergers are developed, rather like the Borg featured in Star Trek, and have the full range of human experience, we don’t know to what extent the human and machine parts will be integrated. This leaves us with questions about linking human-learned knowledge and downloaded knowledge.

Some level of emotion capability would dramatically change the type of experience and physicality the AI has. For example, what would an emotional aspect (like pain, dread, fear, excitement) to learning through experience do to its capability for ‘rational’ decision-making? Picard highlights some important logical and ethical questions and dilemmas surrounding the use of emotions in computers. For example, should machines be allowed to manipulate our responses by the use of simulated emotions? If machines with the ability to feel emotions were developed, should they be allowed to conceal or disguise them, as humans do [6]?

3.2. The social nature of intelligence

Finally, Burke makes a strong case for the integration of the social into the construction of AI. He suggests that to properly address the question of what it means to have a mind, or how mentality arises and functions in human experience, we need an account of how our thinking relates to the world [7].

Burke draws our attention to the need to take account of the interactive nature of experience, and argues convincingly that designing software for an artificial agent’s head is not enough. Without some kind of socialization, an agent will have no way to classify and hence objectify itself, and so it will not have the kind of constitution it takes to engage in mental activity. He stresses that socialization must be worked into the process of building a thinking machine. The AI enterprise, he claims, will not begin to achieve its goals of building humanly intelligent and autonomous machines without introducing a social dimension into the theoretical picture [7].

Currently, most AI is not about creating artificial, autonomous human-like beings, but about creating intelligent artifacts. However, in addressing the shortcomings of the reductionist approach to conceptualising intelligence, it seems that if we are to improve artificial intelligences we need to give them a philosophical framework [4], some emotional capability [3] and a socialization process [7]. These eminent members of the AI community suggest that these qualities are necessary not for intelligence but for acting intelligently in our world. So, it seems that in order to make better AI we have to make them more human. Not only is this more difficult, as we don’t yet have a good understanding of these aspects of intelligence; there is also a danger that we may have less chance of controlling the resulting AI, because they will be less predictable.

4. Risks and consequences

There are clearly many concerns in the development of this new technology. The promise of advanced intelligence needs to be balanced by responsibility, from both creator and created. We should hope that we can openly discuss the values of those developing what may become living entities, as we might expect to see a little of the creator in the creation.

4.1. Dominating language, competitive values

Bostrom’s paper uses the language of threats. The use of ‘outsmart’, for example, implies volition and ambition: that the machine is trying to get the better of, to dominate, humans. But why assume this? It could happen if the machine values itself over humans on the basis of higher intelligence, but this would happen only if the programmers instil these values. It would be ironic to implicitly build in values that prize intelligence over all else (which could easily be the values held by the developers; why else would they strive for greater intelligence?), only to suffer as our creations live up to these values by downgrading our importance.

‘Outsmart’ also implies competition: competition for control, power and resources, and that is the basis of the threat of AI. But this perspective is the dominant, western, masculine, technological, scientistic paradigm. Where is the co-operative, non-aggressive paradigm? Perhaps the biggest threat of AI actually stems from this perspective of AI: that the paradigm leading the creation of AI is creating it in its own image. Now that could be nasty. Would we reap our current problems (domination, competition, conflict, exploitation) magnified by something we couldn’t control?

4.2. Control, dependence and obsolescence

“Machines are worshipped because they are beautiful and valued because they confer power; they are hated because they are hideous and loathed because they impose slavery.” Bertrand Russell, 1910.

The dilemma Russell elegantly describes sums up much of the current debate about the new technology. All new technology poses us the question of control: who controls the technology, who has access to it, who benefits from it and who bears the different costs? The initial control of course lies with those designing it, deciding how and what to design. In the case of AI, though, the issue of control is two-fold, because this technology also has the possibility (remote, maybe) to control us. (Advanced nanotechnology may also have this possibility, by controlling aspects of our physical bodies and environment.) This is a real possibility, and it is not well addressed by the AI community at present. Understandably so, as this scenario is far beyond what they are currently working on and their current levels of understanding. They can’t provide answers because they are likely generations and generations of developments away from this possibility. Still, we should keep posing the questions. Let’s hope that with the actual possibility come possible answers.

An important consequence of advanced AI is to do with the fact that our identity is developed through intellectual endeavour, through thinking. Our learning is a large part of who we are and who we become. Kurzweil, Bostrom and others foresee the ability to download knowledge and ‘minds’ [8,1]; we really need to ask what downloading would do to this learning ability. Wouldn’t downloading in part be treating people as machines? Some developers describe the possibility of using AI as ‘cognitive prostheses’, benefiting and empowering those wearing them. However, there is no discussion of whether we would become dependent on these, nor of whether we could lose our ability to learn if a machine does all our learning for us, or whether we could instead develop machines that actually aimed to develop our human learning capabilities. My hope would be that AI’s role as learning-support for humans could be the ‘killer-application’ for AI, so that our intelligence develops as well. If advanced AI becomes a reality we will have to change with them; we could in fact use them to help us change; we could co-evolve.

Kurzweil suggests that in a scenario where successful, benign AI are fully integrated into the systems of our society, there will be no work for us humans, and we will become obsolete. I have to agree with Kurzweil here that the concept of being made obsolete rather depends on our having had a purpose in the first place [9]. Do humans have a purpose for existing? Are we useful in some cosmic way? I’m loath to reduce our existence to a practical purpose. That aside, this scenario treats AI not as a living entity but as a tool, and we must ask: how will we know how to place, or prize, these entities in our world?

Clearly, how we see AI interacting in our society depends on the qualities the AI have, and on whether we deem them to have consciousness rather than merely simulating it (no easy decision, that).

4.3. Living, thinking, being

“Sometimes I just sits and thinks, and sometimes I just sits.” An old man in a fable, asked what he is doing sitting quietly on a bench.

We must be clear that thinking is not the same thing as life. A machine that thinks does not necessarily have life, although it could do; it depends on what level of thinking it does. Some machines already perform some of the cognitive processes involved in thinking, and they are not considered alive. And to be sure, a machine that thinks and has self-awareness, volition and consciousness could be said to have life, even if it had no physical form.

There is much more to human living than thinking (by which I mean all of information processing, acting intelligently, learning, musing, creating, questioning, imagining); there is just being as well. If our ability to think is what separates us from the animals, it might well be that it is our ability to just be, our ability to ‘not think’, that would separate us from the thinking machines.

4.4. Our role for intelligence

In much of the literature promising the benefits of AI, it seems that better intelligence has become a panacea. We may be motivated to increase our intelligence in the hope of solving technical and societal problems, but there is a considerable gap between intelligence and outcomes in this real, political, divided world. I’ve long held the opinion that intelligence is over-rated, especially when it comes to solving many of the problems humans face. In some cases we may know what the best course of action is and yet find ourselves unable to take it. The psychological make-up and behaviour of humans is more influential in most outcomes than a simple lack of intelligence. It seems to me that kindness and compassion would be more useful than higher intelligence, especially if we develop the cold, computational kind of intelligence we are currently modelling.

4.5. Them and us

My sympathies lie with those technologists who are uneasy with creating things we can’t control [10]. However, I can’t help thinking that if advanced AI is created, and it behaves as if it were alive, should we really be talking in terms of control? Shouldn’t we be asking whether we can live with them, especially if there is doubt about our ability to know whether they have consciousness? In Orson Scott Card’s Ender trilogy, alien species are classified by their degrees of ‘foreign-ness’. They are “Utlanning—the human stranger from another part of our world; Framling—the human stranger from another world; Raman—the stranger that we recognise as human, but is of another species; and Varelse—the true alien, which includes all animals, for with them no conversation is possible. They live, but we cannot guess what purposes or causes make them act. They might be intelligent, or self-aware, but we cannot know it.” [11]. How would we classify these AI if they were created? Will we create Raman or Varelse?

5. Conclusion

If we continue developing AI with a limited, computational definition of intelligence, it is doubtful that autonomous AI, i.e. AI that can act in the real world, will be developed. However, even if we do develop AI, taking into account the perspectives of Picard, McCarthy and Burke, the AI (autonomous or otherwise) will still have a different way of thinking and a different intelligence from humans.

‘When machines outsmart humans’ is the wrong stance to take in considering the futures of AI, not just because of the underlying competitive values outlined above, but also because it implies that advanced, autonomous AI will be directly comparable to humans. But if we can be sure of anything, we can be sure that they will be vastly different to us. This is important because what we need to know is whether we can live with them. Will there be any, or enough, understanding between the two races to allow real conversation? We can place our hope in finding ways to co-evolve. Co-evolution could provide the key to keeping the lines of communication open, and to narrowing the gap between our ways of thinking.

“The more intelligent one is, the more men of originality one finds. Ordinary people find no difference between men.” Pascal, Pensées, 1670.

If Pascal is right, and if, after all, we can develop AI to include all the required elements of intelligence, maybe it will have the capacity to value our originality. Maybe it will take an artificial, superior intelligence to teach us why we should respect and value our fellow humans.

References

[1] N. Bostrom, When machines outsmart humans, Futures 35 (2003) 7.
[2] R. Penrose, The Emperor’s New Mind, OUP, Oxford, 1989.
[3] R. Picard, Affective Computing, MIT Media Laboratory Perceptual Computing Section Technical Report No. 321, 1995. http://www.media.mit.edu/~picard/.
[4] J. McCarthy, What has AI in common with Philosophy? Stanford University, 1996. http://www-formal.stanford.edu/jmc/aiphil/aiphil.htm.
[5] K. Popper, Knowledge and the Body-Mind Problem, Routledge, London, 1994.
[6] R. Picard, What does it mean for a computer to “have” emotions?, in: R. Trappl, P. Petta, S. Payr (Eds.), Emotions in Humans and Artifacts, MIT Press; and MIT Media Laboratory Technical Report #534, 2003.
[7] T. Burke, Dance Floor Blues: the case for a social AI, Stanford Humanities Review 4 (2): Constructions of the Mind, 1995. http://www.stanford.edu/group/SHR/4-2.
[8] R. Kurzweil, Your New Mind, Scientific American Presents, 2002; American Association of Artificial Intelligence, 1997.
[9] R. Kurzweil, The Age of Intelligent Machines: Our Concept of Ourselves, Kurzweil Technologies Inc, 1996.
[10] B. Joy, Why the future doesn’t need us, Wired 8 (2000) 4.
[11] O. Scott Card, Speaker for the Dead, Orbit, United Kingdom, 1986.