
CHAPTER 8

Thinking of Yourself as a Machine

When confronted with the AI vision of superintelligent computers of the future, computers that could serve as psychotherapists, judges, or physicians, many react with a force of feeling that surprises them. Some try to neutralize their discomfort by asserting the impossibility of artificial intelligence. Others insist that even if possible, it should not be allowed. The vehemence of response expresses our stake in maintaining the line between the natural and the artificial, between the human and the mechanical. Discussion about computers becomes charged with feelings about what is special about people: their creativity, their sensuality, their pain and pleasure. But paradoxically, when faced with a machine that shows any degree of "intelligence," many of these same people seem pulled toward treating the machine as though it were a person.

Jean is an airline ticket agent who uses a computer to make customer reservations. She first represented the computer as a neutral object: programmed, passive, completely under the control of its operators, threatening only in its impersonality. But then she moved on to more ambivalent descriptions.

Jean often confronts angry clients. The planes are overbooked; the computer has been programmed on the assumption that some people won't show, but occasionally everybody shows, and not everyone can go. Jean's excuse is, "The computer fouled up." I talk to her about what she means. Is this a "manner of speaking"? It is not. She seems to experience the computer as an autonomous entity that can act on its own behalf and is thus "blamable."


This anthropomorphization has consequences. It means that when things go wrong Jean need not call the policies of her company into question or lay blame on fellow workers. The anthropomorphized machine gives her an out. She can sympathize with her inconvenienced clients without jeopardizing her relationship with co-workers or her security as a "company person."

The simplest force that makes the computer seem more than a machine among other machines is its behavior. Jean asked her computer questions and it gave her answers. She tells me that "of course people give the computer the flight schedules, they program them in so it will know the rules, but then it uses its own mind." Like the ELIZA program whose psychotherapeutic dialogue drew people into its "confidence," Jean's computer behaves in a way that encourages her to treat it as a person.

Jean's computer did not draw her into an explicitly metaphysical discourse about minds and machines. But like the child who argues about Merlin's cheating, she has taken a step toward allowing a continuity between the psychologies of machines and people.

Jean's anthropomorphization is naive. She treats the computer as a black box, and she doesn't look inside. But computers encourage their anthropomorphization because of more than their behavior. In an important sense the computer is irreducible. It is hard to block the temptation to personify the computer by saying what it "really" is, hard to block the suggestion that the computer "thinks" by saying what it "really" does. It is hard to capture the computer by seeing it in terms of familiar objects or processes that existed before it was invented. The computer is not "like" anything else in any simple sense.

Anthropomorphization and Irreducibility

Airplanes come in all shapes and can be described in all sorts of ways, but there is no conceptual problem in stating what they do: they fly. There is no equally elegant, compelling, or satisfying way of defining the computer by its function. You can say "it computes," and the computer scientist can set up a conceptual frame of reference in order to define "the computable." But even then what has been isolated as "the essential computer" presents no easy analogies with other objects in the world (as the airplane does the bird), except, of course, for its analogies with people.


Some might say that no matter how complex the computational product (a medical diagnosis, a move in a chess game) "all the computer really does is add." In a certain sense this is true. But saying that a computer "decided to move the queen by adding" is a little bit like saying that Picasso "created Guernica by making brushstrokes." Reducing things to this level of "local" description gives no satisfying way to grasp the whole. When we talk about perception, for example how we grasp visual images, there is a tension between the atomic and the gestalt. Do we build up the picture from discrete bits of information or do we grasp it as a whole? Similarly, in thinking about computers, there is a tension between the local simplicity of the individual "acts" that comprise a program and the global complexity that can emerge when it is run. When you talk about the overall behavior of the program, descriptions of local simplicities seem off the point, and the temptation is to see the machine as something that borders on life.

Computers are certainly not the only machines that evoke anthropomorphization. We often talk about machines, and even to machines, as though they were people. We complain that a car "wants to veer left." We park it on a slope and warn it to "stay put." But usually, when we "talk to technology," we know that any voluntary action we may have ascribed to a machine is really a series of unambiguously "mechanical" events. We know that the pressure of the emergency brake will prevent gravity from pulling a car down a hill. But when we play chess with a computer and say that the computer "decided" to move the queen, it is much harder to translate this decision into physical terms.

In the early nineteenth century, the exhibition of what was claimed to be a chess-playing automaton created a sensation.1 The machine was a fake. Inside the chess-playing machine there was a chess-playing man. No one knew how to make a machine with the complexity of behavior that goes into playing even poor chess. No one knew what such a machine might be made of: wheels, gears, levers, strings, pipes with flowing liquids? And even if one knew what "parts" to use, what principles could guide their organization?

It is now possible to walk into a toy store and buy a chess-playing machine that probably plays a better game than the nineteenth-century man in the machine.


We have computers programmed to play chess, but not because anyone found the right "parts" to use. Indeed, one might say that the breakthrough was the realization that any particular physical "part" is irrelevant. The physical parts of a computer mark states in a process. Most computers are made of electrical markers, but they don't have to be. A computer has been constructed out of Tinkertoy parts. It was made at MIT to dramatize that what is important about a computer is the machine's ability to embody a process, to specify a sequence of rules. In a very real sense, the chess machine and the Tinkertoy machine, like all computers, are not made of wood, or water, or silicon. They are made of logic. And thinking about the core of a machine as the exercise of logic leads people back to thinking of the computer as mind.

But despite these encouragements to personify computers, people have a stake in seeing themselves as different. They assert this difference, much as we saw children do. They speak of human love, sensuality, and emotion. They speak of the computer's lack of consciousness, originality, and intention. In the end, many sum up their sense of difference in the statement "Computers are programmed; people aren't," or "Computers only do what they are programmed to do, nothing more, nothing less." This last response has a history. It is associated with Lady Ada Lovelace, a friend and patroness of Charles Babbage, inventor of the "analytical engine," the first machine that deserves to be called a computer in the modern sense. In a memoir she wrote in 1842 she became the first person known to go on record with a variant of "Computers only do what you tell them to do."*

* Lovelace put it like this: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."

From the perspective of this model of a computer program, let us call it "the Lovelace model," people listen with incredulity to the AI scientist who says that the human mind can be captured by a program. They can't understand how anyone could think that his or her mind works that way. But AI ideas are moving out and capturing the popular imagination because the AI community has generated a set of ideas that undermine the Lovelace model of what it means to be programmed.2 This chapter is about some of these ideas and how they are picked up by those who come into contact with them. People become sympathetic to the idea that machine intelligence has something to teach us about human intelligence, and even to the idea that the computer's psychology may be close to our own, when they are drawn into thinking that somehow Lovelace must be wrong, that in some sense computers must be able to do more than "what you tell them to do."

Beyond Lovelace

Jordan is a sculptor. Until recently he never had anything to do with computers, except for the one that dispenses cash at the bank. Jordan's relationship with computers changed when small chess-playing machines were introduced. Together with art, chess is his passion. He bought a chess machine and found it was good enough to give him an exciting game. Computers became more than undifferentiated "smart machines." Jordan wanted to know how this particular machine worked.

Our culture is rich in ways of thinking that answer Jordan's question in Lovelace style: for example, the chess machine plays by rules that have been programmed into it. To this Jordan's disappointed reaction was clear: computer chess has nothing to do with human chess. He was sure that he didn't play by consulting rules and he rejected the machine's intelligence despite its intelligent behavior: "It's not thinking, it's just rules."

Several months later Jordan changed his mind when he heard that his chess machine doesn't play by rules but by "heuristics," something that is ultimately a rule, but which feels like something else, more like what we call a "rule of thumb."

Since a chess program cannot check through and evaluate all the possibilities for each move, it relies on general guidelines, "heuristics," to pick out a few it will examine in depth. The idea that a program can have a degree of flexibility and imprecision gave Jordan the sense that it possesses something akin to what he calls "intuition." And with this, he could once again identify with the computer.
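
In present-day terms, the strategy might be sketched as a short program. The following minimal Python illustration is an assumption-laden sketch, not the workings of Jordan's machine: the move attributes, the scoring weights, and the toy "deep search" are all invented for the example.

```python
# Illustrative sketch of heuristic move selection: score every legal
# move with cheap rules of thumb, then spend expensive deep search
# only on the most promising few.
from dataclasses import dataclass
import random

@dataclass
class Move:
    name: str
    captures: bool = False
    centralizes: bool = False
    exposes_king: bool = False

def heuristic_score(move):
    """Rules of thumb: usually right, never guaranteed."""
    score = 0
    if move.captures:
        score += 10  # winning material is usually good
    if move.centralizes:
        score += 3   # controlling the center is usually good
    if move.exposes_king:
        score -= 8   # exposing one's king is usually bad
    return score

def deep_search(move):
    """Stand-in for an expensive look-ahead evaluation."""
    return heuristic_score(move) + random.random()

def choose_move(legal_moves, keep=3):
    """Shortlist the `keep` most promising moves; search only those."""
    shortlist = sorted(legal_moves, key=heuristic_score, reverse=True)[:keep]
    return max(shortlist, key=deep_search)

moves = [Move("Qxe5", captures=True), Move("Nf3", centralizes=True),
         Move("g4", exposes_king=True), Move("a3")]
print(choose_move(moves).name)  # most likely "Qxe5"
```

The rules of thumb decide only where the program spends its limited attention; they are guidelines rather than guarantees, which is what struck Jordan as something like "intuition."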

Jordan was able to see the chess computer as like him, as a "thinking thing," once he moved from seeing its program as rigidly serial, one fixed action after another, to thinking of it as multiple processes in interaction. More elaborated versions of this kind of representation allow AI scientists to see in the notion of "program" not what is most inhuman about machines but what computers and people must have in common. These "anti-Lovelace" representations figure in the writings of AI theoreticians. They have spread to their students and beyond. Recently, they have reached Hollywood. They are the representations that serve to break down people's resistance to seeing a continuity between computers and people.

Tron and a Society of Mind

I attend the Boston premiere of the Walt Disney movie Tron, advertised as "taking place inside of a computer." The film creates an "interior" computational landscape, and the hero, a hacker named Flynn, spends three-quarters of the film trapped there, prisoner within a system he has built. Tron shows the insides of a computer as a community of programs, each personified as an "actor" with a history, a personality, and a function within a complex political organization. We meet the inhabitants of this world at the time of a political crisis. A program called the Master Control Program, unaffectionately referred to throughout as the MCP, has assumed dictatorial powers. Brutal police programs are used to bring all other programs under control. There are skirmishes, battles, and finally, with Flynn's help, all-out warfare within the society.

When the film is over and the lights go on I see Marvin Minsky. Minsky has been charmed. "That was great," he says. "That's a whole lot better than bits! I am in the middle of writing a paper which proposes to outlaw the whole idea of bits. It's no way to think about what goes on inside of a computer."

When Minsky talks about "outlawing the whole idea of bits" he means changing the first image most people get of computers. It begins with the idea that the computer is made of electronic switches that are either on or off (these are the bits) and builds up to the Lovelace step-by-step model of a program. If you are trying to use the computer as a model of mind, the bit and Lovelace models are unsatisfying, just as thinking of "rules" was unsatisfying for Jordan trying to identify with his chess machine. I ask Minsky what he wants to put in place of the bits. He answers, with a look that makes it clear that the answer should be evident, "A society, of course, just like in Tron." "Society" is his mnemonic for multiple, simultaneously interacting programs within a complex computer system. In the Tron landscape Minsky has found an image, however fanciful, for what he has in mind.

The image of the computer as a "bit machine" and the image of the computer as a society of programs are not irreconcilable. Any computer has the bits that Minsky wants to outlaw, but the question here is at what level we choose to understand things. It has often been suggested that the relationship of bits to program is analogous to that of brain to mind. In the "brain" there are physical things: neurons, synapses, electrical impulses. In the "mind" there are images, concepts, ideas, language, and thought. Similarly, within the computer there are physical things (the bits), and then, at the level of "mind," there are programs.

In some sense, the activity of the neurons must be the physical manifestation of the same events we experience as thoughts and images, but usually the detail of how this happens is irrelevant for using one's mind: for feeling, thinking, communicating. On the other hand, in the case of neurological disorder, priorities change. Then it becomes important to find someone who knows how to probe, stimulate, and record the activity of the physical brain.

In a computer, different levels of understanding are appropriate at different times. The repairman focuses his attention on bits, but if you want to think of mind as computer, you are not likely to be drawn to the level of on/off switches or Lovelace models. You are more likely to be swept into the world of Tron.

Tron does more than assert the primacy of software over hardware, of program over electrical circuit. Its presentation of a computer system as an unruly society is in dramatic contrast with the linear model of programming. Even if the members of the Tron society had been produced by Lovelacian programming, their interaction leads to the playing out of an unpredictable drama.3

The idea of a computer system as a "society" of competing programs is one of several key ideas from the AI community that challenge the image of the computer as following step-by-step instructions in a literal-minded way and make it easier for people to think of mind as machine. Another one of these is that the computer is capable of "learning." In that case, you don't have to tell the computer everything it needs to know; you have to tell it how to learn. And yet another, which combines these two, is an idea that I will refer to as "emergence": you don't have to tell the computer everything it needs to know; you have to arrange for it to obtain, by being told or by learning, the elements out of which something not programmed can "emerge."*

In the 1950s, research began on the theory of random networks, inspired by the impression that neurons in the brain are randomly connected and by the hope that intelligence might emerge from this disorder as a phenomenon of critical mass. And also in the 1950s, Oliver Selfridge, then a brilliant young disciple of Norbert Wiener, proposed a machine, "Pandemonium," whose name captures his idea of what it might take for intelligent order to emerge.4 Unfortunately its name also captured what turned out to be the problem with his idea: chaos.

Pandemonium and random networks did not give rise to sufficient order to be useful. But emergence remained a major theme in AI research, although it was increasingly clear that emergent intelligence would come only with more modulated attempts to balance between control and chaos. Some of the best-known early work in AI, such as Newell and Simon's highly controlled and centralized General Problem Solver, led critics to fear that the influence of the discipline would be its pressure toward mechanical models of human psychology.5 But another thread in AI has developed a way of thinking that Minsky saw reflected in Tron. This is an image of a fragmented mind in which intelligence emerges from the interactions of conflicting, competing parts.

Now I take up this thread, first very briefly as it has developed within the technical discipline, and then, more central to my concern, as something that has moved out to influence how people on the periphery of AI think about their minds. I believe that tracing the movement of this idea is important. Jordan is typical of many people I interviewed: the idea of emergence is a key element that breaks down resistance to seeing mind as machine.†

* "Emergence" sounds like magic, but nature is rich in examples that serve as metaphors for it. Take the spherical shape of a soap bubble: when you blow the bubble you do nothing, tell it nothing to make the bubble round; its spherical shape emerges from the interaction of its molecules, none of which is individually programmed to be part of a round bubble. Social life too is rich in examples of emergence: conversations produce more than the individual participants could have produced or anticipated.

† This chapter looks at how a few powerful ideas elaborated within the AI world have captured the imagination of people who are not in AI but close enough to it and to computers to be responsive to its influence. I found one such group in Harvard and MIT students in fields other than computer science; another in the community of scientists who meet AI but are not a part of its professional structure; another in a group one might call "Gödel, Escher, Bach enthusiasts," people who met AI ideas in that work and felt that something about their way of seeing the world had changed. I also found it useful to follow students through their first programming courses and their first brush with AI. In 1979-80 I looked at three different introductory classes, one at Harvard and two at MIT. I selected twenty students from each university for more intensive study. Most were interviewed several times as their course progressed.

Interviews with "newcomers" proved to be a rich source of material on the computer as an object that provokes philosophical reflection on how minds might or might not be like machines. This study made my interests known among the MIT undergraduate community and led to about twenty-five other contacts with Harvard and MIT students.

I met Jean while working on a study that is not discussed in this book, a study of people who meet computers in the world of work. She is here because her conflict focuses a problem: how we are pulled, in spite of ourselves, to blur the line between people and machines.

The Idea of Emergence

One of the most famous and influential "learning" programs was written by Arthur Samuel in the late 1950s.6 This program played checkers according to built-in rules. In that sense, it did what it was programmed to do. But it was also programmed to modify its rules based on its "experience." It played many games, against many opponents, and it did get better, finally achieving the status of a "world class" checkers player. But the dramatic moment in this program's life was not the day it beat a champion, but the day it beat its creator. The program became good enough to beat Arthur Samuel at his own game.

Among those struck by the drama of that moment was Norbert Wiener. For the mathematician usually regarded as the founder of cybernetics, the machine triumphing over its creator symbolized a new era. In God and Golem, Inc. he suggested that the implications bordered on the theological: "Can God play a significant game with his own creature? Can any creator, even a limited one, play a significant game with his own creature?"7

In the checkers program, Samuel took a decisive technical step beyond the Lovelace model of programming. To show how, consider the ways one might determine whether one arrangement of pieces on a checkerboard is more "favorable" than another. One "rule of thumb" is to count pieces. In fact, children often decide whether they are "winning" on this basis alone. Another method would be to look at how many of the enemy pieces are "under attack." A program that acted simply by following either of these two rules would play a very weak game. To make a stronger program you could go in either of two directions.



The "Lovelace" direction would consist of adding systematic precise rules for measuring "being ahead," rules that would be increasingly subtle and complex. The other way, "heuristic programming," uses a collection of disparate rules of thumb. From this perspective, you take the two "weak" rules (counting pieces and counting threats) together, recognizing that each has a place in the game even though insufficient by itself. Then you might add other rules to make a collection of "agents" within the program, each with a particular point of view, each limited, but valid in its own way.

This was Samuel's strategy. Once he had chosen it, the problem for him, as it would be for the manager of any organization whose individual members are confined to their own specialties, was how to create a structure within which the agents could interact. Samuel's solution was to have them cast weighted votes. What was most innovative about his program was that the program itself modified the weights depending on the track record of the different agents and of the program as a whole. Samuel's program worked on a kind of "society" model. The choice of which move to make was based on a "democratic" process in which a number of agents, each with a partial, limited view of the situation, could vote for or against a particular move, and the majority would win. For this program, "learning" consisted of adjusting the number of votes that could be cast by any particular agent.
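
A minimal sketch of such a voting society, in Python. The two agents, the move representation, and the weight-update rule here are invented for illustration; Samuel's actual program was far more elaborate.

```python
# Illustrative sketch of a Samuel-style voting society. Each agent
# scores a candidate move from its own narrow point of view; a weighted
# vote picks the move, and "learning" adjusts the weights according to
# each agent's track record.

def piece_count_agent(move):
    return move["material_gain"]         # "count pieces"

def threat_agent(move):
    return -move["pieces_under_attack"]  # "count threats against us"

agents = [piece_count_agent, threat_agent]
weights = [1.0, 1.0]  # every agent starts with an equal vote

def choose(moves):
    """The move with the highest weighted vote wins."""
    def total(move):
        return sum(w * agent(move) for w, agent in zip(weights, agents))
    return max(moves, key=total)

def learn(move, outcome_was_good):
    """Strengthen agents whose advice worked out; weaken the others."""
    for i, agent in enumerate(agents):
        agent_voted_for_it = agent(move) > 0
        if agent_voted_for_it == outcome_was_good:
            weights[i] *= 1.1  # this agent's vote now counts for more
        else:
            weights[i] *= 0.9  # this agent's vote now counts for less

moves = [{"material_gain": 2, "pieces_under_attack": 3},
         {"material_gain": 1, "pieces_under_attack": 0}]
best = choose(moves)  # the weighted vote favors the second move
learn(best, outcome_was_good=True)
print(weights)        # the piece-count agent's vote has grown
```

No single agent is right by itself; what the program "learns" lives entirely in the weights, and the weights are the program's own to change.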

In God and Golem, Inc., Norbert Wiener discussed whether the Samuel program "merely did what Samuel programmed it to do." It did, but in this case the programming had, in a certain sense, instructed it to take off on its own to make its own decisions. Lady Lovelace's dictum begins to have a different feel when obedience means autonomy. For Wiener the working of the Samuel program started to feel like overstepping an ancient taboo: the taboo of speaking of "living beings and machines in the same breath."8

The idea of the Samuel-style voting procedures (in more technical language, called "perceptrons") generated excitement as a model for getting machines to make intelligent decisions of all sorts, not just in checkers. But it soon became clear that it was exceptional for a decision procedure to be reducible to a perceptron. Not all learning could be made as simple. What worked for checkers would work for little else.9

In the classical perceptron that Samuel used, intelligence emerged by simple addition of "votes." What didn't work out mathematically for a large number of cases was the idea of "simple addition." But what remained a strong metaphor within the AI world was the idea of a society of limited agents whose intelligence is emergent from their interaction, the idea that a computer system as a whole will be significantly, qualitatively different than the sum of its parts. We use this idea when we think about living things and about creation: for example, the DNA that emerged from the inert molecules of the nonliving earth. AI has seized it to think of how intelligence might "grow" in a computer program in a way that might model the way it grows in a child.

"Growing" Intelligence

If we take a simple problem like trying to decide whether there are more black or white marbles on a tabletop, something that very young children can do, it is easy to write a Lovelace-style program that could solve it. Have one part of the program, one subprocedure, count the black marbles, have another subprocedure count the white marbles. Have a third subprocedure compare the numbers and report the color with the greatest number back to the top level of the program.
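
Spelled out in code, the Lovelace-style solution is only a few lines. This Python sketch follows the description above; the sample tabletop is invented for the example.

```python
# A Lovelace-style "which is more" program: explicit subprocedures
# that count, compare, and report back to the top level.

def count_black(marbles):
    return sum(1 for m in marbles if m == "black")

def count_white(marbles):
    return sum(1 for m in marbles if m == "white")

def which_is_more(marbles):
    """Top level: compare the two counts and report the winner."""
    black, white = count_black(marbles), count_white(marbles)
    if black > white:
        return "more black"
    if white > black:
        return "more white"
    return "same"

print(which_is_more(["black", "white", "black"]))  # -> more black
```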

This Lovelace program does the job. But it presents a problem if you are trying to model how the child does it. In order to decide whether there are more black or more white marbles on the tabletop, the Lovelace system needs to know how to count, how to compare numbers, and, most complicated of all, how to integrate this counting and comparing ability into the solution of its problem. But children can say "which is more" long before they are able to do these things. If you are trying to model a growing mind, it would be more satisfying to imagine how "which is more" could be built up out of the kinds of things that small children can do. In other words, it would be more satisfying to model this piece of intelligence as "emergent."

You can imagine a small child telling the difference between a black marble and a white one, grabbing one of each, and putting them away. In order to tell which is more, all the child needs to do is repeat this action until there are either no marbles left or marbles of only one color. In the latter case, the child can then call out the color of the remaining marbles, and that color will be "more." This description reflects, better than the Lovelace "counting program," the kinds of matching ability young children actually have. Thus, a computer program that follows it will have greater psychological plausibility. Some AI theorists have suggested ways of writing this kind of program. For example, Allen Newell and Herbert Simon, who want to construct theories of human thought in the form of programs, have put forth the idea of a "production system," a style of programming that moves away from Lovelace to get closer in spirit to the child solving the "which is more" problem by grabbing the marbles off the table.

A production system is a little society inhabited by agents each of which can recognize a specific condition and, when it arises, carry out a specific job. A first agent might be able to recognize black and white pairs (its "triggering condition") and put them aside, "grab them off the table." Another agent does nothing unless it sees a tabletop with only white marbles, in which case it says "more white." The third does nothing unless it sees a tabletop with only black marbles, in which case it says "more black." In traditional Lovelace programming, the programmer fixes in advance the order in which instructions will be executed. In a production system, the order in which the agents do their jobs is not determined by the program but by the environment in which the system finds itself.

The Lovelace program had a subprocedure that knew how to count, a relatively advanced ability. None of the three agents here (Newell and Simon call such agents "productions") has anything approaching this degree of smartness. All they can do is follow simple rules about grabbing pairs of marbles or making pronouncements about "more black" or "more white." In the Lovelace program, numerical ability is programmed in. In the production system, it emerges from the interaction of agents who don't have any. In the language of the students I interviewed, intelligence comes from having a number of "dumb agents" that can produce smart behavior when they work together.
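
The three productions can be sketched as condition-action pairs. In this minimal Python illustration (the tabletop representation and the random firing order are assumptions of the sketch), no agent can count, and the answer emerges anyway.

```python
# Production-system sketch: three "dumb" agents, each a condition-action
# pair. The order of firing is set by the state of the tabletop, not by
# the programmer.
import random

def sees_pair(table):
    return "black" in table and "white" in table

def grab_pair(table):
    table.remove("black")  # put one marble of each color aside
    table.remove("white")

def sees_only_white(table):
    return len(table) > 0 and all(m == "white" for m in table)

def say_more_white(table):
    print("more white")
    table.clear()

def sees_only_black(table):
    return len(table) > 0 and all(m == "black" for m in table)

def say_more_black(table):
    print("more black")
    table.clear()

productions = [(sees_pair, grab_pair),
               (sees_only_white, say_more_white),
               (sees_only_black, say_more_black)]

table = ["black", "white", "black", "black", "white"]
while table:
    triggered = [action for condition, action in productions
                 if condition(table)]
    random.choice(triggered)(table)  # any triggered agent may fire next
# prints "more black"; with equal numbers, every agent stays silent
```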

Newell and Simon's production system and Samuel's checkers program can both be thought of as societies of agents that are restricted in a particular way. In Samuel's case, agents act only by voting; in the case of the productions, each agent has to wait its turn to act. Marvin Minsky and Seymour Papert have developed a society theory of emergent intelligence in which they lift these restrictions. They see agents with a wide diversity of roles, including the administration and the censoring of other agents.10

Papert uses this type of model to explain how children develop conservation of liquids, the knowledge that the quantity of a liquid doesn't change when you pour it from a short, stout glass into a tall, thin one. Young children consistently judge how much water there is by the water level, and so the higher level in the tall glass makes that water seem like "more."

Papert begins by describing three agents, each of which judges quantities in a different simpleminded way. The first judges quantity by height; anything that goes higher is "more." A second agent judges quantity by width, horizontal extent. And a third agent says that quantities are the same if they once were the same. This third agent, which Papert calls a "history" agent, "seems to sound like a conservationist child, but this is an illusion."11 The history agent knows only how to say that things are the same as they used to be, even things that have in fact changed. When the water is poured from the stout to the thin glass in front of a preconservationist child, "each of the three agents makes its own 'decision' and clamors for it to be adopted." For the preconservationist child, the height agent's voice speaks the loudest, but this changes as the child moves on to the next stage.

Papert believes that the child moves on to the next stage when the height and width agents get into a new relationship: they start to neutralize each other when they give contradictory opinions. This happens because a new agent called a "geometry" agent acts as a referee. When the height and width agents both say either more or less, the geometry agent passes on their common message to the "top level" of the information system that is this child. But if they disagree, the authority of the geometry agent is undermined, in which case the history agent's voice is loudest and the child "has conservation." The geometry agent doesn't know what the programs under it are called or what they do or why. It knows only that before it can make a move it has to hear from them, pass on their message if both are the same, go back to sleep if they are not.
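
The wiring Papert describes can be rendered in a few lines. In this illustrative Python sketch (the glass dimensions and the return conventions are invented for the example), conservation is nowhere programmed in; it falls out of the referee's silence.

```python
# Illustrative sketch of Papert's conservation agents. Each judges in
# its own simpleminded way; the "geometry" referee passes on the height
# and width agents' message only when they agree.

def height_agent(before, after):
    return "more" if after["height"] > before["height"] else "less"

def width_agent(before, after):
    return "more" if after["width"] > before["width"] else "less"

def history_agent(before, after):
    return "same"  # things are the same as they used to be

def geometry_agent(before, after):
    h = height_agent(before, after)
    w = width_agent(before, after)
    if h == w:
        return h     # pass on the common message to the top level
    return None      # contradictory opinions: go back to sleep

def child_judges(before, after):
    """When the referee is silent, the history agent's voice is loudest."""
    verdict = geometry_agent(before, after)
    return verdict if verdict is not None else history_agent(before, after)

stout = {"height": 5, "width": 10}
tall = {"height": 12, "width": 4}
print(child_judges(stout, tall))  # -> same: the child "has conservation"
```

For the preconservationist child one would instead let the height agent's verdict through whenever the two geometric agents disagree.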

Papert's account makes the appearance of conservation in the child's mind "emergent" from the coming into being of the geometry agent. But this agent was not introduced "in order" to produce conservation. Papert believes that in its most primitive form it was there long before as an agent capable of recognizing disagreement when it saw it. The geometry agent came into being for reasons quite independent of the conservation problem: in Papert's model, conservation is a product, but not the planned product, of materials that happen to be there. This makes it attractive for thinking about building machine intelligence insofar as it suggests you can get intelligence without knowing exactly how to build it.

Some dismiss this as circular. When it requires that something new get done, it posits a new agent to do it. But members of the artificial intelligence culture, and others who have come to accept some of its ground rules, insist that another way of saying circular is to say recursive, self-referential. This computational aesthetic almost always sees self-reference as the solution rather than the problem. For Papert, circularity is a source of power: "... like recursive programs ... the theory derives much of its power from a constructive use of 'circular logic.'"12

This kind of theory offers a bridge between the way people think about themselves and a way of thinking about machines. The "emergent-intelligence" idea feels psychologically plausible, and the theory's reliance on self-reference gives it an appealing intellectual tenor. Beyond this, the idea that people as well as programs have many agents within them can be related to our experience of inner conflict: the feeling of being of two, three, many minds.

But there is a certain unreality to this bridge-building between people and programs, since no society model close to the elaborateness of what Papert is talking about can be run on present-day computers, which move through a problem one step at a time in strict sequence. They cannot "hear two voices" at once. In order for agents to fight it out in a computational society, a "multiprocessor" machine is needed, a computer that can execute many functions simultaneously. Today, parallel-processing computers with enough capacity are only a dream.13 Many people speak of the day when there will be "real multiprocessor computers" in a way reminiscent of how others speak of "when the revolution comes" or "after the millennium." People who imagine a multiprocessing computer feel free to develop their own "society theories" and feel freer to describe its mind in terms of how they see their own.


Thinking of Yourself as a Machine

Mark is an MIT junior and a computer-science major. He has always been interested in logic and systems. When asked to think back to his childhood games, he does not talk about playing with other children. He recalls how he "sorted two thousand Lego pieces by color, size, and shape." He liked the feel of sorting the pieces, and he liked making the Legos into complex structures, "things that you would never expect to be able to make from Legos." Although Mark spends most of his time working with computers, he sees himself as a professional in training rather than as a computer hacker. He cares about his schoolwork, gets top grades, and "tries to keep things other than computers in my life." One of these is being "Dungeon Master" for one of MIT's many ongoing games of Dungeons and Dragons. His game has been going for a couple of years, and he is pretty sure it is the best game in the Boston area.

The people who play my dungeon use it to express their personalities, and also to play out sides of their personalities that they hide in their everyday life. That's one of the things that is so fantastic about the game. But you have to make a great game for that to really happen. You have to hold them in your world, but you have to give them a lot of space to be themselves. It is a very complicated thing, an art.

Mark spends many hours a week preparing for the Sunday-afternoon meeting of his dungeon: "at least five hours of preparation for each hour of play. And sometimes we play for five hours. It's a responsibility. I take it very seriously. It's one of the most creative things I do."

Mark has strong relationships with the members of his dungeon. He is more able to involve himself with them than with people he meets outside of the structure of the game. The dungeon offers him a safe world built on a complex set of rules. Within it he can use his artistry to build social situations much as he built his Lego constructions. Dungeons and Dragons allows Mark to be with people in a way that is as comfortable for him as dealing with things. It is his solution to a familiar dilemma: he needs and yet fears personal intimacy. He has found a way of being a loner without being alone. One might say that for him Dungeons and Dragons becomes a social world structured like a machine.

There is another and more dramatic way in which Mark shows his preference for machinelike systems. He has elaborated a theory of psychology that leads him to see himself and everyone else as a machine. The theory, which he claims to be his own, is made up largely of ideas current at the Artificial Intelligence Laboratory at MIT. We see in Mark how these ideas are appropriated by someone who is not part of the AI world, but who is close enough to be a first link on the chain of these ideas moving out.14

Over the past year, he has built up a detailed picture of exactly how the Mark-machine works through a self-consciously introspective method. "Like Freud," he tells me, "I don't follow other people's theories. I just think about myself and make up my own, the way Freud came up with his theories and then looked around him and fit things in." When Freud "looked around" he "fit in" ideas from his scientific culture, from physics and biology. Mark's is a computer culture. He uses computer systems to think about all complex systems, especially to consider the complexity of his own mind.

Mark begins with the idea that the brain is a computer. "This does not mean that the structure of the brain resembles the architecture of any present-day computer system, but the brain can be modeled using components emulated by modern digital parts. At no time does any part of the brain function in a way that cannot be emulated in digital or analog logic." In Tron, the programs are complicated, psychological, and "motivated" beings. They do a lot of running around, planning, plotting, and fighting. In Mark's model the computational actors in the brain are simple. Each is a little computer with an even smaller program, and each "knows only one thought." Mark takes the "one thought" limitation seriously. "One processor might retain the visual impression of a computer cabinet. Another might retain the auditory memory of a keyboard being typed at. A third processor might maintain the visual image of a penguin."

In Mark's model, all of the processors have the same status: they are "observers" at a long trough. Everything that appears in the trough can be seen simultaneously by all the observers at every point along it. The trough with its observers is a multiprocessing computer system. Using computer jargon, Mark describes the trough as a "bus," a trunk line that puts actors in contact with each other. Their communication options are very limited. Each looks at the trough and when something appears that relates to what "he" knows, "all he can do is put his knowledge in."

The trough plus observers make up the central processor of the brain. The consciousness of the brain is only a reflection of what is in the trough at a given time. Consciousness is a passive observer looking at the trough. It does not even see everything in the trough, but only those things which are very strong, either because they were dumped in the trough by more than one observer or they were in the trough for a very long time.

Mark goes on to elaborate this notion of consciousness as a passive observer:

The processors, the observers, correspond to neurons in the brain. If a researcher on the edge of the brain could sample a number of the neurons and decode what impression was active in the brain at a given time, that researcher could be "seeing" what the brain was thinking. That researcher would be performing the function of the consciousness. Helpless to alter the chain reactions in the brain, this consciousness is rather carried along in the thought process, sensing whatever is the strongest impression at a given time. The consciousness is a helpless observer in the process we call thought. It is a byproduct of the local events between neurons.
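
Mark's trough can be rendered as a short sketch. In this Python illustration, the processors' "one thoughts" and the stimuli are invented stand-ins, and the whole thing is Mark's metaphor rather than any real architecture; the point is only that the consciousness function decides nothing.

```python
# Illustrative sketch of Mark's trough-and-observers model. Each
# processor "knows one thought" and puts it into the trough when a
# related stimulus appears; consciousness passively reads off whatever
# impression is strongest.
from collections import Counter

processors = {
    "cabinet-processor": "visual impression of a computer cabinet",
    "keyboard-processor": "sound of a keyboard being typed at",
    "penguin-processor": "visual image of a penguin",
}

def step(trough, stimulus):
    """Every observer watches; related knowledge gets dumped in."""
    for thought in processors.values():
        if stimulus in thought:
            trough[thought] += 1  # one more voice behind this impression

def consciousness(trough):
    """A helpless bystander: it sees only the strongest signal."""
    impression, _strength = trough.most_common(1)[0]
    return impression

trough = Counter()
step(trough, "keyboard")
step(trough, "keyboard")
step(trough, "penguin")
print(consciousness(trough))  # -> sound of a keyboard being typed at
```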

Mark does not read philosophy. But his computer model of mind forces him to grapple with some of the oldest philosophical questions: the idea of free will and the question of a "self." Mark's theory has room for neither: the trough and the observers are a deterministic system. What we experience as consciousness is only a "helpless bystander" who gets the strongest signals filtered up to him. "Actions as well as thought," says Mark, "are determined by the cacophony of the processor voices."

There is no free will in Mark's system. The individual's feeling of conscious decision-making is an illusion, or rather an imposture: one of the processors, just as dumb as the others, has arrogated to itself the "seeming" role of consciousness. It has no power of decision, "it could just be a printer, attached to the computer. The strongest messages from the agents would just be printed out."

Consciousness is an epiphenomenon. Mark says that even if there were agents that could act with "free will," it would still be "them" and not "him" who would have it. In Mark's way of looking at things, there is no "me."

You think you're making a decision, but are you really? For instance, when you have a creative idea, what happens? All of a sudden, you think of something. Right? Wrong. You didn't think of it. It just filtered through: the consciousness processor just sits there and watches this cacophony of other processors yelling onto the bus and skims off the top what he thinks is the most important thing, one thing at a time. A creative idea just means that one of the processors made a link between two unassociated things because he thought they were related.

In the course of my interview with Mark, creativity, individual responsibility, free will, and emotion were all being dissolved, simply grist for the little processors' mills. I asked Mark if he thought that "mind" is anything more than the feeling of having one. His answer was clear: "You have to stop talking about your mind as though it were thinking. It's not. It is just doing."

Mark takes the idea of agents and runs with it as far as it can take him: to the demolition of the idea of free will, to the demolition of the idea that he has a "responsible" self. And when I ask him about emotions, he says, "OK, let's model it on a piece of paper ..."*

* He makes a sketch that shows emotion corresponding to an overall change in the state of the processor system. He explains that each of the little processors might be able to run in five different states that could correspond to different thresholds for attracting their interest. The "neutral" state might desensitize the processors involved in working out logical connections. They would still do their thing, but they do it far less insistently, while other processors, for example those concerned with self-defense, might be sensitized. Being "angry" is not the job of particular agents. It is something that affects the functioning of all of them.

Even though Mark makes it clear that the world must wait for tomorrow's multiprocessing technology to achieve the working intelligence that he has modeled, he turns his version of the "society theory" into something that uses today's computer parts. His bus and simple processors are doing things that Mark thinks he knows how to make computers do (like recognize an "image of a penguin"). And when something comes up that he doesn't know how to make a computer do, he can postpone the problem by claiming that intelligence will emerge through the interaction of the processors.

Mark finesses problems that would require making his model more complicated or more specific. He talks about intelligence emerging from "two or three or four hundred stupid agents." But considering the highly specific skill that he gives to each of them ("one knows how to recognize a picture of a computer, another knows how to recognize a picture of a computer cabinet"), three or four hundred seem hopelessly few, even to allow the baby to recognize the objects seen from the crib, the playpen, and the high chair. Mark talks about his multiprocessor "model," but what he really has is a multiprocessor metaphor. Despite its generality and vagueness he finds his metaphor powerful. First, he can identify with it directly. He can put himself in the place of the agents. They look, they shout, they struggle, they assert their piece of smartness in the crowd. And it is powerful because Mark's daily experience with computers makes his theory seem real to him. Without the computer Mark's theory would feel to him to be nothing more than "wishy-washy hand-waving," the slur that engineers use to deride the psychological models of the precomputational past. He believes that in his model the problem of intelligence has been reduced to a technical one. Since he sees himself as a technical person, this makes him happy. It gives him a sense of being very powerful indeed. It means the appropriation of psychology by the engineers. What sweet revenge if the "ugly ones" turned out to be the gurus of the mind.

Mark's way of talking is exceptional only because his ideas are elaborated and he has such utter confidence in them. But the idea of thinking of the self as a set of computer programs is widespread among students I interviewed at Harvard and MIT who were familiar with large computer systems. Like Mark, they find that the complexity of these systems offers a way to think about their minds.

Elliot, an MIT biology major, calls his agents the "gallery of stupids" who "input into clusters of special-interest groups inside my brain." And then these special-interest groups fight it out and forward the result of their debates to his conscious self.

Should I study, should I go to sleep? Most of the time, it comes to a debate. I am aware of the conflict, but the debate gets played out by the agents. And it continues while I sleep. Sometimes I decide to go to sleep, and I awake with no other thought but to study. The agents have closed their debate. And the signals that are most powerfully coming up to consciousness are the study signals.

Ned, an MIT premedical student, also thinks of his mind as a multiprocessor:

Some of the agents are a little smarter than the others. The way an op-amp is smarter than a transistor, but that still makes them a long way off from having consciousness or free will. Consciousness and free will are illusions created by having many of the smarter processors and many of the dumber processors linked together by billions and billions of neural connections.

Children debating the metaphysical status of computer toys led me to say that computers bring philosophy into everyday life. What made the children's discussions philosophical was not their conclusions, but the structure of their argument, the effort to resolve intellectual tensions. But Elliot and Ned use the computer presence to bypass the mind/body problem: for them the reduction of mind to matter is unproblematic, as is the assertion that free will is an illusion. There is no tension, no need to examine their beliefs. We could say that Elliot and Ned are discussing free will and the mind/body problem, but their way of doing so suggests a qualification: sometimes computers bring philosophy into everyday life only to take it away again.

But if, for them, the mind/body problem does not exist, the reliance of their theories on the idea of multiple agents leaves them with a different problem: does the self exist? Has it been dissolved in the fragmentation of quarreling processors?

Challenging the "I"

A model of mind as multiprocessor leaves you with a "decentralized" self: there is no "me," no "I," no unitary actor. Mark expressed this when he admonished me not to talk about my mind as though "I" was thinking. "All that there is is a lot of processors, not thinking, but each doing its little thing." Elliot put the same thought jokingly: "Nobody is home, just a lot of little bodies."

But theories that deny and "decenter" the "I" challenge most people's day-to-day experience of having one. The assumption that there is an "I" is solidly built into ordinary language, so much so that it is almost impossible to express "anti-ego" theory in language. From the moment that we begin to write or speak, we are trapped in formulations such as "I want," "I do," "I think." Even as we articulate a "decentered" theory there is a pull back away from it, sometimes consciously, sometimes not.

Of course, "mind as multiprocessor" is not the first challenge to the "I" that has encountered the resistance of everyday "commonsense" psychology and everyday speech. The idea of the Freudian unconscious is also incompatible with our belief that we "know what we want." We don't, says Freud. "Our" wishes are hidden from "us" by a complex process of censorship and repression. We are driven by forces quite outside our knowledge or control.

Freud divided the "I," but the theorists who followed him moved toward restoring it. They did so by focusing on the ego turned outward toward reality. They began to see it as capable of integrating the psyche. To them, the ego seemed almost a psychic hero as it battled off id and superego at the same time that it tried to cope with the demands of the everyday. Anna Freud wrote of its powerful artillery, the mechanisms of defense, which helped it in its struggles, and Heinz Hartmann argued that the ego had an aspect that was not tied up in the individual's neurotic conflicts: it had a "conflict-free zone." This "unhampered" aspect of the ego was free to act and choose, independent of constraints. It almost seemed the seat for a reborn notion of the will, the locus of moral responsibility. Intellectual historian Russell Jacoby, writing of psychoanalytic ego psychology's reborn, autonomous "I," described it as the "forgetting of psychoanalysis."15

A theory that had called the "self" into question now had a reassuring notion of the ego as a stable "objective" platform from which to view the world. A decentered theory had been recentered. A subversive theory had been normalized. Ego psychology is the version of the unconscious most acceptable to the conscious.

As a theory aspires to move from the world of high science to that of the popular culture, there is a natural pressure to cast it into more acceptable forms. Theories that call the self into question live in a natural state of tension. Although much of their power comes from the fact that they offer concrete images through which to express our sense of being constrained or driven by forces beyond our control, they are also under constant pressure from that other side of our experience: our sense of ourselves as selves. This was true of psychoanalysis. It is true of multiprocessor models of mind.

The Reconstituted Center

Mark begins with the flat assertion that there is no self, no conscious actor. "Consciousness is just a feeling of thinking." But even as he makes his case, the contradictions slip in. He claims that consciousness "just sits there and watches this cacophony of other processors yelling onto the bus and skims off the top what he thinks is the most important thing, one thing at a time." But who is "he"? Could this skimming really be done by a processor that is as "passive as a printer"?

Mark's description of how he developed his theory, his description of his introspective method, illustrates the contradiction in his position.

My only way to know all this, my only way to tell, is by closing my eyes and trying to figure out what I'm doing at any given time. My theory was not developed as a "neat hack" to explain something, but rather "I thought this was going on" based on the small amount of evidence that filtered through to my consciousness of what's going on in my mind.

Here there is the reappearance of a "subject," something either smart enough to make up the theory itself or able to send ideas that come up "back down the line" for further processing. But according to Mark's theory as he states it rigorously, neither is possible: the consciousness processor is dumb ("just a printer") and, at most, has the power to "send back a bit or two." Mark brings in something like a self despite himself. He would like to accept the pure decentered theory, but this is not easy for anyone. We are pulled back to common-sense ways of thinking that are familiar since childhood and supported by language, ways of thinking that make us feel that there is a self, that the "I" is in control.


Throughout history, philosophers have challenged what seems obvious to common sense. Some have argued that we don't really know that objects exist or that other people have minds, or that there really are relations of cause and effect. The eighteenth-century Scottish philosopher David Hume held to this last view: events might follow one another, but we never have the right to say that one caused the other. While this might be easy enough in the philosophy seminar, in ordinary life we are obliged to act and talk as though we had the right to make such assumptions. And thus we understand Hume's cri de coeur: "Skepticism can be thought but not lived."

The decentered theory of mind, whether in psychoanalytic or computational terms, is as hard to live with as Hume's skepticism, and for similar reasons. The cornerstone of ordinary life is that there is an "I" that causes things to happen. A philosopher like Hume takes away the causality. Decentered psychologies take away the "I." Thus one sympathizes when Mark slips and reintroduces the "I." Others who have been caught up in multiprocessor models of mind take more deliberate actions to make them more "livable."

Some "recenter" their multiprocessor models by talking about one of the agents in the society of programs, usually one they call consciousness, as "not quite so dumb as the rest." When this happens, the other agents acquire a new role. Certain kinds of intelligence can still emerge from their interaction, but this intelligence is at the service of the smarter agent.

"I belie\'e thal the mind isjust a computer," sa),s Susan, "but oneof the programs that runs on it gets to see the stuff that tilters upfrom the dumber agems all of the limc." This agcm might swnOllt "as dumb as all the rest," but, according to Susan, it hlls cenain"panern-recognition" properties and "it begins to gro\.,. in its abilityto manipulate data." The dumb agents get to "talk to each other,but the consciousness agent has contact with the whole, and withthe outside .....odd. with all of the sensory systems." As a mediator,it develops neh' abilities. Freud's ego stood bet.....een id, superego,and impinging realities. Susan's theory replaces mind wilh a socielyof dumb agents, but her "pallern-recognizing" agent is an egoreborn.

According to Andy, one of the things that the "consciousness" agent knows how to do is to "flip off" its connection to the dumb processors. On the computer system he uses, this "interrupt" function is known as Control G. In Andy's model of how his mind works, "Control G-ing" allows whatever bits of information happen to be stored in the consciousness agent to undergo reprocessing, "like putting them in a new stew: they get to mix around, interact more than they usually get to do when they just filter up to consciousness." Compared to the trillions of bits of information in the system at any one time, the bits that are made available for reprocessing are rare. But there are enough of them to allow a sense of free will to emerge. "When the system shuts off you do whatever comes out of the new mix. And it is heavily influenced by messages from your body, whether you are sexually turned on, whether you are tired or feeling exhilarated; it is influenced by all of these chemical things, whereas the dumb processors are usually isolated from all of that." In Andy's picture, the "I" emerges from egoless cognition touched by the sensual and animal.

Some of the students I interviewed think of their minds as computer systems, but they reconstitute an "I" in a more direct way than Andy. It is not just a question of a decentered system being periodically interrupted to allow for a moment of conscious action and free will. They split the system, putting a multiprocessor in charge of all cognitive functions, but reserving the "emotional" things for another power. "I see my mind as two systems that have worked out a process of peaceful or maybe not very peaceful coexistence," says Arlene, a computer-science major who is studying acting in her spare time.

Thel'e i5 a computational part, that's the p<lrt with the agentsthal somehow through their intcrllction have real intelligencecome through. This pan does my reasoning, my logic, my mathhome.....ork. my abilit}· to learn hislOry. But then I have anothersystem. It is built up from instincts. Evolution. My animal part.II is i",'olved with love, feelings, relating to people. It can'l con­trolthe computer part. But it live5 with it. Sometimes fights withit. And this is the part that give5 me the feeling of being me.

Amy is taking her first programming course. Three months into the course, she "discovered it.... I realized how good I am at programming" and she became taken with "all the possible analogies between my mind and a computer. I heard about Minsky saying 'the mind is a meat machine,' and I thought that must be right." Now six months into the course, she is modifying her ideas about what kind of machine the mind might be. She thinks that there must be a special agent that intervenes in the messages "that come up from the brain's central processing unit." What makes the agent special is that it has its own sources of information. These sources are not purely rational: "You could think of it as God or you could call it evolution; I think that one of the agents has been influenced by evolution." She says the special agent acts as a "buffer" for final decision-making. The messages from the CPU rest there for a moment.

In general I see my mind in terms of continual processing by internal programs. But the weight given to the output of these programs can be influenced by emotion. And then when they come up to consciousness, they come up to a level where there is this other kind of agent, the special agent. The one in touch with my history and with evolution.

Theories that challenge the "I" need to be cast in more acceptable forms. We saw how in ego psychology psychoanalysis constructed a version of the unconscious acceptable to the conscious, one that brought psychoanalysis back into line with more traditional and reassuring models of mind. A similar process of normalization is at work in the case of machine models of mind. The students I interviewed used ideas about multiprocessing and emergence to describe how their minds worked. In their hands, the technical ideas that were developed within the artificial intelligence community were cast in nontechnical forms, such as the "agents at the trough." But then, there was a tension because these images of decentered minds do not sit easily with the sense of being a person and acting in the world.

The discrepancy between theory and the experience of the self shows itself in different ways. Mark does not recognize it at all, and it is visible only in the form of inconsistencies in his descriptions of his theory and himself. In the cases of the other students, Susan, Andy, Arlene, and Amy, the strain of fully accepting a computational model of the mind that dissolved the "I" was handled more explicitly. It led them to substantial modifications of the theory. They destroy the purity of the multiprocessing model: they are relying on ad-hoc fixes to soften the theory's fragmentation of self and destruction of the notion of free will.


Frank, a devout Catholic, an MIT junior in computer science who hopes someday to work in artificial intelligence research, does not rely on a "local fix" to make space for other commitments, in his case a commitment to the idea of soul. He thinks that the brain is hardware. He is not put off by Marvin Minsky's description of it as a "meat machine." The brain has some rudimentary capabilities (vision, sensory perception, instinctual reactions) and beyond this there are the familiar computational agents whose interaction could potentially lead to intelligence. But here is the point where Frank parts company with many of his friends. The complexity, the elegance, and the possibilities of the human mind machines are so great that souls come to inhabit them.

If you go along with a straight computational model, the idea of free will has to get dumped along with the idea that there is a self. That is, to say the least, disturbing to those of us who would like to think that we control our own actions. But wait. We do indeed control our own actions, but only if we consider ourselves to be our souls and not just the collection of neurons that form our individual meat machines. The soul exists. It has free will. It is I.

Frank is skeptical about multiprocessor theories that refer to emergent intelligence as requiring a "critical mass" of agents around. What is needed is not critical mass, but just the right kind of "delicate programming." The combination of perfectly programmed agents is what could make intelligence emerge. There is an analogy with the early bath of organic molecules that gave birth to life. It had to be just the right molecules in the right proportions. The agents of the mind need to be delicately programmed, and this, says Frank, "is the province of the soul." It is the soul that infuses the meat machine with what makes us human. But the soul has only one way of acting, and that is through its interaction with our hardware: "It twiddles a few bits here and there." Frank sees the relationship of soul to brain as not quite a split between spirit and matter. It is more like the hacker to hardware.

The brain is a computer and the soul sort of programs it. I think of it as a combination of doing some rewiring and keying in some bytes, but there's another aspect to that, too. And that is that the soul, in addition to having the console in front of it, there would also be a direct line through the human form. This part is hard to say. It bothers me. But the soul is not in a simple relationship to the body. It is like a programmer and a computer. There is a harmony. A fit. The soul is sometimes sitting there at the console, at least one part of it is, and part of it is in the machine. It is continuously aware of the state of the machine and can change any part of it at will at any time. So I like to see that as emerging, the soul is not just typing in, it's a spiritual thing which inhabits this computer.

Sometimes when Frank programs he gets the feeling that he is "part of the machine." He knows "it is only like a hallucination," but having had an experience of feeling both programmer and part of the thing being programmed makes it easier for him to imagine this kind of relationship for the soul in the machine. It certainly doesn't seem impossible, not even implausible. When experiences with the computer make him feel he is building something that is also a part of him, he considers them points of contact with the experience of the soul. In this sense, working with computers has become part of his religious practice.

I run into Frank while he is in the middle of an argument with a friend, Mike, who challenges Frank's attempt to reconcile his religious and computational views. If Frank really thinks that souls inhabit only human computers and that this is the natural order of things, isn't artificial intelligence with its project for a non-human mind a doomed project, perhaps even a new kind of blasphemy? Frank's answer surprises Mike and surprises me.

Souls reside in people because they are special, there hasn't been anything else like them in the world. Nothing else with the computational capacity. But now what I feel is that we might, people might be at the stage where they can replicate their computational capabilities in their own manner. And then, if that happens, it may come to pass, if God allows it to happen, that after we connect together several trillion NAND gates or their equivalents, it may come to pass that some adventurous soul will decide to use that machine as its "receptacle" in this universe.*

* NAND gates are one of the elementary "logic components" for building digital circuits. They have a special property: they are universal. In principle, all other logic components can be built out of them.
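
The universality claimed in the footnote is easy to check for oneself. A minimal sketch (mine, not the book's) builds the other basic gates out of NAND alone and tests every input combination:

    def nand(a, b):
        # The single primitive: false only when both inputs are true.
        return not (a and b)

    # The standard constructions behind the claim of universality:
    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    # Verify against Python's built-in logic over all inputs:
    for a in (False, True):
        for b in (False, True):
            assert not_(a) == (not a)
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)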


Until the "soul invasion," the only intelligence we can create will be limited: a good chess program, a slightly better version of ELIZA, programs that operate by slick reasoning, mathematical analysis. But "it's all left-brain stuff, nothing that requires too much of the right brain. What it would not have are those things we cannot quantize. It would not have the ability to feel. It would be unemotional. It would certainly not be self-aware. These uniquely human functions are the province of the soul, and it is the soul that allows us to feel."

Frank has his heroes. Edwin Land, the inventor of instantaneous photography, the creator and until recently chairman of the board of the Polaroid Corporation, is one of them because he was something of a "hacker." In the enthusiasm of his identification with Land, Frank moves Land's alma mater from Harvard to closer to home. "He flunked out of MIT in order to devote himself to the things that MIT stands for. I like that." Marvin Minsky is another hero, and here the relationship is reverential. "The noblest of activities," says Frank, "is to create vessels for the wandering souls, but it can't just be a smart computer with a lot of knowledge stuffed in, a machine with a lot of showy programs and 'bells and whistles.'" The aesthetic has to be right. "You have to want to capture the essence of humanity. What I like about Minsky is that he seems to want to make a computer that a soul would want to live in."

Appropriable Models of Mind

The question of what it takes for people to feel a kinship with computational views of mind is important. When there is this sense of kinship, there is the groundwork for the development of a new psychological culture, a "computational culture" with new metaphors for thinking about the mind as program and information processor. This culture spreads through the diffusion of computational ideas much as the psychoanalytic culture spread through the diffusion of Freudian ideas.

When we interpret our dreams or comment on our friends' slips of the tongue, we are weaving psychoanalysis into the world of the everyday. What kind of processes make theories of mind capable of "moving out" from the scientific and academic environments in which they are born to a larger culture? What makes a science of mind "appropriable"?


Freudian theory suggests the beginnings of an answer, one way to think about its own appeal.

Consider Freud's theory of slips. It is a theory of why people make slips: we try to repress unacceptable wishes, but they break through all the same. By extension, I read it as a theory of why people like to think about slips: they allow us contact with these taboo wishes. And, most central to my concern, I read it as a theory of why people are attracted to Freudian ideas: Freudian interpretations offer us a way to come closer to aspects of ourselves, like sexuality and aggression, which we censor but at the same time want to be in contact with, the uncivilized that makes us human.

Computer models are seductive because they too put us in contact with issues that are both threatening and fascinating. The question here is not which theory, the psychoanalytic or the computational, is true, but rather how these very different ways of thinking about ourselves capture our imagination. Behind the popular acceptance of the Freudian theory was a nervous, often guilty preoccupation with the self as sexual; behind the widespread interest in computational interpretations is an equally nervous preoccupation with the self as a machine. Playing with psychoanalytic and computational theories allows us to play with aspects of our nature that we experience as taboo.

People are afraid to think of themselves as machines, that they are controlled, predictable, determined, just as they are afraid to think of themselves as "driven" by sexual or aggressive impulses. But in the end, even if fearful, people want to explore their sexual and aggressive dimensions; hence, the evocative power and popular appeal of psychoanalytic ideas. Similarly, although fearful, people want to find a way to think about what they experience as the machine aspect of their natures; this is at the heart of the computer's holding power. Thinking about the self as a machine includes the feeling of being "run" from the outside, out of control because in the control of something beyond the self. Exploring the parts of ourselves that we do not feel in control of is a way to begin to own them, a way to feel more whole.

Thus, the "Freudian" explanation for Freud's appeal explainspart of the appeal of computational models: each model allows usto think abollt highly charged mat.erillis. Another clement thatmakes many computational models appealing is more dependelll

Page 16: Thinking ofYourselfas a Machine · Thinking ofYourselfas a Machine ... as a black box, and she docsn'tlookinside. ... black marble and a white one, grabbing one of each, and putting

300 INTO A NEW AGE

on their form than on their content. "Appropriablc" thcories ofmind, ideas that move out into the culture at large, tend to be thosein which we can become "actively" involved. They tend to be theo­ries that we can "play with."

In the case of psychoanalysis, what we play with are "Freudian objects" such as dreams, slips, and jokes. We become used to looking for them and manipulating them, both seriously and playfully. And as we do so, psychoanalytic theory, or at least the idea that there is an unconscious, starts to feel "natural." In the case of computational models of mind, there are what I call "carrier objects" that encompass both the physical computer and ideas that grow out of programming.

Recall, for example, what is at work to help people appropriate Donald Norman's computational model of slips and, through it, the idea of mind as computer. According to the model, people, like complex computer programs, become momentarily derailed and make slips. And when people "slip" it doesn't reflect sexual fantasies or forbidden desires. It simply reveals information overload and the mechanisms of computation.

You pour ketchup into your coffee instead of cream. Norman lets you imagine a computer in your place and think of its workings as a process involving numerous subprograms. One program is called up to locate the cream, another to locate the coffee, another to get a fix on the location of the hand, another to get it going in the right direction, and others to check that it is on the right path, to verify that the position of hand, creamer, cup have not changed, and so on. In this buzz of activity it is hardly surprising that one of these programs might go wrong. At a given moment, the program that verifies that the position of X has not changed might "mistakenly" take X to be ketchup instead of cream, because another program on which it depends is still registering ketchup as "the object of greatest salience," a morsel of information left over from a previous phase of the meal during which ketchup was appropriately being poured on a cheeseburger. Once you have identified with the computer you cannot fail to identify with the error, since each of us knows that we make many more when "we" are consciously involved in a scenario as complex as this one. We are able to identify with the programs and so become sympathetic to their confusion.
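
In program terms the derailment is a stale shared register. The toy sketch below is my gloss on Norman's story, not his code: the pouring routine consults a salience register that is still holding the value left over from the cheeseburger.

    # A toy gloss on Norman's subprogram account; my illustration, not his code.
    salience_register = {"most_salient_object": "ketchup"}   # left over from the cheeseburger

    def verify_object(requested):
        # The faulty subprogram: it should confirm "cream" but instead
        # returns whatever the stale register still marks as most salient.
        return salience_register["most_salient_object"]

    def pour_into_coffee(requested="cream"):
        # Other subprograms (locate cup, steer hand, check path) omitted.
        return "pouring " + verify_object(requested) + " into the coffee"

    print(pour_into_coffee())   # pouring ketchup into the coffee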

Norman's slips theory is "appropriable" because its description of how a computer works sounds so much like a confused person that people identify with the computer to the point of believing that it makes slips for the same reason they do. From here it is a small step to feeling that we make slips because we are computers and that we are computers because of the way we make slips.

As a formal, logical argument this is circular. But as a description of how people adopt a point of view, of how they weave it into their intuitions, it is quite straightforward. Norman's theory found its way out of academic journals (a version of it appeared in Psychology Today and then in Reader's Digest) and was widely read. People easily imagine themselves in the role of the procedures, and acting out these roles feels enough like Norman's theory to give the sense of understanding it. One may forget the details of the theory, but it offers an experience of "playing computer" and of feeling comfortable in the role that has a longer-lasting effect. The experience breaks down resistance to seeing mind as machine.

The computational theory of slips is the kind of theory that passes easily into the general culture. It is able to take a nontechnical form, people can imagine themselves "in the theory," it provides a handle to think about the everyday, and it touches on emotionally charged concerns. Mark's theory of the dumb multiprocessing agents had all of these qualities. Like Norman's, it anthropomorphized machine processes so that it was easy to identify with them, one could put oneself in the place of agents around the trough, it was a way to think about fragmented identity, a problematic aspect of the self. Such experiences are the stuff upon which a computer culture is built.

Mark's computational theory of psychology is personal. He has given little thought to where it stands in relation to other theories. But a growing culture needs something more. It needs to be conscious of itself as a culture, it needs symbols, it needs heroes, it needs antiheroes. Such elements exist in the worlds of computer subcultures. The hackers had LISP as symbol of the true, the beautiful, and the good; IBM's FORTRAN was their symbol of degeneracy. One sure sign of the movement of the computer culture into the larger world is the appearance of a literature that carries these out to a larger public.

Douglas Hofstadter's 1979 Gödel, Escher, Bach: An Eternal Golden Braid is an example of that literature. It affirms computation as a culture, along with its symbols, its language, its humor, its ways of breaking down resistance to the idea of mind as machine. Hofstadter begins his book with a declaration of purpose:

Computers by their very nature are the most inflexible, desireless, rule-following of beasts. Fast though they may be, they are nonetheless the epitome of unconsciousness. How, then, can intelligent behavior be programmed? Isn't this the most blatant of contradictions in terms? One of the major theses of this book is to urge each reader to confront the apparent contradiction head on, to savor it, to turn it over, to take it apart, to wallow in it, so that in the end the reader might emerge with new insights into the seemingly unbreachable gulf between the formal and the informal, the animate and the inanimate, the flexible and the inflexible.

Hofstadter goes about this by providing puzzles, thought experiments, riddles, and dialogues, many of them dedicated to offering a way to address a stumbling block to the idea that the mind is a computer: "If machines are perfect and I am human, how can my mind be a computer?" The answer: the perfect machine is by its nature imperfect.

The mathematician and the logician know this idea in the technical form of Gödel's theorem. After a period in the late nineteenth and early twentieth centuries when a proliferation of increasingly powerful formal methods created hopes of a universal mathematics that could solve all mathematical problems, Kurt Gödel produced a dramatic theorem that scuttled once and for all any hopes that there could be such a thing. In principle, says Gödel, every formal system is limited. Any mathematics that would be sufficiently powerful to seem to promise completeness, universality, would necessarily be incomplete: it would be possible to ask it questions that could not be answered using its methods. This idea that a system is incomplete because it is strong goes against common sense.
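
Stated compactly in modern notation (a standard textbook formulation, using Rosser's refinement of Gödel's result, and not Turkle's or Hofstadter's wording): for any consistent formal system F that can be mechanically axiomatized and proves basic arithmetic,

    % There is a sentence the system can pose but cannot settle either way:
    \[
      \exists\, G_F \quad\text{with}\quad F \nvdash G_F
      \quad\text{and}\quad F \nvdash \lnot G_F .
    \]

The system is incomplete precisely because it is strong enough to express such a sentence; a weaker system could not even formulate the question.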

Gödel's thoroughly mathematical proof that a formal system (this could be a machine) is vulnerable because strong had been accessible only to a mathematical elite. Hofstadter brings it to a larger circle. His strategies are numerous. Most powerful among them is a technique he borrows from Lewis Carroll: dialogues between characters that Carroll himself used, for example Achilles and the Tortoise, to which Hofstadter adds others as necessary, among these the Crab, a personage who is the subject of an infinitely repeating drawing by Escher.

In my favorite dialogue, Achilles is talking to the Tortoise about whether it is possible to have a perfect phonograph. The answer turns out to be no. And the way of discussing the possibilities for phonograph perfection, like those for mathematical perfection and machine perfection, begins with the idea of self-reference.

A crystal glass can be shattered by a singer who hits exactly the right note. By extension, for any piece of matter, including a phonograph, there is a pitch that will shatter it. If a record whose "song" contains this pitch is played on the phonograph, the machine will "self-destruct." Thus any phonograph must be imperfect in one of two ways. It can be imperfect because it is unable to play notes with enough fidelity to reproduce the destructive tone, in which case it does not destroy itself, but is obviously imperfect, imperfect through weakness. Or it can be imperfect through strength, in which case it is able to reproduce notes in full fidelity, and the record with that "certain note" will shatter it to pieces.

The paradox at the heart of Gödel's theorem is this: if the formal system is really powerful there will be a question that can be posed within it which it cannot answer. If the phonograph is really perfect, it will be able to reproduce the sound that would shatter it to pieces. And, indeed, the Tortoise has been driving his friend the Crab to distraction by shattering an increasingly expensive series of phonographs with specially designed records that he calls "Music to Break Phonographs By."

"Music to Break Phonographs By." like the proof of Codel'stheorem. plays on the idea of self·reference.* In the history ofmathematics. self-reference has been controversial. Bertrand Rus­sell and Alfred North Whitehead, among many other logicians,

• G61kl ,,'anted to I)rol'e Ihill ari,hmelic is "incumplele." The problem is lU lind a slateillelll~OOUI numocr:s thai Call1HlI be prol'ed or di5pr(l\'(:d. tiildcl found Olle hy cll'awing un thelall10us paradOlt of Kif-reference. If a propCllitinn I' layl Ihat the prnpolilion P is falle. b~'

In Irgun~nt kno...n lince the lime of lhe Grukllueh a proJl'lnition can be neither lrue norfibe. Auume P il tru('. P aneru itl o...n fabily. SIl, if true. it mUlt be lall('. Anume it isfalst. 11 Jayl il il falM:, In il IIIl1lt be true. G&lel fuuml an arithmetical e(luil'alent of lhi~

p:.radox. B(' assillned e'Mle numocrslU propositionl in lu(h a "'ay that a errtnin propositionP sal'lthat Ih(' prop<>sitilln "'hose code nUlnu.:r il .1 (annOI be prnl·cd. Alld ...hen "'C loukup the code "'e lind that II ilthe cooe number for th(' proposilion P itself. The propositionr Ihul declares itself 10 he IIllpnl\'abk. J>rol'iIlK It "'''lIld be absurd. dilprOl'il'K it e'lllall~' §(l.

Page 18: Thinking ofYourselfas a Machine · Thinking ofYourselfas a Machine ... as a black box, and she docsn'tlookinside. ... black marble and a white one, grabbing one of each, and putting

304 ISTO A Nf;W ACE

had placcd a ban on it: propositions that referred to themselves(like the statement "This Statement is false") were declared mean·ingless. But GOdel found wa)'s of doing indirecll)' something akin1.0 what had been outlawed. smuggling enough self-rcference backimo arithmetic to show that there is at least one proposition in itIhat cannot be proven true or proven false.
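
The coding trick the footnote describes has a compact standard form, the diagonal (or fixed-point) lemma; what follows is the usual textbook rendering, not the book's own notation. Arithmetic proves a sentence G equivalent to the claim of its own unprovability:

    % The Godel sentence, via the diagonal lemma:
    \[
      \vdash\; G \;\leftrightarrow\; \lnot\,\mathrm{Prov}\!\left(\ulcorner G \urcorner\right)
    \]

Here ⌜G⌝ is the code number of G and Prov is the arithmetized provability predicate. If the system is consistent it cannot prove G, since that would yield both G and, via the equivalence, G's unprovability; a slightly stronger assumption (or Rosser's variant of the construction) rules out proving ¬G as well.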

Gödel made a breach in the wall erected by logicians against self-reference, but it remained small because Gödel's propositions seemed esoteric. The computer presence enlarged the breach because, in programming, self-reference is an everyday matter of the greatest practical use, a powerful tool for building complex programs. Within the AI and hacker community, Gödel's theorem became a symbol that "we," the computer culture, had won a battle against "them," the logicians. Gödel's "hack" had broken the system. Hofstadter dramatized the struggle and put his readers in a position to feel participants in the triumph of a new intellectual aesthetic. The mighty had fallen. Russell and Whitehead had been sacrificed on the altar of computational logic.
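
The "everyday self-reference" meant here is recursion, a definition that calls itself. A minimal example (mine, for illustration):

    def factorial(n):
        # The function refers to itself: self-reference as a routine,
        # benign programming tool rather than a paradox.
        return 1 if n == 0 else n * factorial(n - 1)

    print(factorial(5))   # 120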

The growing computer culture draws its strength from awareness of its roots and from aggressive assertion of continuity with the culture into which it is penetrating. By placing self-reference at the center of his intellectual world Hofstadter is able to recruit Bach and Escher into the computer culture. Bach (who used it in his endlessly rising canons) and Escher (who used it in his endlessly rising staircases) are made into cultural heroes just as Russell and Whitehead are made into cultural enemies. After over seven hundred pages of Gödel, Escher, Bach, Hofstadter's readers have little sympathy with the traditional logicians, not just because they wanted to ban paradox, but because they belonged to an intellectual culture that saw in paradox problems rather than power. Hofstadter succeeds in getting his readers to sense themselves as part of a new culture, a computer culture, strong enough to shrug off the culture of Russell, Whitehead, and traditional philosophy and logic.

Hofstadter began Gödel, Escher, Bach with a declaration of purpose: to bridge the gap between the mental and the mechanical. Through the phonograph story (if you are sufficiently strong you are by definition vulnerable and incomplete) and his many other examples of what he calls "strange loops," Hofstadter gives people a language for talking about the questions that come up when we consider whether we are machines. "If we are machines, are we perfect?" asks Frank, the student who put the soul into the machine. No, he comes back saying, "if we are the most powerful kind of machines, the kind that God would create," we are limited, vulnerable, weak. "You see, Professor Turkle, what Achilles and the Tortoise and the Crab are saying is that if we are machines, we are human."
