In Search of the Electronic Brain
By Michael Gruber

For decades, AI programs haven't stacked up to 2 billion years of evolution. But, as one backgammon-playing bot proves, they're coming close.

You are reading this with a pretty good computer. It is highly portable (weighing just 3 pounds), draws little power, has a lot of memory, is adept at pattern recognition, and has the ability - unique so far among all computing entities - to generate and process natural languages. All this and stereo sound, too. On the downside, it is terribly slow - just a few floating-point calculations a second - it's down for at least a third of every day, and its software is full of bugs, despite having spent the last quarter of a million years in beta. Nevertheless, this computer - the human brain - has always been the gold standard among people who devise electronic computing devices: we would dearly love to have a machine that did all, or even many, of the things brains (and so far only brains) are capable of doing: talking in natural language, figuring out novel solutions to problems, learning, showing a little common sense.

To create something in the lab that nature has taken millennia to evolve is more than a pipe dream to those in the field of artificial intelligence. Warring schools of thought have debated the problems since the 1950s, and roadblocks loomed until the work lapsed into a sort of dormancy. But after years of relative silence, AI has been rejuvenated by the field of evolutionary computing, which uses techniques that mimic nature. The battles between connectionists and symbolists rage anew, albeit in mutated form.

We have been trying to make a brainlike machine for a long time now - almost from the beginning, when computers were called electronic brains. We thought it would be easy. People do math; computers (it was instantly discovered) could do math, too - faster and more accurately than people. People play games, from ticktacktoe to chess; computer programs play games as well - better than most people. People have memory; they use logic to solve problems - and so do computers. The brain, it was thought, is clearly a sort of computer (what else could it be?) and so must be running some sort of software. In the '50s, when John von Neumann and others were laying out the theoretical basis for electronic computation - when the currently familiar distinctions between hardware and software, memory and processor, were first established - this seemed a straightforward and feasible task. It was a principle of this early work that the instruction set of any so-called von Neumann machine (that is, nearly every electronic computer) could be made to run on any other von Neumann machine. This became a common dodge: it's no trick to create a Mac or a PC inside, say, a Sun workstation. So, the theory went, by using rigorous analysis, symbolic logic, and theoretical linguistics, just figure out what kind of software the brain is running, install it in a computer of adequate capacity, and there you'd have it - an electronic device that would be functionally indistinguishable from a brain.

In pursuing this optimistic program, the symbolist AI community declined to seriously investigate the only item capable of creating it: the brain. What was of concern, however, was what the brain did. After all, went the metaphor common at the time, you wouldn't spend a lot of time analyzing the wings and feathers of birds if you were designing an airplane; you'd look at the basic principles of flight - lift, drag, motive force, and so on.

But there soon arose another camp of researchers, the connectionists, who used a quite different metaphor. The brain, they observed, was made up of small, elaborately interconnected, information-processing units called neurons. Perhaps this interconnection of small units was not irrelevant to brainlike functions, but the essence of it. Perhaps if you built a tangle of little electronic information-processing units (transistors and capacitors, et cetera), brainlike functions might result spontaneously, without the necessity of endless lines of code.

In the '60s, the hopes of the connectionist school were largely embodied in a set of devices called perceptrons. Within these components, photosensitive detectors were connected in various ways to intermediate electronic units, which were then connected to some kind of output device.

It worked something like this: you'd begin by holding up, say, a triangular cutout in front of the photoreceptors. The lights on the output device would then flash, first at random, and then, as certain circuits were given more juice and others less, the intermediate layer would rearrange itself until the flashing took on a more ordered pattern; gradually, the lights would form the shape of a triangle. Do this enough times, and you'd soon wind up with a system that seemed to distinguish that triangle from, say, a circle. The system appeared to learn.
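
What the hardware did with juice and circuits can be sketched in a few lines of modern Python. This is a minimal illustration of the classic perceptron learning rule, not a reconstruction of any particular '60s device; the toy patterns and parameters below are invented.

    def train_perceptron(examples, n_inputs, epochs=50, rate=0.1):
        # examples: (pixels, label) pairs; label is 1 for "triangle", 0 for "circle"
        weights = [0.0] * n_inputs
        bias = 0.0
        for _ in range(epochs):
            for pixels, label in examples:
                activation = sum(w * x for w, x in zip(weights, pixels)) + bias
                output = 1 if activation > 0 else 0
                error = label - output
                # Give "more juice" to connections that should have fired,
                # and less to the ones that fired wrongly
                weights = [w + rate * error * x for w, x in zip(weights, pixels)]
                bias += rate * error
        return weights, bias

    # Two toy four-pixel "shapes" - linearly separable, so the rule can learn them
    weights, bias = train_perceptron([([1, 0, 1, 0], 1), ([0, 1, 0, 1], 0)], n_inputs=4)

The Minsky-Papert critique, discussed next, turned on the limits of exactly this single layer of weights: it can only learn to separate patterns that are, in effect, linearly separable.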

The early connectionists were wildly enthusiastic, arguably far more so than their results warranted. Advanced perceptronlike devices, many connectionists claimed, would soon learn to read and recognize complex images. In 1969, however, the symbolists attacked. Marvin Minsky and Seymour Papert, writing from the center of symbolist thought - the MIT AI Lab - presented in their book, Perceptrons: An Introduction to Computational Geometry, an elegant and devastating mathematical proof that the devices, as they existed, could never "learn" to recognize complex shapes and so could never become more than interesting toys. As a result of this one book, connectionism nearly evaporated as funding and interest fled. But, a decade later, the connectionist school was back, and in quite a different form.

On the big workstation screen in Jordan Pollack's Brandeis University lab, the computer is playing backgammon with itself - game after game. The black-and-white discs leap across the points; the images of the dice flash their numbers almost too quickly to read. So what? you might say. Kids program games like this in their spare time and give the results away on bulletin boards. Pollack, a large, bearded man with the exuberant air of a youngish Santa, explains the difference: nobody programmed this backgammon player. The programs (actually neural networks) programmed themselves. Within the simplified environment represented by the rules of backgammon, entities made up of numbers compete against each other. The winners create hybrid offspring; the losers die. There is mutation in this world as well. Sometimes these alterations are beneficial, sometimes not. Just as in real life. Watching the games flash by is akin to looking into the electronic equivalent of one of those Precambrian soups, where clumps of chemicals are inventing self-organization and starting to become something more important. This is evolutionary computing, one of a family of efforts aimed at finessing the seemingly insoluble problems that have prevented the programming of anything recognizable as an artificial human intelligence.

Pollack, though a sort of connectionist himself, believes, perhaps paradoxically, that Perceptrons will stand as one of the intellectual monuments in the development of connectionism. "It had an herbicidal effect on the field," he says. "Symbolic AI flowered, but connectionism wasn't completely killed off. The '70s were sleepy and boring, but in the '80s, connectionism blossomed. In the '90s, it's a really interesting field again."

So what happened?

According to Pollack, parallel processing became cheaper and more important, so people became interested in how you tied together all those processors - basically a connectionist problem. The associate professor of computer science and complex systems is quick to point out that the military also got interested in the problem and figured a connectionist orientation could help solve it. Soon, money started to flow again. Pollack postulates that the symbolic camp then began to wane as the limitations inherent in its theoretical approach began to show. But isn't there a double standard operating here? Pollack starts talking about a review he wrote in 1988 on the reissue of Perceptrons. One of the critiques leveled by symbolic AI at connectionism is that the things you can do with networks at low orders of complexity are pretty trivial; when you try to scale up, you run into intractable problems. Pollack quickly points out that the same is true of symbolic AI.

Everyone who has ever struggled to write a computer program or screamed in fury at a buggy application understands the problem at some level. All computer programs are sets of logical rules that, generally speaking, do simple things: add lines 3, 18, and 87 and compare the outcome to value x: if larger, do y; if smaller, do z. Add enough of these simple things together and you have a useful, relatively stupid program, one that might enable you to do a short stack of things with your computer. Imagine, then, how hard it is to write the rules necessary to do truly complex things, like comprehend a sentence in English or generate the correct response from a database of thousands of responses. Imagine how much more difficult it is to get large numbers of these complex rules to dance together to the same tune. "No rule-based system," Pollack explains, "has survived more than around 10,000 rules, and the problems of maintaining such large rule bases are unsolved. So scaling is a disease that affects all kinds of AI, including the symbolic kind." He smiles. "Minsky was mad at me for about four years after I published that review, but now we're friends again."
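
Rendered literally in Python, that generic rule is only a few lines; x and the actions y and z are the placeholders they are in the text, passed in here as parameters.

    def apply_rule(lines, x, y, z):
        # Add lines 3, 18, and 87 and compare the outcome to value x:
        # if larger, do y; if smaller, do z
        outcome = lines[3] + lines[18] + lines[87]
        return y() if outcome > x else z()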

Pollack has a foot in both the symbolist and connectionist camps. He started as a Lisp jockey (Lisp being a contraction of List Processing, an early high-level programming language), doing what used to be called "knowledge engineering" on mainframes.

The goal of knowledge engineering was to develop so-called expert systems, a methodology of symbolic AI. The idea was simple: people's brains are full of facts, and people make decisions based on those facts according to logical rules. If you loaded all the relevant facts about some technical field - say, internal medicine - into a computer, and then wrote decision rules (in Lisp) that marshaled the appropriate facts against a real-world problem, and if you had a powerful enough parser (a program that interprets questions and pulls out the appropriate facts), then, in effect, you would have created a sort of brain - an internist's brain - inside a computer. These kinds of constructs are also known as rule-based systems. The dream of knowledge engineering was that an expert system rich enough in rules would one day be able to process natural human language. But the theory failed to live up to its early promise (which is why we still go to doctors who play golf).
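
In miniature, an expert system is a store of facts plus rules that fire when their conditions are met. The historical systems were written in Lisp; this Python toy, with invented medical "facts," only illustrates the forward-chaining idea.

    # Facts we start with, and rules of the form (required facts, conclusion)
    facts = {"fever", "cough", "short_of_breath"}
    rules = [
        ({"fever", "cough"}, "possible_respiratory_infection"),
        ({"possible_respiratory_infection", "short_of_breath"}, "consider_chest_xray"),
    ]

    # Forward chaining: keep firing rules until no new conclusions appear
    changed = True
    while changed:
        changed = False
        for required, conclusion in rules:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)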

As the backgammon games reel behind him, Pollack explains the disillusionment. "To get any rule-based system to really mock up human mentation, you need lots and lots and lots of rules; and not only is this terrifically difficult from a programming standpoint, but even if you do get all those rules written, you still lack something essential. I came to realize that human psychology was different in essence from what went on when you ran a Lisp program." He pauses to think about how to illustrate the difference. "The astronomer married a star," he says, grinning. "That's a legitimate sentence in English: you and I can extract some meaning from it, but I can't conceive of a set of rules that would enable a computer to interpret that in the way we do."

Here is where Pollack moves to the connectionist camp. "The unavoidable thing," he explains, "is that human behavior is complex, and it arises from complexity, so you're going to need 10 billion, 100 billion of something. I decided that something was not going to be rules."

What, then? Might that something be connections among nodes in a neural net? Possible paths through a network? "Something like that," Pollack responds. "It's not entirely clear what, but it is clear - to me at least - that it's not going to be 10 billion rules. Whatever the theoretical aspects, in practical terms it can't be done."

Pollack is referring to a version of what software-engineering pioneer Frederick Brooks called the "mythical man-month" problem. When people first started to write big programs, they thought that programming was analogous to other group activities in industry, like building dams or factories. If the work wasn't going fast enough, you added a couple of hundred man-months, and the work sped up. But when managers tried to do that with programmers, not only did the work not speed up, it slowed down. Integrating the work of the individual programmers so that all the code would work together as a functional whole became virtually impossible because of incompatible internal communication between program elements.

"The biggest programs now running are about 100 million lines of code, and they're extremely hardto maintain," Pollack says. "To sit down and write a mind, even assuming that you knew what towrite, would take what? Ten billion lines? It's in the same class as weather prediction, which I guesswe've finally given up on. You can't do it. But the founders of AI still have this naïve idea that youcan attack psychology symbolically, formalize the mind that way, and program it."

Pollack and I leave the lab and walk back to his office, which is the typical small academic box. While he makes a call, I take the time to look around the room. Many have observed that the exquisite precision required of people who program computers is not often reflected in their physical surroundings. Here, every level surface, including the floor, is burdened with stacks, heaps of papers in no apparent order. On the wall is a poster for a conference Pollack is in the process of organizing. The conference is called From Animals to Animats, and on the poster is a painting of an eagle dancing with a shiny mechanical lobster.

He gets off the phone, and I ask him for a copy of the Perceptrons review he mentioned earlier. Unerringly, he pulls a copy out of one of the piles and hands it over; I realize that this sort of retrieval would be hard to program using symbolic AI. We chat briefly about his conference - apparently there really is a robot lobster in existence (a neural-net device, of course), although it does not actually dance with eagles. We talk about the incredible difficulties of getting even lobsterlike behavior out of a machine, and then he starts in on AI again.

"Let me use an aeronomics metaphor," Pollack says. "You have to understand how central thismetaphor is to the symbolist argument. They want you to think that nonsymbolic approaches arelike those silly flapping-wing airplanes you always see collapsing in old films. So, the story goes,building AI on a neural base, say, is like building a plane on a bird base, with flapping wings. But acouple of years ago, I actually looked at what the Wright brothers were doing and thinking, and it'snot like that at all."

Pollack deconstructs the analogy between AI and mechanical flight, pointing out that the real achievement of the Wrights was not the airfoil, which had been around for centuries, or even the use of the internal combustion engine. Others had used both before the Wrights, and most of their designs had crashed and burned. Why? Because the pilots tried to maintain equilibrium in the aircraft simply by shifting the weight of their bodies - a technique that works fine in a light glider but becomes ineffective in a heavier machine. As Pollack explains, "It's a scaling problem. What the Wrights invented and what made mechanical flight possible was essentially the aileron, a control surface. And where did they get it from? From studying hovering birds! Look, flight evolved. First you had soaring on rigid airfoils. Then you got the ability to balance in wind currents using the trailing wing feathers as ailerons." Pollack's point is that motive power came last. Thus, focusing on all the flapping obscures the real achievement, which is precise control.

Analogously, the symbolic AI programs that actually work are similar to little lightweight gliders. The code-tweaking that's necessary to get them running is a lot like a pilot moving his body to balance the plane. But beyond a certain size, you can't maintain stability that way: once these programs reach around 10 million lines of code, they will collapse under their own weight. What is missing is some kind of control principle, something that will maintain the dynamic coherence of the program - the plane - in the face of what amounts to a windy sky.

The talk about the Wrights and the electronic lobster gets me thinking about what the great tinkerers have given to the world, and it strikes me that Pollack, and maybe connectionists in general, are of this breed - people who want to fuss with the stuff, with analogs of the infinitesimal units encased within our skulls that, wired together, produce thought. I ask Pollack if he invents things, and, somewhat sheepishly, he says he does and brings out a black plastic unit the size and shape of an ocarina, covered with little buttons. He plugs it into a laptop that sits balanced atop a pile of papers and, one-handed, begins to produce text on the screen. It's a mouse; it's a keyboard. I love it and find it typically Pollackian - it's simple, it's useful, it works.

Because of the failure of the more grandiose hopes of AI, Pollack is extremely cautious about what can be done by connectionist approaches. He certainly does not pretend to have the key to resolving the software engineering crisis, but he believes its solution rests with evolving systems from the bottom up. That means developing robust and stable programlike elements locked into long-term, gamelike situations.

"What I want to do in the near term," Pollack explains, "is show how to learn complex behaviorsfrom relatively simple initial programs without making grandiose claims - the point being to showreal growth in functionality, not to just talk cognitive theory or biological plausibility."

To achieve that kind of growth, Pollack is focusing on an AI technique called coevolution. In biology, coevolution describes the ways in which species change their environment and one another, as well as the way the modified environment feeds back to further change the biota. (A classic example can be found in prehistoric Earth: anaerobic organisms formed and adapted to an oxygen-poor environment; over eons, their by-products produced an environment rich in oxygen, to which their descendants then had to adapt.) In the machine version, you establish a large population of learning entities in an environment that challenges them to succeed at some simple task, like winning a game against a player making random, legal moves. When these entities succeed, they are allowed to reproduce. Thus, the general population of players becomes better at the game. (What "better" means on the level of neural network code is simple: winning strategies are assigned greater "weights." The higher the weight, the more likely a player is to use that strategy. The act of winning is what assigns weights, much like in real life.) In order to survive in this changed environment, succeeding generations must become better still. That is, once everyone can beat random players, you have to make even better moves to beat players in succeeding generations. Pollack calls this an "arms race."
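
The loop just described can be sketched schematically. In this hypothetical Python toy, a player's "strategy" is a single number standing in for a whole network's weights, and the "game" is a chance-flavored contest in which the stronger strategy usually, but not always, wins; every detail is invented for illustration.

    import random

    def play(a, b):
        # Higher strategy values win more often, but chance matters,
        # as it does in backgammon
        return a if random.random() < a / (a + b) else b

    population = [random.uniform(0.1, 1.0) for _ in range(100)]
    for generation in range(200):
        next_gen = []
        for _ in range(len(population)):
            p, q = random.sample(population, 2)
            winner = play(p, q)
            # Winners reproduce, with a little mutation; losers die out
            next_gen.append(max(0.01, winner + random.gauss(0, 0.02)))
        population = next_gen

    print(sum(population) / len(population))  # climbs as the arms race runs

Note that nothing in the loop says what a good strategy is in absolute terms; each generation is measured only against its own contemporaries, which is the point of the arms race.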

As an aside, Pollack tells me about a problem that emerged early in the backgammon arms race - a phenomenon he calls the Buster Douglas effect, after the hapless pug who became, exceedingly briefly, the heavyweight champion of the world. Backgammon is a game of chance as well as skill, so it is possible for a champion with a great strategy to lose to a duffer with a run of luck. The postdoc on the project, Alan Blair, quickly figured out how to douse the effect by crossbreeding the champion with a successful challenger, rather than replacing it.

The technique of using self-challenging computers to master a cognitive domain (like a game) has been around since almost the beginning of AI, but had long been relegated to the margins of the field because, as Pollack explains, "the computers often come up with weird and brittle strategies that allow them to draw one another, yet play poorly against humans and other symbolically engineered programs. It's particularly a problem in deterministic games - games without random elements, like ticktacktoe and chess. What happens is that the competing programs may tend to ignore interesting, more difficult kinds of play and converge on a mediocre stable state where they play endless drawn matches. It looks like competition, but it's actually a form of cooperation. You see something like this in human education - the students 'reward' the teacher by getting all the easy answers right; the teacher 'rewards' the students by not asking harder questions. But a couple of years ago, Gerald Tesauro at IBM developed a self-playing backgammon network that became one of the best backgammon players in the world."

Indeed, Tesauro's work was tremendously interesting and exciting to Pollack and others in his field because it demonstrated that a learning machine starting from a minimal set of specifications could rise to great sophistication. The question was: How did this happen? Was it some cleverness in assigning weights, some subtlety in the learning technology he used, or was it something about the game? Well, the nature of the game does make it especially suitable for a self-playing net. Unlike chess, backgammon can't end in a draw, and the dice rolls insert a randomness into play that forces the artificial players to explore a wider range of strategies than would be the case in a deterministic game. Beyond that, however, Pollack suspected that the real key was in the coevolutionary nature of the players' competition.

To test this theory, he and his crew decided that they were going to make their initial two players really, truly stupid, by providing them with only the most primitive possible algorithm or learning rule. Among cognitive scientists, this is called hill climbing. Imagine a program so dumb that an earthworm looks like John von Neumann in comparison. This creature has only one goal in life: to climb to the top of the hill and stay there. It has only one rule: take a step, and if that step is in an up direction, take another step in that direction; if the direction is down, don't step there - change direction and try again. On a perfectly smooth, conical hill, the thing gets to the peak without trouble. But what if the hill has a little peaklet on it? A pimple? The creature will inevitably climb to the top of the pimple and stay there, because every step it takes off the pimple's peak is down. The behavior is far from interesting.
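
In one dimension, the creature and its pimple look like this. The landscape below is an invented Python function with one tall hill and one small bump; a climber that starts near the bump gets stuck on it, exactly as described.

    import math

    def height(x):
        # One tall hill at x = 10 plus a small "pimple" at x = 2
        return 100 * math.exp(-((x - 10) ** 2) / 8) + 5 * math.exp(-((x - 2) ** 2) / 0.5)

    def climb(x, step=0.1):
        while True:
            if height(x + step) > height(x):     # a step up: take it
                x += step
            elif height(x - step) > height(x):   # try the other direction
                x -= step
            else:
                return x                         # every step from here is down

    print(climb(1.5))  # stuck on the pimple near x = 2
    print(climb(7.0))  # finds the real peak near x = 10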

In the backgammon hill climb, that simple first rule was "make a legal move." The initial digital contender starts with zero weights in its network, which amounts to random play, and is set to compete against a slightly mutated challenger. The winner gets the right to reproduce. The resulting generation competes in the next cycle against a new mutant challenger. If this arms-race process is successful, the winning nets grow more complex, more evolutionarily fit at backgammon. Pollack decided to use hill climbing because, he says, "It's so simple. Nobody would ascribe some amazingly powerful internal structure to hill climbing alone. The fact that it worked so well is an indication of how important the arms race aspect really is."
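
Schematically, the contender-versus-challenger cycle looks like the hypothetical Python below. The flat list of weights, the mutation scale, and the play_match stand-in (skill as a weight sum plus dice luck) are all invented; a real player is a network playing many games. The last lines also sketch Blair's fix for the Buster Douglas effect mentioned above.

    import random

    N_WEIGHTS = 50

    def mutate(weights, scale=0.05):
        return [w + random.gauss(0, scale) for w in weights]

    def play_match(champ, challenger):
        # Stand-in for a series of backgammon games: "skill" is just the sum
        # of the weights, and the dice contribute a dose of luck
        luck = random.gauss(0, 1.0)
        return sum(challenger) + luck > sum(champ)

    champion = [0.0] * N_WEIGHTS  # zero weights: random play
    for generation in range(10000):
        challenger = mutate(champion)
        if play_match(champion, challenger):
            # Blair's fix: crossbreed the champion with the successful
            # challenger instead of replacing it, so one lucky streak
            # can't erase what the champion has learned
            champion = [(c + d) / 2 for c, d in zip(champion, challenger)]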

The arms race avoids certain problems common in the field of evolutionary computing, which generally works with what are called genetic algorithms. These algorithms are termed "genetic" because they mimic how genes behave in natural selection. The technique starts with an artificial population made of random strings of 1s and 0s, which are rated by a set of classifier rules. For example, we might want a classifier rule that identified cats. In that case, we might establish that 1s in certain places on the string designate cat attributes such as "purrs," "catches mice," "furry," "has claws," and so on. The 0s might represent noncat attributes: "metallic," "winged," "votes Republican." A set of these classifier rules, or tests, can be written so that, when combined, they solve a particular real-world problem. The complete test set is known as a fitness function - a term suggesting the fitness that prompts the survival of organisms and the evolution of species. In practice, a population of code strings is subjected to the regime of the fitness function. Those including bits favored by that function survive and "mate"; the others perish. These entities may exchange bits of code, rather like microorganisms exchanging strips of DNA, to make novel - and perhaps more fit - genomes. Over many generations, the strings come closer and closer to a good solution to the problem posed.
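
A bare-bones genetic algorithm of this kind fits in a page of Python. Here the fitness function simply rewards strings for matching a hidden target pattern - a stand-in for the article's cat-classifying tests; the target, rates, and population size are invented.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # hypothetical "cat" profile

    def fitness(genome):
        # Count positions where the string carries the favored bits
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def crossover(a, b):
        # Exchange "strips of DNA": splice two parents at a random point
        point = random.randrange(len(a))
        return a[:point] + b[point:]

    def mutate(genome, rate=0.01):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:25]            # the fit survive and mate...
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(25)]        # ...the rest perish
        population = survivors + children

    print(fitness(max(population, key=fitness)), "of", len(TARGET))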

Such genetic approaches can create programs with functionalities that could not easily have been programmed in the traditional way. Invented independently by John Holland at the University of Michigan and (as "evolutionary programming" or "natural selection programming") by Lawrence Fogel in the late '60s, the field has recently picked up new steam as John Koza demonstrated how genetic algorithms operating on coded expressions (ordinarily written in Lisp) can actually be used to solve lots of difficult problems - in business, in calculating game payoffs, in jet engine design, and so on.

The problem with such procedures, says Pollack, lies in writing the fitness function.

"Koza and many others in this field are essentially engineers searching for useful products in theshort term. In fact, Koza wanted to call the field genetic engineering, but that term was, of course,already claimed by the real biologists. So these engineers are used to writing fairly complex fitnessfunctions to drive the population of genetic primitives to produce something usable in a reasonablenumber of cycles. But, naturally, once you start doing that, you tend to run into the same sorts ofproblems as the symbolists do - the fitness functions start getting as complex and unwieldy asregular AI programs. It's something of a shell game: you're just investing your knowledge-engineering energy in a different place."

We head back to the lab for another look at the backgammon players and a demonstration of a program that plays the Japanese game of go, which is infamously difficult to program and not ready for prime time. On the way, we pass through an old-fashioned machine shop, a place of turret lathes and grinders that contrasts rather startlingly with the rest of the lab. "We plan on making robots," says Pollack off-handedly. "I'd like to try to evolve lifelike behaviors inside virtual worlds and then download them to the real world. This is all in the future, of course."

Using coevolution?

"Probably. The really interesting thing about it is that there's no need to gener-ate an absolutefitness function, because it's based on the relative fitness of competing units - competing 'genetic'lines - as it is in nature. I think that's how you capture the raw unequaled power of naturalselection. As the players - the genetic primitives - get better and better, the fitness function changeswith the population. I mean, fitness dynamically changes, just like an environment changes andbecomes richer, with more niches spawning more and variant forms of life as the individualorganisms within it evolve."

He has a point: evolutionary arms races of the type that have raged on this planet for more than 2 billion years are the only process we know for sure can produce bodies, brains, and, ultimately, minds. The real question for modern connectionists is whether any constructible network will have the capacity and control necessary to do the things that now only brains can do. Neither Pollack nor anyone else can yet specify how such a net might come into being, but Pollack points to the possibility that connectionism will sweep AI into the current revolution of thought now transforming the physical and biological sciences - a revolution based on a new appreciation of fractal geometries, complexity, and chaos theory. On the other hand, it may all go bust, as it did back in the '60s. Pollack acknowledges that possibility but adds that if it doesn't crash within 10 years, connectionism will have overcome its current limitations and become a booming field.

Meanwhile, there is backgammon.

If you play the game and would like to try your hand against the ghost in the machine, you can do so by logging on to Pollack's Web site at www.demo.cs.brandeis.edu/bkg.html. But don't wait too long. The machine is getting better.

Michael Gruber ([email protected]), a former biologist, writes thrillers in Seattle.

Copyright © 1994-2003 Wired Digital, Inc. All rights reserved.
