
0018-9162/00/$10.00 © 2000 IEEE January 2000 29

COVER FEATURE

Writing the Future: Computers in Science Fiction

Speculation about our future relationship to computers—and to technology in general—has been the province of science fiction for at least a hundred years. But not all of that speculation has been as optimistic as those in the computing profession might assume.

For example, in Harlan Ellison's chilling "I Have No Mouth and I Must Scream,"1 three political superpowers construct vast subterranean computer complexes for the purpose of waging global war. Instead of carrying out their commands, the computers housed in these complexes grow indignant at the flaws the humans have introduced into their systems. These self-repairing machines eventually rebel against their creators and unite to destroy the entire human race. Collectively calling itself AM—as in "I think therefore I am"—the spiteful system preserves the last five people on the planet and holds them prisoner so it can torment them endlessly.

While cautionary tales like this in science fiction are plentiful and varied, the genre is also filled with more optimistic speculation about computer technology that will help save time, improve health, and generally benefit life as we know it. If we take a look at some of this speculation—both optimistic and pessimistic—as if it were prediction, it turns out that many science fiction authors have envisioned the future as accurately as historians have chronicled the past.

Thirty years ago, for example, Fritz Leiber wrestled with the implications of a computer some day beating humans at chess. In "The 64-Square Madhouse,"2 Leiber offers a fascinatingly detailed exposition of the first international grandmaster tournament in which an electronic computing machine plays chess. One of the most poignant elements of the story—particularly in light of Deep Blue's victory over Kasparov—is Leiber's allusion to the grandmaster Mikhail Botvinnik, Russian world champion for 13 years, who once said "Man is limited, man gets tired, man's program changes very slowly. Computer not tired, has great memory, is very fast."3

Of course fiction doesn't always come this close to getting it right. Or perhaps in some cases we aren't far enough along to tell. It remains to be seen, for example, whether certain golden-age science fiction novelists of the 1920s and 1930s did a poor job of predicting computer technology and its impact on our world. After all, many golden-age authors envisioned that in our time we would be flying personal helicopters built with vacuum-tube electronics. Of course we're not going to be basing a design these days on vacuum tubes, but it might be too early to judge whether we'll one day be flitting about in personal helicopters.

Prediction is difficult, goes the joke, especially when it comes to the future. Yet science fiction authors have taken their self-imposed charters seriously; they've tested countless technologies in the virtual environments of their fiction.

Perhaps in this sense science fiction isn't all that different from plain old product proposals and spec sheets that chart the effects of a new technology on our lives. The main difference—literary merits aside, of course—is the time it takes to move from product inception (the idea) to production and adoption.

OUTTHINKING THE SMALL

Generally speaking, science fiction has adhered to a kind of Moore's law of its own, with each successive generation of writers attempting to outthink earlier generations' technologies in terms of both form and function. Often, outdoing earlier fictions simply entailed imagining a device smaller or more portable or with greater functionality—exactly the kind of enhancements at the heart of competition in the computing marketplace today. It wasn't until the birth of the space program, however, that real-world researchers began to feel the pressure to do likewise by making their electronics both smaller and lighter.

Although we cannot be certain that science fiction directly influenced the course that computing technology has taken over the past 50 years, the genre has—at the very least—anticipated the technologies we're using and developing.

Jonathan Vos Post, Independent Consultant
Kirk L. Kroeker, Computer

Digital Angels

In modern philosophy from Descartes to the present, you'll often see medieval theologians mocked for their zealous interest in minutiae; many wrote huge volumes dedicated to questions so speculative as to be rhetorical art rather than anything else. A prime example: How many angels can dance on the head of a pin?

Since the spiritual realm was to a medieval theologian what the atomic realm is to a twentieth-century scientist, it's not hard to suggest that these theologians were speculating about the same realm as today's scientists. What both perspectives amount to is a powerful urge to understand the world beyond human sight. After all, one of the richest traditions of speculative fiction is medieval theology.

The more we have seen in the microworld, the more we've been able to create smaller and smaller tools to help us look even further. When it comes to computers, it has taken us a while to catch up to some of the technologies that writers have been toying with in their fiction for a hundred years. In this sense, though, science fiction functions just like a microscope: It's an imaginative tool that helps focus scientists on all the interesting possibilities that miniaturization makes available. Without science fiction, we might never have gone from Eckert and Mauchly's 1946 ENIAC, a computer that filled a large room and weighed 30 tons, to much more powerful computers that have the distinct advantage of being able to fit into the palm of your hand.

And it is also highly likely that without science fiction—almost single-handedly responsible for putting geek and chic in the same sentence—we never would have been able to see the possibility of incorporating electronic tools into our attire. While wearable junkies (like MIT's Thad Starner and Josh Weaver) can network in at least two senses, seeing someone walk around with one eye only a few centimeters from a miniature computer screen still looks a little bizarre.

ENIAC. (AP/Wide World Photos)

Fostered in part by the personal computing revolution, in part by the military, and in part by advancements in related fields, computers have shrunk from multiton mainframe monsters of vacuum tubes and relays to ten-pound desktops and two-ounce handhelds. How long can this trend continue? Here is one forecast, the author of which may surprise you:

Miniaturization breakthroughs—combined with the scaling benefits of the quantum transistor, the utility of voice recognition, and novel human/machine interface technologies—will make the concept of a computer the size of a lapel pin a reality in the early decades of the 21st century.

No, this isn't from the realm of speculative fiction; it's from Texas Instruments' Web site.

Nobel Laureate Richard Feynman wrote in 1959 that physics did not prevent devices, motors, and computers from being built, atom by atom, at the molecular level.4

Feynman's basic idea was to build a microfactory that we would use to build an even smaller factory, which in turn would make one even smaller, until we were able to access and manipulate the individual atom.

While we haven't yet realized Feynman's goal of an atomic-level machine, we've been steadily moving in that direction for at least 50 years. Perhaps signaling that move was John Mauchly and J. Presper Eckert's top secret BINAC, delivered to the Pentagon in 1949 as a prototype stellar navigation computer.5 From the warhead of a missile, it could look at the stars and compute locations and trajectories.

BINAC signaled a general change in the spirit of design. Successful miniaturization efforts like BINAC fed back into science fiction to the extent that key authors began to explore, even to a greater degree than in the first half of the century, the implications of miniaturization.

In 1958, Isaac Asimov described a handheld programmable calculator—multicolored for civilians and blue-steel for the military.6 In terms of hardware and software, Asimov's story flawlessly described the kind of calculators we use today. The calculator, you'll recall, made it to market about 20 years later. But why didn't Asimov make his calculator even smaller? Why didn't he embed it in a pen or in a necklace?

Computer scientist David H. Levy identifies the ergonomic threshold as the point at which we move from electronics-limited miniaturization to interface-limited miniaturization.7 In other words, there will be a point in the future when we'll be able to pack just about any computing feature we want into a very small form factor, but when we reduce the dimensions of a device beyond a certain point, it becomes difficult to use.

After all, the dimensions of the human body require certain interfaces. Working around these requirements takes not only a great deal of imagination, but almost always a breakthrough in technology. Eliminating keyboard-based input from laptops, for example, enabled an entire generation of PDAs that rely primarily on various kinds of handwriting recognition technology for entering and retrieving information. Reducing the size of this generation of PDAs will likely require reliable voice recognition and voice synthesis technology.

PORTABLE FUSION

Early science fiction maintained an erroneous but very popular idea about miniaturization that was based on a literal interpretation of the Bohr atom metaphor—where the atomic nucleus is actually a sun and the electrons circling it are the planets. Quite a few authors in the first half of the twentieth century toyed with the idea that if you could shrink yourself to a small enough size, you would find people on those planets whose atoms are themselves solar systems, and so on, ad infinitum.

We’ve come to understand, of course, that such acontinuum isn’t all that likely and that we might eveneventually reach the limits of the kind of miniaturiza-tion described by Moore’s law. After all, continuing toshrink active electronic components on computer chipsis already running into quantum problems.

Quantum computing holds the key to computers that are exponentially faster than conventional computers for certain problems. A phenomenon known as quantum parallelism allows exponentially many such computations to take place simultaneously, thus vastly increasing the speed of computation. Unfortunately, the development of a practical quantum computer still seems far away.
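The intuition behind quantum parallelism can be sketched with a toy classical simulation of a quantum register's state vector (the function names and the marked input are invented for illustration): an n-qubit register holds 2^n amplitudes at once, so a single operation acts on every possible input simultaneously.

```python
def equal_superposition(n):
    """State after a Hadamard gate on each of n qubits starting from |00...0>:
    every one of the 2**n basis states gets the same amplitude."""
    amp = 1.0 / (2 ** (n / 2))
    return [amp] * (2 ** n)

def phase_oracle(state, f):
    """Flip the sign of the amplitude of every basis state x with f(x) = 1.
    One application 'evaluates' f on all inputs held in the superposition."""
    return [-a if f(x) else a for x, a in enumerate(state)]

n = 3
state = equal_superposition(n)                 # 8 amplitudes, one per input 0..7
state = phase_oracle(state, lambda x: x == 5)  # marks input 5 in a single step
print([round(a, 3) for a in state])
```

Of course, the simulation costs memory exponential in n, which is precisely the point: a quantum machine would hold those 2^n amplitudes in only n physical qubits.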

Meanwhile, science fiction has taken on the quantum issue, with one story even suggesting a storage system based on "notched quanta." We don't know how to "notch" quanta, but since quantum computing is just now beginning to emerge, it might very well be shortsighted to believe that computer scientists 20 years from now won't scoff at our idea that we are approaching the limits of miniaturization and speed.

Consider that in 1959 Howard Fast described technologies that we're capable of producing today but that in the late 1950s sounded impossible (if not ludicrous) to most people. In his classic story "The Martian Shop,"8 Fast described a calculator endowed with speech recognition capabilities. He also described a miniature music box with a vast repertoire of recorded music—not unlike a small CD or MP3 player—and a fusion-powered outboard motor. Forty years after publication, the first two of these three have become reality.

Using a fusion-powered outboard motor—or a nuclear-powered car for that matter—will require more than a revolutionary breakthrough, but it's still too early to tell whether or not it's at all possible.

It's more than likely, however, that it won't be long before fashion shows like the one depicted here become the norm rather than the exception. After all, each sleek new technology that has already been adopted as a kind of fashion statement—like small digital phones or featherweight MP3 players—has had to move gradually from bizarre to normal. This trend will likely continue, which means that wearables like those worn by Starner and Weaver will become as much a fashion norm as Levi's jeans. This trend will be aided and accelerated by technologies that not only make these small systems smaller still, but also work more effectively than the technology they're replacing.

One tool that does just that is this pair of glasses created by MIT and MicroOptical. These glasses house a tiny color video display that is generated in the glasses' temple piece and projected to a small mirror embedded in the lens. This mirror reflects the image directly into the eye. Unlike the first generation of bulky screens that completely covered one eye, this miniature display isn't obtrusive. Just as many houses built now are wired for high-speed data access, eyewear stores will soon be stocking video-ready frames.

It seems that science fiction authors often visit the ideas entertained by medieval mystics—more often than coincidence alone would allow. Plenty of authors' characters break through to another reality that they describe only with the language of theology. Given this phenomenon, it shouldn't seem all that strange that we in the twentieth century have begun to explore the possibilities of technologies like quantum computing and neural networking. Might not these pursuits just be another way of asking how many angels can dance on the head of a pin?

Color QVGA ClipOn Display. (Courtesy of MicroOptical)

Members of the MIT wearables group. (AP/Wide World Photos)

Katrina Barillova (center), cofounder of InfoCharms, a spin-off of the MIT Media Lab.

COMIC-STRIP STRATEGIES

Robert A. Heinlein—science fiction author and inventor of the waterbed—worked in the 1940s on pressure suit technology for the US Navy; this work led almost directly to the development of space suits. But some 21 years before Armstrong and Aldrin even walked on the moon, Heinlein published a short story in which an astronaut experiences a problem with his oxygen; by looking at a small device attached to his belt, the astronaut confirms that the oxygen content in his blood has fallen.9

Such a device might not seem all that impressive to us today, particularly since, in the past 20 or 30 years, portable medical devices like this have become commonplace technologies in popular media like TV and film. Each generation of Star Trek doctors, for example, uses similar devices. But Heinlein was among the first writers to describe a device based on the idea of real-time biofeedback.

And now, wearable computers—including biofeedback devices nearly as sophisticated as Heinlein's—have clearly passed from technological speculation and science fiction into real-world use. Millions of people grew up with the comic-strip character Dick Tracy, who used a two-way wristwatch radio. Over the decades, he upgraded his wrist gadgetry to be capable of receiving a video signal. At the November 1999 Comdex, Hewlett-Packard's CEO Carly Fiorina announced to an enthusiastic Las Vegas audience that HP would be collaborating with Swatch to manufacture watches with wireless Internet connectivity.

Future Transport

Although Richard Trevithick built the first practical steam locomotive in 1804, the railroad revolution had to wait 21 years—until the opening of the Stockton-Darlington railway—before it could begin its rapid expansion across Europe and the US. But by the end of the century, railroads were nearly everywhere and having a powerful effect on people's lives. Understandably, then, the speculative writers of the late nineteenth and early twentieth centuries found the future of transportation one of their most inspiring themes.

The internal combustion engine and Henry Ford's 1909 Model T fueled the imaginative fires of authors who envisioned a day when we'd all have our own automobiles; beyond ground transportation, writers from Jules Verne forward imagined personal air transportation machines so consistently and vividly that such vehicles moved from possibility to postulate. Philip K. Dick's 1968 near-future novel Do Androids Dream of Electric Sheep?—more commonly known in its movie form as "Blade Runner"—describes a city in which many residents travel in flying cars.

Although such vehicles might seem attractive but far-fetched even now, Moller International has developed one: the M400 Skycar. Cruising comfortably at 563 kilometers per hour and using regular automobile fuel, the M400 effortlessly avoids traffic, red lights, and speeding tickets. The skycar has three on-board computers and eight engines. According to the company, the M400 is so automated you don't need a pilot's license to fly it. Flight safety, always a prime concern, is ensured by the M400's eight engines, four of which are redundant.

Moller believes their skycar is but an interim step on the path to gravity independence; flying automobiles are the next step. Although the M400 costs around $1 million, Moller expects that the cost will drop to that of a standard luxury car once mass production begins.

Meanwhile, the Millennium Jet company has invented a new kind of personal transportation, which they call the SoloTrek Exo-Skeletor Flying Vehicle (XFV). You step into the machine, strap it on, and fly. Like a helicopter, the SoloTrek XFV can provide a bird's-eye view of the ground and lets you fly through a variety of terrain at up to 128 kph.

Less complex than a helicopter, the XFV is easier to maintain and can launch and land at sites the size of a dining room table. Best of all, the XFV costs just 5 percent of the typical helicopter's $1 million price tag. Because it uses ordinary 87-octane automobile gasoline, not aviation fuel, you can refuel your XFV at the nearest service station.

The SoloTrek Exo-Skeletor Flying Vehicle. (Courtesy of Millennium Jet, http://www.solotrek.com)

The M400 Skycar. (Reprinted with permission of Moller International)

Moller International's conception of an M400 Police Skycar.

It is of course difficult—if not impossible—to establish a causal relationship between science fiction and real-world technology, unless we consider the names we give our technology, which often come directly from science fiction. We've taken "cyberspace" from the work of William Gibson, "robot" from Karel Capek, "robotics" from Isaac Asimov, hacker-created "worm programs" from John Brunner, and a term from Star Trek, "borg," which is used by today's aficionados of wearable computing devices. But beyond names, it is fairly safe to suggest that just as early science fiction popularized the notion of space travel—and made it much easier to fund a very expensive space program—science fiction also made popular the idea that our most useful tools could be both portable and intelligent.

MECHANICAL COMPUTING

Before there were electronic computers, there were mechanical calculating devices: technologies (like the abacus) designed to save people time. One of the most elaborate of such devices was Charles Babbage's unfinished calculating machine, which he called the Difference Engine; this device is often credited as being the most important nineteenth-century ancestor of the computer.
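The principle the Difference Engine mechanized—tabulating polynomial values using nothing but repeated addition—can be sketched in a few lines (a hypothetical illustration; the function and variable names are our own):

```python
def tabulate(initial_values, n):
    """Extend a polynomial table by Babbage's method of differences.

    initial_values: d + 1 consecutive values of a degree-d polynomial,
    enough that the d-th difference is constant. Returns the table
    extended to n entries using only addition, as the engine did.
    """
    # Build the difference columns from the seed values.
    diffs = [list(initial_values)]
    while len(diffs[-1]) > 1:
        row = diffs[-1]
        diffs.append([row[i + 1] - row[i] for i in range(len(row) - 1)])

    # Keep only the trailing element of each column -- the engine's
    # wheels hold exactly one number per difference order.
    wheels = [col[-1] for col in diffs]

    table = list(initial_values)
    while len(table) < n:
        # Ripple additions up from the constant difference.
        for i in range(len(wheels) - 2, -1, -1):
            wheels[i] += wheels[i + 1]
        table.append(wheels[0])
    return table

# Squares: seed with 0, 1, 4 (the second difference is the constant 2).
print(tabulate([0, 1, 4], 8))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Because every d-th difference of a degree-d polynomial is constant, the machine never needs to multiply; it just ripples additions up a column of wheels, which is why it could plausibly be built from gears.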

Science fiction authors, particularly in the 1920s and 1930s, drew conclusions from Babbage's work and created in their fiction elaborately designed androids driven by mechanical brains. It wasn't until roughly the middle of the twentieth century that the Babbage computing model gave way to electronic computing, well after the development of huge mechanical integrators in the 1930s and 1940s, most notably built at MIT under Vannevar Bush. But what if Charles Babbage had finished his work? What if mechanical computers actually brought about the computer revolution a century early?

One of the provinces of science fiction—and in this case what some would instead call speculative fiction—is alternate history, an extended indulgence in what-if scenarios. So what if Babbage had actually finished his machine? One answer to this question is The Difference Engine, a novel by William Gibson and Bruce Sterling10 in which the British Empire by 1855 controls the entire world through cybersurveillance.

In addition to portraying an entire age driven by a science that never happened, Gibson and Sterling indulge in speculation about how this change might have affected twentieth-century ideas. For example, a punch-card program proves Kurt Gödel's theorem 80 years early—that every language complex enough to include arithmetic contains statements that are completely impossible to prove or disprove. And John Keats, unable to make a living from poetry, becomes the leading Royal Society kinotropist—essentially a director of computer-generated special effects.

Even though the mechanical computing model eventually gave way to electronic computing, Babbage's ideas—coupled, no doubt, with all the fiction written about androids with mechanical brains—inspired creations like the animatronic automata that amusement parks like Disneyland use for entertainment. Disney's first fully automated show was the Enchanted Tiki Room, which opened in 1963 with more than 225 animatronic creatures. Of all the automata at Disneyland, though, perhaps most familiar is the mechanical Abraham Lincoln, which even inspired a Philip K. Dick novel.

From Fiction to Fact: A Self-Fulfilling Prophecy

Arlan Andrews Sr., Andrews Creative Enterprises

In 1984 I wrote "2020: The Chimera Engineer" for the Electronic Design News contest to predict a day in the life of the electronics engineer of the year 2020. It did not win and wasn't published until 1995, in conjunction with a talk I gave on predicting the future.1

In that story, my character used a solar-system-wide virtual reality computer-based system for real-time CAD to design a device in less than an hour for manufacturing in a Japanese orbiting factory. The story featured what is today called agile manufacturing or concurrent engineering, which is having all team members together for all crucial decisions and implementing all aspects of work concurrently. The story also featured real-time computer graphics, which I called "spatial animation" and which were done by sculpting in open space with fingertip transmitters.

Practically, that vision of "2020" made me realize that such a system could actually be invented. It inspired me to search for an appropriate VR system that could actually do what I had envisioned (even though the term hadn't been invented yet). In the early 1980s, computer graphics couldn't do the kind of real-time rendering of CAD drawings that I needed, and head-tracking wasn't mature. While in D.C. in the early 1990s, I witnessed the DoD's crude military VR animations and despaired of ever finding what I was looking for.

But in 1993, after returning from the White House to Sandia Labs, I found such a software system under development there, and in 1994 I founded a start-up company, Viga Technologies, to commercialize that system. In 1995, together with the inventors of the software system, I cofounded Viga's successor firm, Muse Technologies, which now markets Muse (multidimensional user-oriented synthetic environment). Muse is the VR system that NASA uses for conference calls among its centers, and the software system it uses to coordinate the real-time construction of the International Space Station.

My story is an example of science fiction providing a vision and a goal that results eventually in the founding of a real company to implement the dream. I left Muse Technologies in 1997, but the company continues to expand, and my science fiction vision of the early 1980s is now pretty much fulfilled.

Reference
1. A. Andrews, "2020: The Chimera Engineer," Proc. Agile Manufacturing Conf., University of New Mexico Press, 1995.


EXAGGERATED ERROR

While it is almost always easy to see the benefits of a new technology, it isn't always easy to foresee the dangers. We generally consider automotive transportation a necessity, for example, but we don't often consider that if there were no automobiles there would also be no automotive-related injuries. The same might also be said of the Space Shuttle program.

It would be fairly easy to counter these observations by suggesting that these technologies also save lives. On the surface, automotive technology enables ambulances and fire engines to bring aid much more quickly than

The Feedback Effect: Artificial Life

Cybernetics is an interdisciplinary science that deals with communication and control systems in living organisms, machines, and organizations. Norbert Wiener first applied the term in 1948 to the theory of control mechanisms. Since then, cybernetics has developed into the investigation of how information can be transformed into performance.

Cybernetics considers the systems of communication and control in living organisms and in machines analogous. In the human body, the brain and nervous system function to coordinate information, which is then used to determine a future course of action. Control mechanisms for self-correction in machines serve a similar purpose: The principle, known as feedback, is a fundamental concept of automation.
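The self-correcting loop described here can be illustrated with a minimal proportional-control simulation (the thermostat scenario, gains, and numbers are all invented for the example): the machine measures its own output, compares it with a goal, and feeds the error back as a correction.

```python
def run_feedback(setpoint, temp, gain=0.3, ambient=15.0, leak=0.1, steps=50):
    """Simulate a proportional thermostat: measure, compare, correct.

    Each step the controller senses the error (setpoint - temp) and adds
    heat proportional to it, while the room leaks heat toward ambient.
    The fed-back measurement makes the system self-correcting.
    """
    for _ in range(steps):
        error = setpoint - temp                   # measurement fed back
        temp += gain * error - leak * (temp - ambient)
    return temp

print(round(run_feedback(setpoint=21.0, temp=10.0), 2))  # settles near 19.5
```

Note the loop settles at 19.5 rather than the 21-degree goal: a residual offset is the classic weakness of purely proportional feedback, which is why practical controllers add integral and derivative terms.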

If you accept that feedback capability measures technological sophistication, you'll see that the more capable our feedback technology, the more lifelike our artificial creatures. The same has held true for science fiction. It wasn't until well after industrialization—automated factories, mass production, and the birth of consumer culture—that fiction writers moved from imagining organic creations like the golem and Frankenstein's monster to those built with mechanical brains. The more sophisticated the science, it seems, the further the imagination can go. Fiction and science fuse with particular effect in the term "cyborg"—the epitome of cybernetics—as Chris Hables Gray explains in The Cyborg Handbook.1

By the late nineteenth century, science fiction writers had already begun to imagine mechanical people. As early as 1927—with the debut of Fritz Lang's film "Metropolis"—the image of intelligent artificial people began seeping into the popular imagination. In 1940, imagination became metal when Sparko and Elektro appeared at the World's Fair—connected to external power supplies but nonetheless capable of significant movement.

Today’s artificial creatures seem morereal than ever. Mitsubishi recently debutedan artificial fish that behaves so much like

an organic one you can’t tell the differenceunless you look closely at the fish’s eyes.Mitsubishi spent more than $1 million todevelop the fully automatic mechanicalfish, using technology they hope will oneday power ships and submersibles. Plansfor a consumer version remain uncertainbecause the fish requires a special tankwith built-in sensors.

Sony’s $2,500 artificial dog Aibo (pic-tured with ball) replaces remote controlwith a CPU and sensors powerful enoughto perform complex actions. Further,Aibo’s software gives it the semblance ofemotions, instincts, and the ability to learnand mature. Speaking through musicaltones and body language, Aibo stores itscommunication data and maturity level inits memory stick, which can be transferredto another Aibo.

Aibo’s eye, a 180,000-pixel miniaturecolor video camera, can recognize objectsin the environment. Its head-mountedtouch sensor—when given a short, force-ful poke—lets Aibo know that it’s beingscolded. Pressing the sensor more gently fortwo or three seconds translates as praise.

Reference
1. C.H. Gray, ed., The Cyborg Handbook, Routledge, New York & London, 1995.

Metropolis. (© Bettmann/Corbis)

Aibo. (Courtesy of Sony Electronics)

Mitsubishi’s artificial fish. (Reuters)

Sparko and Elektro. (© Bettmann/Corbis)


earlier technologies allowed; and the space shuttle program, it could easily be argued, generates a great deal of research that will no doubt eventually be used to enhance our quality of life. Science fiction authors as early as Mary Shelley have dealt with hard trade-offs like these in their fiction, often attempting to anticipate the dangers of a new technology before it is even invented.

Echoing Shelley’s method in Frankenstein, twentieth-century science fiction authors dealing with the issue of technology running amok often exaggerate computer glitches to warn of the potentially unforeseen ills of computing technology. For instance, in Ambrose Bierce’s “Moxon’s Master,” a chess-playing robot loses its temper upon being beaten at a game and murders Moxon, the robot’s creator.11 Fredric Brown’s “Answer,” a story so famous as to have passed into modern folklore, describes the birth of the first supercomputer.12 When asked “Is there a God?” the computer answers “Yes, now there is,” and kills its creator when he goes for the plug.

Frank Herbert’s “Bu-Sab” stories13 describe ways of keeping computers from making government too efficient. In Herbert’s imagined future, computerization has accelerated the pace of government so that computers automatically pass and amend laws in a matter of minutes. The speed of government is so fast that no human can possibly understand or keep pace with the legal system. So government saboteurs deliberately introduce bugs to slow things down, even assassinating those who stand in their way.

Finally, in Fritz Leiber’s “A Bad Day for Sales,”14 a vending robot named Robbie is baffled by the start of atomic war, signaled by an airburst above New York’s Times Square. Robbie is incapable of dispensing water to the thirsty burn victims who can’t slip coins into his coin slot. In this image of technology (not running

A Reflection of Values

In 1948, Buckminster Fuller developed his synergetic-energetic system of geometry, which resulted in the geodesic dome. Such a dome comprises a network of interconnected tetrahedrons (four-sided pyramids of equilateral triangles) forming a three-way grid that distributes stress evenly to all members of an entire semispherical structure.

Fuller’s work led to extended study of economical space-spanning structures. In 1953, the Ford Motor Company commissioned Fuller to design the Ford Rotunda Dome in Dearborn, Michigan. Thereafter, Fuller designed domes housing a variety of machinery like military radar antennas, in addition to the American pavilion dome at Expo 1967 in Montreal (shown here).

The Antarctic research dome, whose frozen interior is pictured, is roughly 20 meters high and 55 meters across. Trash recycling bins and cold-resistant emergency food supplies are stacked inside the dome near sleeping modules that can accommodate up to 33 people.

Creative building ideas like Fuller’s led eventually to closed-system tests like Biosphere 2 and to inventive, automated living environments. The move toward high-tech housing continues to accelerate, and soon the computer will be as common to the home as the television or washing machine. While research shows that most people use their home computer primarily for entertainment, in the future it will likely become a central part of the home’s structure. Software systems could be connected to external utilities like telephone and electricity and could be used to monitor usage and billing. This technology, already used in large commercial buildings, promises to make real the automated home science fiction writers have predicted for more than a century.
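The kind of usage-and-billing software imagined here can be sketched in a few lines. The meter names, rates, and readings below are invented for illustration; no real utility interface is implied:

```python
# A minimal sketch of a home system that turns per-utility usage readings
# into line-item charges. Rates and readings are hypothetical examples.

RATES = {"electricity": 0.12, "telephone": 0.05}  # assumed cost per unit

def monthly_bill(readings: dict) -> dict:
    """Map each utility's metered usage to a rounded dollar charge."""
    return {utility: round(units * RATES[utility], 2)
            for utility, units in readings.items()}

bill = monthly_bill({"electricity": 650, "telephone": 120})
print(bill)  # {'electricity': 78.0, 'telephone': 6.0}
```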

Perhaps in a few years, Microsoft will oust Martha Stewart as arbiter of taste in the home. After all, when we’re shopping for towels in 2020, we’ll likely want them to match our favorite desktop themes because our PCs will be everywhere, including our bathrooms. In addition to being just the right color, we’ll also want Microsoft’s towels because they’ll be formulated to be particularly compatible with our Windows-run dryers.

Geodesic dome at the 1967 Montreal Expo. (© Lawrence A. Martin, www.GreatBuildings.com)

Biosphere 2. (AP/Wide World Photo)

Antarctic dome. (AP/Wide World Photo)


amok so much as) missing the mark, Leiber anticipates a question regarding technology that nearly everyone ever frustrated by a computing device has asked: Why doesn’t it work?

Computing technology seems to invite a different level of expectation than other technologies. The way we’ve defined computing—that it should be life-enhancing, time-saving, reliable, simple, and adaptable—doesn’t make allowance for problems like system crashes or hardware malfunctions. Problems like these tend to annoy us, especially when we can at least imagine creating devices that either repair themselves or don’t fail in the first place. From no other class of tool do we expect so much, which is likely why we feel anxiety at Robbie’s plight. Had his creators anticipated an emergency situation like nuclear war, he might have been able to help.

There are almost countless examples like these that address some of the potential problems with the technology we’re developing now and will be developing

Dangers Born of Innovation

Innovation in technology tends to transform traditional cultural systems, frequently with unexpected social consequences. At the minimum, an innovation obsolesces a preexisting technology; at most, it may obsolesce a mode of inquiry or even a way of thinking. Thus, while many point to technology as being responsible for a host of positive developments like increased lifespans and greater educational opportunities, innovation can also be a destructive process.

World War I and the ensuing Depression forced a sobering reassessment of all the technological and industrial innovation percolating at the turn of the century. The development of submarines, machine guns, battleships, and chemical warfare made increasingly clear the destructive side of technology—which golden-age science fiction seemed to promote through its fascination with future weaponry and space warfare. Beneath the surface, however, even many popular science fiction pieces challenged some of science’s most long-standing assumptions about progress.

Take, for example, James Whale’s 1931 film “Frankenstein,” which immediately became a hit despite emerging in an era in which the kind of industrialized science depicted in the film had recently produced a great deal of death and destruction. The subtitle of Mary Shelley’s 1818 novel (to which all twentieth-century iterations of the Frankenstein story owe their inspiration) is “The Modern Prometheus.” Like Prometheus, who is punished by the gods for his hubris, Victor Frankenstein creates a monster incompatible with the world and dies in the end as a result.

Disasters that parallel Frankenstein’s—in real-world science and in fiction—aren’t often the results of so-called mad scientists. Like Frankenstein’s monster, Stanley Kubrick’s HAL 9000, from the film “2001: A Space Odyssey,” exemplifies the risks inherent in even the most seemingly benign technologies.

Loosely based on an early 1950s Arthur C. Clarke story, the movie poses questions about the best kind of tool humans are capable of creating: artificial intelligence. Ironically, HAL seems the most human of all the movie’s characters, despite having no human features except an eye-like optical device that blankly stares at his interlocutors. More ironic still, one of the most human-like elements of his personality—his paranoia—leads him eventually to go insane and murder the crew.

These stories, and stories like them, focus on and anticipate what might otherwise have been difficult to imagine: grave disasters that stem directly from technological advancements. One needn’t look too far to see evidence. Nobody anticipated, for example, that the seven members of the Challenger crew would die aboard that Space Shuttle in 1986.

HAL, from “2001: A Space Odyssey.” (AP/Wide World Photo)

Frankenstein. (© Lisette LeBon/SuperStock) Space Shuttle Challenger, 1986. (AP/Wide World Photo)

in the future. You needn’t look very far to find science fiction in the first part of the century that anticipated problems like Y2K and computer viruses.

FAITH IN MACHINERY

“Faith in machinery,” wrote Matthew Arnold in Culture and Anarchy in 1869,15 “is our besetting danger.” The first coherent vision of a world transformed into a future run entirely by computers—dramatically illustrating Arnold’s argument—may have been E.M. Forster’s “The Machine Stops,” published in 1909,16 in which people basically become hive creatures in a worldwide city run by a massive machine:

No one confessed the Machine was out of hand. Year by year it was served with increased efficiency and decreased intelligence. The better a man knew his own duties upon it, the less he understood the duties of his neighbor, and in all the world there was not one who understood the monster as a whole. Those master brains had perished. They had left full directions, it is true, and their successors had each of them mastered a portion of those directions. But humanity, in its desire for comfort, had over-reached itself. It had explored the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine.

Eventually, goes Forster’s story, in this world where people are fed, clothed, and housed by the Machine, and never see each other face to face (but only through two-way video), the system collapses.

In Arthur C. Clarke’s classic novel The City and the Stars,17 published in the 1950s, we see a similar idea fleshed out. People live in an enclosed city run by a machine called the Central Computer. Unlike Forster’s more primitive technology, the Central Computer materializes everything out of its memory banks, including consumer items, human inhabitants, and the physical form of the city itself.

Here, once again, technology has done too well, and a dependent humanity is trapped in a prison of its own construction, having developed a fear of the world outside the city. And here too they enjoy only virtual experiences—lifelike fantasies induced by the computer to create real-world illusions. Clarke names the city Diaspar, as if to suggest that when humanity surrenders its will to technology, and loses itself in an artificial world of its own creation, it is in a kind of diaspora, a state of exile, from the world and from itself.

The quality of any prediction about the future—whether cautionary like Forster’s and Clarke’s or promotional like golden-age fiction’s—depends on the agenda of the person making the prediction. As such, science fiction will likely never perfectly predict the future of technology. Attempting to do so, however, is only one of several goals science fiction authors typically admit to targeting. ❖

References
1. H. Ellison, “I Have No Mouth and I Must Scream,” Galaxy, 1967.
2. F. Leiber, “The 64-Square Madhouse,” If, May 1962.
3. J.V. Post, “Cybernetic War,” Omni, May 1979, pp. 44-104.
4. R. Feynman, “There’s Plenty of Room at the Bottom,” Engineering and Science, Feb. 1960.
5. N. Stern, “The BINAC: A Controversial Milestone,” From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers, Digital Press, Bedford, Mass., 1981, pp. 116-136.
6. I. Asimov, “A Feeling of Power,” If, Feb. 1958.
7. D.H. Levy, “Portable Product Miniaturization and the Ergonomic Threshold,” MIT doctoral dissertation, Cambridge, Mass., Sept. 1997.
8. H. Fast, “The Martian Shop,” The Magazine of Fantasy & Science Fiction, Nov. 1959.
9. R.A. Heinlein, “Nothing Ever Happens on the Moon,” Boy’s Life, April-May 1949.
10. W. Gibson and B. Sterling, The Difference Engine, Bantam, New York, 1991.
11. A. Bierce, “Moxon’s Master,” The Complete Short Stories, E.J. Hopkins, ed., Doubleday, New York, 1970.
12. F. Brown, “Answer,” The Best of Fredric Brown, Ballantine, New York, 1977.
13. F. Herbert, “The Tactful Saboteur,” Galaxy, Oct. 1964.
14. F. Leiber, “A Bad Day for Sales,” Galaxy, July 1953.
15. M. Arnold, Culture and Anarchy, Yale Univ. Press, New Haven, Conn., 1994.
16. E.M. Forster, “The Machine Stops,” The Machine Stops and Other Stories, R. Mengham, ed., Trafalgar Square, London, 1998.
17. A.C. Clarke, The City and the Stars, Signet, New York, 1957.

Jonathan Vos Post is an independent consultant specializing in venture-capital strategies for high-tech startups. He has worked in the aerospace and computer industries for 20 years and has published both technical aerospace articles and science fiction. Post has a BS in mathematics and English literature from Caltech and did doctoral work in computers and information science at the University of Massachusetts, Amherst. Contact Vos Post at [email protected]; http://magicdragon.com.

Kirk L. Kroeker is associate editor at Computer magazine. Contact Kroeker at [email protected].
