
Science Studies 1/2006

Science Studies, Vol. 19(2006) No. 1, 12–34

Working Models and the Synthetic Method:

Electronic Brains as Mediators Between Neurons and Behavior

Peter Asaro

This article examines the construction of electronic brain models in the 1940s as an instance of “working models” in science. It argues that the best way to understand the scientific role of these synthetic brains is through combining aspects of the “models as mediators” approach (Morgan and Morrison, 1999) and the “synthetic method” (Cordeschi, 2002). Taken together these approaches allow a fuller understanding of how working models functioned within the brain sciences of the time. This combined approach to understanding models is applied to an investigation of two electronic brains built in the late 1940s, the Homeostat of W. Ross Ashby, and the Tortoise of W. Grey Walter. It also examines the writings of Ashby, a psychiatrist and leading proponent of the synthetic brain models, and Walter, a brain electro-physiologist, and their ideas on the pragmatic values of such models. I conclude that rather than mere toys or publicity stunts, these electronic brains are best understood by considering the roles they played as mediators between disparate theories of brain function and animal behavior, and their combined metaphorical and material power.

Keywords: models, psychology, cybernetics

It seems to me that this purely pragmatic reason for using a model is fundamental, even if it is less pretentious than some of the more “philosophical” reasons. Take for instance, the idea that the good model has a “deeper” truth – to what does this idea lead us? No electronic model of a cat’s brain can possibly be as true as that provided by the brain of another cat; yet of what use is the latter as a model? Its very closeness means that it also presents all the technical features that make the first so difficult. From here on, then, I shall take as a basis the thesis that the first virtue of a model is to be useful. (Ashby, 1972: 96).

In the first half of the 20th century a small scientific movement emerged and took an unexpected turn, beginning an intriguing journey towards a mechanistic understanding of the mind. What made this journey intriguing was that much of it was not focused on humans nor animals nor brain tissues, but on machines – electronic models of the brain. While in some sense a revival of 17th century mechanistic mental philosophy, this era marked a departure from the dominant 19th century mental philosophy of rationalism, as well as the empirical approaches of neurophysiology and psychopathology. Yet, it was these strange electronic brains which eventually proved to be central in forging fundamental connections between those different traditional approaches to the mind, and provided a sense that a comprehensive understanding of the mind, brain and behavior was within the grasp of science. A new-found openness to empirical approaches to mental philosophy following pragmatists like William James, and an increasing diversity of new empirical methods coming from psychology and neurology, led to an experimental era in the mental sciences which sought an empirical basis for understanding behavior and mental activity. The ensuing cognitive revolution in the brain sciences depended upon these electronic brain models in an essential way – indeed the digital computer was in many ways itself conceived and constructed as such an electronic brain model and eventually served as the central metaphor for the brain. This article aims to understand the nature and role of these machines as working models. It was because these electronic brains were able to function as working models that they were so successful in advancing the mechanistic view of the mind.

While the Cybernetics movement of the 1940s and 1950s is often cited as a precursor to the cognitive revolution in the late 1950s brain sciences, its role is often seen as limited to providing general inspiration and some rudimentary theory. I argue, however, that its role was far more significant, despite the fact that the particular details of its theories were largely discarded by mainstream cognitive science in the following decades. In particular, little attention has been paid until recently to the devices that the cyberneticians actually built, and how they devised a scientific methodology around the construction of their synthetic brains. Roberto Cordeschi (2002) has written a history of the development of these devices beginning in the first years of the 20th century, and has described their approach as the “synthetic method.” Though such a methodology had been used in other sciences for centuries, the weaving of constructed technologies and scientific explanation into a synthetic approach was certainly novel to the brain sciences. In his history, two devices stand out as exemplary cases of this methodology: the Homeostat and the Tortoise, designed by the British Cybernetics pioneers W. Ross Ashby and W. Grey Walter, respectively, in 1948. While Cordeschi (2002) presents much of the historical background for these devices, and recognizes the new methodology at work, he does not go so far as to provide an analysis of how the new synthetic method actually functioned, or why it succeeded. This is precisely what I aim to do in this article, and I believe the key to understanding the synthetic method lies in the epistemology of working models and their role as mediators between disciplines and theories.

In recent years researchers in the social studies of science have begun to pay a great deal of attention to scientific models. There are many views of what models are, but they are traditionally associated with abstractions – mathematical or cognitive models – and are often treated as the metaphysical glue which binds theory to empirical data (Hesse, 1966). The tradition in the philosophy of science that has paid the most attention to models has been the anti-realists. Beginning with Bas van Fraassen (1980), those seeking to challenge realist interpretations of scientific theories have turned to an examination of the role of models. While van Fraassen’s approach placed a heavy reliance on what he called “partial isomorphisms,” similar approaches were followed by Ian Hacking (1983) and Nancy Cartwright (1983) that placed less significance in isomorphism as an epistemic property, instead placing it in scientific practices more generally. For Hacking these practices are representing and intervening, while for Cartwright they involve capturing the phenomenal qualities of natural events, and the causal structure of those events when they can be found. Ronald Giere (1999) adopts a very similar account of models, but in the context of a somewhat different project – that of offering a cognitive understanding of science. Finally, the notion of an ideal or complete isomorphism, or what Paul Teller (2001) has called the “Perfect Model Model,” has been identified as underlying much of the philosophical theorizing of models. He demonstrates that the perfect model model is untenable and frustrates our attempts to get a clear picture of the role of models in science.

In this article, I will not seek to supplant the abstract view of models, which I will call “theoretical models,” but rather will seek to add to it the concept of a different kind of model which also appears in science – the “working model.” Working models differ from theoretical models in that they have a material realization, and are thus subject to manipulation and interactive experimentation. It is this dynamic material agency which sets working models apart from other closely related kinds of scientific objects, such as theoretical models, simulations and experimental instruments.

This is not to say that they lack any theoretical basis or significance; indeed their relevance to the broader field of scientific knowledge depends upon having some form of theoretical implication. However, unlike theoretical models they are not solely abstract constructs, nor merely the practical formulation of a theory for some specific application. By having a material face, working models participate in the world of material culture. In virtue of this, working models function in significantly different ways than theoretical models function. Most importantly, working models exhibit what Pickering (1995) calls material agency – in virtue of their materiality they behave in ways that are unintentional, undesired and unexpected to their designers. This is by no means a negative feature, and it is often essential to the productive contribution of such models – such as becoming oracles, providing new insights, presenting novel phenomena, and offering serendipitous knowledge. The dynamic material agency inherent in working models is prerequisite for autonomous and open-ended interactions with the environment and with experimenters – interactions that emerge temporally. It is in the dynamic flow of interactions between researchers and material models that new techniques are developed, and practical knowledge of the system and its behavior emerges. Accordingly, through these interactions scientific understanding can be extended – new explanations can be derived and tested, and new phenomena can be observed, manipulated and brought under control.

While I wish to single out working models for more careful scrutiny, I do not necessarily wish to argue for a strict distinction between them and other closely related scientific objects – such as simulations, instruments and experimental apparatus that often share many of the same properties and participate in the same material culture of science. In the universe of scientific models, working models fall somewhere between simulations and instruments, or might even be seen as a sort of hybrid of the two. Clearly defining the boundaries between these, if such a project is possible, is well beyond the scope of the present article. It is, however, useful to note the similarities to other kinds of models described in the science studies literature; in particular, discussions of simulations, scale models, and models as mediators can all help inform an understanding of working models. I will review some of these briefly before examining the case of working models in the brain sciences of the 1940s.

Models in Science Studies

From this point of view, there is no such thing as the true model of such a complex system as a cat’s brain. Consider, for instance, the following four possible models, each justifiable in its own context:

1. An exact anatomical model in wax.

2. A suitably shaped jelly that vibrates, when concussed, with just the same waves as occur in the real brain.

3. A biochemical soup that reacts biochemically just as does the cat’s brain when drugs are added.

4. A programmed computer that gives just the same responses to auditory stimuli as does the cat’s brain.

Clearly, complex systems are capable of providing a great variety of models, with no one able to claim absolute authority. (Ashby, 1972: 97)

Recently there has been a growing interest in the nature of computational simulations and their use in science. In one of the more carefully thought-out analyses, Winsberg (2003) examines the use of simulated experiments in physics. He identifies these simulations as a scientific practice that lies somewhere between traditional theorizing and traditional experimentation. In these simulations, theory is applied to virtual systems in an effort to test and extend the theory. Like experiments, simulations have what Hacking (1992) calls a “life of their own.” Hacking originally applied this notion to thought experiments, but it implies that there is an autonomous agency to the simulated experiments – they are not simply an expression or extension of the experimenters. For Winsberg, this autonomous agency expresses itself through the life-cycle of a simulated experiment – the recycling and retooling that goes into developing a useful simulation responds to the autonomous agencies expressed in the practices of simulation over time.


Working models might easily be construed as simulated experiments in this sense. That is, they too lie somewhere between theory and experiment in the traditional senses, and have a “life of their own.” However, it is important to acknowledge the material basis of these simulations. While it may be tempting to conceive of computational simulations as virtual systems devoid of materiality, this is not really true. Computers are very sophisticated material artifacts, and their calculations are not ethereal or disembodied. A vast number of circuits and electrical currents are involved in realizing a computational simulation. But the nature of computational simulations, which are really an outgrowth of pencil-and-paper mathematical models, is to realize a mathematical formalism as precisely as possible. While certain mathematical techniques can never be realized by computers, the general aim of these simulations is to at least approximate the proper formalism. As such, the materiality of these computational simulations is usually disparaged as the cause of errors, rather than as a source of any insights into the theory. Working models, by contrast, are more likely to embrace their materiality, rather than hide it. While they often start by realizing a mathematical theory, they further aim to demonstrate phenomena that may not be easily expressed by mathematical formalisms. And the “life” of the working model dwells heavily in the material realm, though it also gets expressed in the iterative redesign of models.

Recent work on scale models in architecture, especially Yaneva (2005), demonstrates that these models function in many ways similar to the working models I will describe here, and embrace their material nature. These models are used in architectural practice to develop a subjective understanding of interior and exterior space that is not possible in drawings or other 2-dimensional representations. These scale models of buildings are involved in an iterative design process that is autonomous from certain engineering concerns and constraints, but which necessarily conforms to spatial requirements. These scale models are thus a kind of working model, though a somewhat limited form due to their static character. That is to say that the working models I find most interesting are the dynamic models that exhibit various behaviors and emergent properties. While architectural scale models do this, it is only in the sense that their forms evoke a cognitive response in designers and architects, who largely depend on expertise to “read” these artifacts. They are thus a form of sophisticated image or visualization, and as such their use and significance outside of that practice is limited. Working models, due to their dynamic character, are often found outside of their scientific contexts and are often admired by non-experts, as we shall see with the Homeostat and Tortoise.

Perhaps the most insightful perspective on models in the recent science studies literature has been on their role as mediators. In the introduction to their edited collection on the subject, Morgan and Morrison (1999) articulated the notion of models as mediators:

[W]e want to outline... an account of models as autonomous agents, and to show how they function as instruments of investigation... It is precisely because models are partially independent of both theories and the world that they have this autonomous component and so can be used as instruments in the exploration in both domains. (Morgan and Morrison, 1999: 10)

The idea that models are autonomous agents is crucial to a pragmatic understanding of science and its material culture. In this statement we can see that the notion of models they have in mind is one which stands in between theory and the world – whether “the world” is construed as phenomena, data or experiment. And we see in the various cases described in the volume that models are seen as offering a space of play in a conceptual realm where theory and world are not strictly related, but their inter-relation is in play through the mediation of models. While this is a nice image, and captures important aspects of the presented cases well, it is not the only notion of autonomous agency operative in the working models of the brain sciences. There, the mediation is being done between multiple theories – the electronic brains mediated between theories of neurons and theories of behavior, not between theory and data – and the agency also lies within the material world.

Moreover, Morgan and Morrison’s notion of autonomy is more concerned with the fact that models are not determined by theory or by data – that they have freedom from both. But all this comes to in practice is that scientists are able to manipulate models freely, where theories are constrained formally and data are constrained empirically. Models then become a plastic medium (at least to the extent that the medium is not constrained formally or materially) and are subject to willful manipulation by human agents. It is difficult to find much agency in the models themselves in these accounts, though the models do become the focus of much scientific work.

I am not the first to recognize the significance of working models in the brain sciences. Not only did the scientists who devised those electronic brains engage in a critical reflection on the nature and use of their own models, but Cordeschi (2002) has written a wonderful history of the development of robots and mechanistic psychology up to and including Ashby and Walter. In his account, the synthetic method is supposed to stand against an analytic method – according to which understanding is obtained by isolating and manipulating various factors in a phenomenon until one can determine the control variables of the phenomenon and their causal effects. According to the synthetic method, scientific understanding can also be derived through constructing a complex model, and examining its properties and behaviors. There is, of course, an element of analysis involved in the determination of which aspects of the phenomena to synthesize in one’s model, but the scientific practices of experimentation are instead focused on the synthetic model, rather than the original phenomenon. The cyberneticians were also interested in the formal sense of this distinction, seeing the brain as a multi-variable system of such great complexity that analytic mathematical techniques would fail to reveal its secrets, and so it required radical new techniques. The computational models of complex systems developed by John von Neumann, and his use of Monte Carlo methods in particular, are one of the clearer examples of this (Aspray, 1990). In this article, I do not wish to stick to this strictly formal definition, however, and prefer to use “synthetic” to connote the constructed and dynamic character of models, rather than the nature of the mathematical techniques they employ.

When viewed in the context of recent science studies, the synthetic method is a methodology that derives its efficacy by enhancing the material practices that scientists can employ in studying a phenomenon. This allows science to circumvent theoretical deadlocks, as well as begin the investigation of phenomena which are not otherwise accessible experimentally.

I believe that the “synthetic method” and “models as mediators” approaches can be brought together to inform our conception of working models. It would seem that a complete explanation of the epistemic basis of the synthetic method requires an explanation of the scientific context in which such models are constructed. At least, this is the case if we wish to explain the construction of the synthetic brain models as contributions to the understanding of cognitive neuroscience in the 1940s. And in this case, we can only understand the role of these models by examining the mediations they achieved between the disjoint and incomplete theories being developed in the various disciplines studying different aspects of the mind, brain, and behavior.

It is important to recognize that there are multiple ways in which a model can serve as a mediator. While the models as mediators literature focuses primarily on the role of models as mediating between theory and data, there are at least two other senses of mediation which I want to focus on in this article. The first is the notion that a model can mediate between two theories. The theories involved could be very similar, or could refer to different entities, different types of entities, or the same phenomena at different levels of analysis. In each case it is sometimes possible to build a model which includes key aspects of those theories, and shows how they might work in conjunction, reinforce one another, or even suggest that a theoretical “synthesis” of the two is possible. The second is that models, including the simulations and scale models just described and working models in particular, have the ability to “stand in” for natural systems and phenomena during experimental investigations. This ability is an integral part of the synthetic method that Cordeschi describes. By building the synthetic brains according to certain principles of construction, these models were convincingly argued to exhibit certain theories of mind and mechanisms of behavior. By mediating between theories and simultaneously acting as a stand-in, these models provided an innovative approach to the sciences of the mind.

The Tortoise and the Homeostat

In order to study this [feedback] abstraction more easily, models have been built containing only two elements connected with two receptors, one for light and one for touch, and two effectors giving progress and rotation, with various possibilities of interconnection. This device is in the nature of a toy rather than a tool and reminds one of the speculations of Craik and the homeostat of Ashby. (Walter, 1953: 3).


W. Ross Ashby began designing the Homeostat in 1946, and gave his first public demonstration at a meeting of the Electroencephalography (EEG) Society in 1948. His design actually began with a set of mathematical functions which exhibited a property Ashby called ultrastability (similar to what is now called convergence), meaning that iterated computations would eventually result in stable, unchanging values or a repeating sequence. He then set about to devise a mechanical or electronic device that would exhibit this behavior. After various attempts he arrived at an electro-mechanical design that included four Homeostat units, each receiving an input current from each of the other three units and sending an output current to each of the other three units (Ashby, 1952). The units themselves were built out of war surplus parts, mainly old radar units. Each unit consisted of a black box with four rows of switches, and a water trough on top with a movable needle resting in the water. The state of the machine is displayed by this needle; the machine is “stable” when the needle is in the center of the trough, and “unstable” when the needle moves to one end or the other, or moves back and forth. The currents from the other units are routed through a resistor, selected from a fixed set of resistors by a dial, to the water trough, where the needle responds by moving in relation to the current gradient created in the trough.

The behavior of each unit depends upon its specific configuration of switches and dials. In the mundane configuration, each unit merely displays the summation of currents by the position of its needle, and the needle is only stable, i.e. in the middle of the trough, by coincidence. However, when the unit is switched so that, instead of a simple resistor, its Uniselector circuit is engaged, it becomes ultrastable. In this configuration, when the needle is not in the center of the trough, a circuit is closed which charges up a coil (capacitor) that, when it passes a certain threshold, discharges to the Uniselector, causing it to change to a new resistance (chosen from a pre-arranged randomized set of resistance values). It is thus a sort of random resistor and instantiates a trial-and-error search to find a resistance that stabilizes the needle. The result is that the unit will continue changing its internal organization of resistance values until it finds an equilibrium with the needle in the middle of the trough.

Because the four units are interconnected, they must each find an equilibrium in the environment of inputs supplied by the other units; in other words, they must all reach an equilibrium at the same time. If a particular state is unstable, a slight disturbance – like pushing a needle out of place – will throw the needles out of equilibrium and the unstable units will continue searching until they find another equilibrium state. Any number of additional disturbances can be introduced, including switching a resistance value or an input’s polarity, holding a needle in place, or even tying two needles together or to a rod that forces them to move in unison. These sorts of interventions provide a basis for systematic experimentation with the device that we will consider shortly. Thus, by a mechanism of searching through possible reorganizations by random trial and error, the Homeostat will eventually find its desired equilibrium.
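The search mechanism described above can be sketched in a few lines of code. This is a minimal illustrative simulation, not Ashby's circuit: the coefficient set, stability band, and update rule are assumptions standing in for the Uniselector's preset resistances and the electro-mechanical dynamics. Each unit's "needle" moves toward a weighted sum of the others' needles, and any unit whose needle leaves the stable band draws new random coefficients, until all four settle at once.

```python
import random

N_UNITS = 4
STABLE_BAND = 0.5                      # needle counts as centred if |state| < this
COEFFS = [-1.0, -0.5, 0.0, 0.5, 1.0]   # stand-in for the Uniselector's preset values

def run_homeostat(steps=10000, seed=0):
    rng = random.Random(seed)
    # weights[i][j]: influence of unit j's needle on unit i's needle
    weights = [[rng.choice(COEFFS) for _ in range(N_UNITS)]
               for _ in range(N_UNITS)]
    state = [rng.uniform(-1, 1) for _ in range(N_UNITS)]
    for t in range(steps):
        # each needle moves toward the weighted sum of the others' positions,
        # bounded by the ends of its trough
        state = [max(-1.0, min(1.0,
                     0.5 * sum(weights[i][j] * state[j]
                               for j in range(N_UNITS) if j != i)))
                 for i in range(N_UNITS)]
        unstable = [i for i in range(N_UNITS) if abs(state[i]) >= STABLE_BAND]
        if not unstable:
            return t, state            # all four needles centred simultaneously
        # trial and error: only the unstable units reorganize themselves
        for i in unstable:
            weights[i] = [rng.choice(COEFFS) for _ in range(N_UNITS)]
    return None, state
```

Run with different seeds and the system settles after a variable number of reorganizations, which is the point of the design: equilibrium is found by search, not built in.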

W. Grey Walter’s Tortoise was a different animal than the Homeostat. In fact, Walter was fond of describing his Tortoise as an electronic animal, while the Homeostat was described as an electronic vegetable. Still, the devices were built in the same year, 1948, and exhibited a similar regard for the importance of feedback. The Tortoise was really a simple autonomous robot, based on a 3-wheeled chassis built from war surplus radar parts and the mechanical gears from an old gas meter. The two rear wheels spun freely, while the front wheel was driven by an electric motor, and could rotate back and forth or all the way through 360° by a second motor. Normally, both motors moved at a constant speed and direction, and the resulting motion of the robot was to travel in an elliptical sort of path, at least until it was disturbed. The two motors were subject to the combined control of two sensors. The first sensor was a simple contact switch between the tortoise-like shell of the robot and its base, such that any collision or contact with an object or obstacle would push the shell against the base and close the circuit. The Tortoise would react to this as a collision by reversing its drive motor. The other sensor was a photocell set atop the front wheel assembly, such that it always pointed in the same direction as the front wheel as it rotated about. As such, it acted as a scanning device, and reacted to bright lights. When the photocell received enough light, it would temporarily stop the motor that caused the turning (but not the drive motor), resulting in the robot driving in a straight path towards the bright light.

Working Models and the Synthetic Method

Roberto Cordeschi’s (2002) brilliant history of what he calls the discovery of the artificial tells the story of the building of behavioral models before, during and after the cybernetic era of the 1940s. Central to this story is the synthetic method and how it aided the restoration of psychology as a legitimate science through the construction of electronic, robotic and computer simulations of the mind and brain. But how were these synthetic brain models able to do this? Cordeschi’s account of the synthetic method draws upon the concept of a working model, but does little to define or explain it. He does provide one key to understanding working models with the notion of a model’s principles of construction. While he has described the historical context and development of these models, I wish to consider what those developments can tell us about the epistemic nature and scientific use of models. I thus want to start where Cordeschi leaves off.

Cordeschi frequently stresses the importance of working models over other types of models. Unfortunately, he offers only brief comments on just what makes them so desirable or effective. He does, however, contrast working models with analogies:

Mechanical analogies for nervous functions, however, occupy a secondary position in the present book, compared to the, albeit naive, working models of such functions which were designed or physically realized. Those analogies are discussed in some sections of the present book because examining them allows one to clarify the context in which attempts to build those working models were made (Cordeschi, 2002: xvi).

Cordeschi thus finds the physicality of working models to be an empirically powerful force. Implicit in this is that actually realizing a model that “works” imposes some significant constraints on free-wheeling theorization. An important aspect of the constraints on building a model is that it must be designed, and the technical achievement of getting a physical model of this kind to work implies that the design is successful, along with the principles that underlie the design. We should also note in the above passage and throughout Cordeschi’s book that he does not consider any hybrids which might lie somewhere in between working physical models and descriptive analogies. However, this is precisely where we might wish to place many computer programs and simulations – part material and part formal.

Cordeschi also sees that working models, as simulated experiments, can be used to test hypotheses:

Working models or functioning artifacts, rather than scanty analogies, are the core of the discovery of the artificial, and in two important ways. First only such models, and not the analogies, can be viewed as tools for testing hypotheses on organism behavior. The way to establish the possibility that complex forms of behavior are not necessarily peculiar to living organisms is “to realize [this possibility] by actual trial,” as Hull wrote in 1931 about his own models of learning... The second way in which working models, rather than analogies, are at the heart of the discovery of the artificial [is that] behavioral models can be tested (Cordeschi, 2002: xvi).

Though it is not clear to me what the distinction between the two forms of testing is meant to be, there do seem to be two different kinds of demonstration going on. The first point that he makes is really that working models were a demonstrative “sufficiency” argument in the debates over mechanistic approaches to biology going on at the time. This kind of argument for models seeks to demonstrate that mechanisms of certain types are sufficient for certain complex behaviors. By demonstrating that an inorganic mechanism is capable of something assumed to be a strictly organic or biological function, one can falsify an argument to the contrary. This is a fairly weak form of argument, but was crucial at the time to defend the development of an empirical psychology and to take it in a mechanistic direction.

The second point in Cordeschi’s discussion of working models is about scientific methodology – that they can be used to actually test theories. This is different from the ways in which theoretical models have been argued to serve in the confirmation of theories, however. Theoretical models are typically argued to serve as bridges between theoretical entities and empirical elements (data or observations). They thus offer explanations of experimental results by relating theoretical explanations to real phenomena. This is the traditional form of mediation performed by models – that between theory and data – and it purposely obscures the work done in configuring material reality and setting up a controlled experiment in order to make the demonstration effective. Working models exercise their agency from the material domain and generate real phenomena which are also in some sense simulations. The metaphysics of simulations is rather complicated, especially insofar as one is tempted to argue that there is an easy distinction to be made between “the real” and “the simulation” which has metaphysical import. Of course, a working model (unlike theoretical models) actually does something and in virtue of that is part of a concrete dynamic material reality (where theoretical models remain in the realm of abstract static concepts). So synthetic brains may not be real brains, but they are real electronic devices. In virtue of this they generate real phenomena which demand their own explanation. That is to say, the machines exhibit real behaviors, even if they are not real brains.

There are thus two senses of “demonstration” that can operate in working models. The first sense is that of a “demonstration proof,” in which the working model is brought directly to bear on theoretical debates. The second sense is the communicative and pedagogical sense of a “vivid demonstration.” While the basic notion of demonstration is the same, these kinds of models can continue to be effective long after the scientific debates are settled. In this sense, science museums continue to educate the public with displays that feature demonstrations that were at one time crucial to a theoretical debate, and students still replicate experiments, such as Galileo’s experiments with acceleration and inclined planes, as a pedagogical device. In fact, the cyberneticians used many of their electronic devices in the classroom as well as the laboratory. We now turn to the cyberneticians’ own analysis of their use of models, beginning with their demonstrative abilities.

Working Models as Demonstrations

I now want to explore the account of models and simulations offered by the cyberneticians themselves, and in particular the one offered by W. Ross Ashby. This will bring us back to the specific virtues of working models that Cordeschi is concerned with, as well as the significance of the principles of construction for such models. Ashby gave a great deal of serious thought to how biologically-inspired machines could serve scientific discourse. One of the key values of models, especially working machines, he arrived at was the vivid communication of specific scientific principles:

Simulation for vividness. Simulation may be employed to emphasize and clarify some concept. Statements about machines in the abstract tend to be thin, unconvincing, and not provocative of further thinking. A model that shows a point vividly not only carries conviction, but stimulates the watcher into seeing all sorts of further consequences and developments. No worker in these subjects should deprive himself of the very strong stimulus of seeing a good model carry out some of these activities “before his very eyes.” (Ashby, 1962: 461).

As the use of the term “vividness” conveys, concepts are here considered to be like images, either literally or analogically. Much can be said about the metaphorical and analogical strength of theories, and it might be debated whether this compels acceptance of the theory, or aids in its application to new problems, or in fact does only superficial work. Vividness points again to the cognitive component of science, and to the fact that ease of comprehension and understanding is important for the proliferation of knowledge.


The analogical strength of a model is somewhat disparaged by Cordeschi, in favor of the pragmatics of working models, but the value of a good analogy cannot be completely denied. Metaphors, analogies and images have a relational component, the similarity relations that are used to bring out aspects of real phenomena. They also serve as a sort of connective tissue that draws together different ideas and shows their relation – analogy is itself a form of mediation. While much of the epistemic work lies outside the specific relation, the similarity relation serves an important role in organizing various practices – instantiating, demonstrating, measuring, verifying, extending, etc. A central metaphor or analogy, like isomorphism, can be the goal towards which those epistemic practices aim, and either achieve or fail to achieve. Models in this sense can be the compelling exemplars described by Kuhn (1962) around which scientific paradigms are organized. These are simultaneously demonstrations of accepted explanations of some controlled phenomena, and the basis of theoretical and experimental extension through puzzle-solving. They can also be productive metaphors in the way that 18th century anatomical drawing furthered medical understanding through images and their metaphorical relations, as discussed by Barbara Stafford (1991).

Ashby’s notion of simulation for vividness is closely related to the notion of demonstration in science. Demonstration devices go back to the beginnings of modern science. Schaffer (1994) has written on the use of mechanical demonstrations of Newtonian physics as being crucial to their adoption in 17th century England. While much of his history focuses on the academic political matrix in which Newtonian physics sought to establish itself, it was the engineers who built the demonstration devices, and the showman-like scientists who demonstrated them, that thrust Newton’s mechanics onto the English scientific community of the time. Indeed the Homeostat and Tortoise were often used by their builders to promote the concepts and promise of cybernetics to the public at large, and to fellow scientists. As a means to popularize the emerging science of cybernetics, the Tortoises were hugely successful, appearing at the Festival of Britain, on BBC television, and in several Life magazine articles from 1950-1952.

While demonstrations often aim to convince skeptical members of the scientific community of some theoretical understanding, there is also a strong pedagogical function inherent in models when they are utilized in the education of young scientists and engineers. Their ability to convey ideas vividly applies not only to other members of the scientific community, but also to those being introduced to a field as students, and the wider public. By having students observe, or even better, build such devices, they come to appreciate the power of simple feedback mechanisms. Thus, the communicative power of working models was to be used not merely to convey new ideas to the existing scientific community, but to train and inspire a new generation of scientific researchers through the transfer of scientific and engineering practices. Indeed, as Pickering (forthcoming) points out, in the later part of his career Ashby saw his role as the elder statesman of cybernetics to be that of education. Toward that end he built several devices intended purely to demonstrate cybernetic principles in the classroom. The pedagogical advantage of a working model, as opposed to a theoretical model, is that students actually learn laboratory techniques by building such a model themselves, and can interact with the finished model in a multi-modal way. That is, while some students learn best through text and some through equations, some may learn best through visual forms, others through tactile forms. Hands-on experience conveys practical and often tacit knowledge in ways that formal expressions of knowledge can rarely achieve. Moreover, the “hands-on” experience develops laboratory skills and intuitions that are crucial to scientific practice.

The Homeostat was, as its name implied, intended to demonstrate the biological principle of homeostasis. According to this principle, an organism would adjust its condition through whatever means were accessible in order to maintain certain critical conditions within the organism. If an animal was cold, it would seek out a warmer place; if its blood pressure got too low, it might increase its heart rate; and so on. Ashby’s insight was that a general purpose learning mechanism could be derived from this principle. If an organism or machine were allowed to search randomly through its possible actions in the world, it could find for itself ways to maintain the critical conditions of its internal environment. This would work even when the system was disturbed by unexpected outside influences, including malicious experimenters. The bottom line is that a mechanism with a feedback-induced random search could exhibit the same principle of homeostasis that living creatures did. It was thus an adaptive system, and demonstrated Ashby’s extension of it to the concept of ultrastability.
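The ultrastable scheme just described can be sketched in a few lines of Python. Everything here (the four linear units, the weight ranges, the step size, and the reset rule) is an illustrative reconstruction of the principle, not Ashby's actual circuit:

```python
import random

def homeostat(steps=20000, limit=1.0, seed=0):
    """Toy ultrastable system in the spirit of Ashby's Homeostat.

    Four coupled units; each unit's level is driven by a weighted sum
    of all the units (a linear feedback network).  Whenever a unit's
    "essential variable" leaves its safe range, a uniselector-style
    switch re-randomizes that unit's incoming weights: a blind,
    step-wise search that continues until the network happens upon a
    stable configuration.
    """
    rng = random.Random(seed)
    n = 4
    w = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    x = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    resets = 0
    for _ in range(steps):
        # linear feedback dynamics, small time step
        x = [xi + 0.1 * sum(w[i][j] * x[j] for j in range(n))
             for i, xi in enumerate(x)]
        for i in range(n):
            if abs(x[i]) > limit:              # essential variable out of range
                w[i] = [rng.uniform(-1, 1) for _ in range(n)]  # uniselector flips
                x[i] = rng.uniform(-0.1, 0.1)
                resets += 1
    stable = all(abs(xi) <= limit for xi in x)
    return stable, resets
```

The point of the sketch is that nothing in it "knows" which weight configurations are stable; stability is simply the condition under which the random re-selection stops happening.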

Indeed, in reading Design for a Brain (Ashby, 1952) it is often difficult to distinguish Ashby’s empirical arguments from his pedagogical illustrations. He moves so rapidly from arguments in favor of the homeostatic principle as an explanation of certain aspects of the brain’s behavior, to analogies between the behavior of the Homeostat and various brain phenomena, to arguments that the Homeostat embodies the principle of homeostasis, that these all seem to hang together. It is difficult to say whether this style of rhetoric is a contribution to the synthetic method or a product of it, as the distinctions between brain, theory and working model all begin to blur into systems based on a shared set of underlying principles.

Unlike Ashby’s Homeostat, which was built on an explicit set of formal equations, W. Grey Walter’s robotic Tortoises were built upon a rather loose set of principles of construction, and his work on these models correspondingly focused more on the phenomena that they generated than on the demonstration of any specific formal principles. The general principle that was demonstrated was simply that a small number of feedback loops – two, actually – could generate a large number of behaviors through their interactions with one another and the environment. Walter frequently argued that much of their value lay in the fact that they were a demonstration proof that simple mechanisms could exhibit complex biological phenomena. But it was the character and range of behaviors that he focused on.
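The contrast can be made concrete with a minimal sketch of Walter's two-loop scheme. The thresholds and speeds below are invented for illustration; only the structure (a light-seeking loop, and an obstacle reflex that overrides it) follows Walter's description:

```python
def tortoise_step(light, touching):
    """One control step for a Walter-style tortoise (illustrative only).

    Loop 1 (phototropism): steer toward moderate light, away from glare;
    in darkness, keep the steering column scanning, which produces
    exploratory wandering.
    Loop 2 (obstacle reflex): on touch, override the light loop and
    turn-and-push until free.

    Returns (drive_speed, steer_rate).
    """
    if touching:          # obstacle loop dominates everything else
        return 0.5, 1.0   # push gently while swinging the steering
    if light < 0.1:       # darkness: exploratory scanning
        return 1.0, 1.0   # full drive, steering keeps rotating
    if light < 0.7:       # moderate light: positive tropism
        return 1.0, 0.0   # lock the steering and head for the light
    return 0.5, 1.0       # dazzling light: negative tropism, veer away
```

Even in this caricature, the behavioral variety comes not from the controller but from its interaction with the environment: the same four rules yield wandering, approach, retreat, and obstacle negotiation depending on what the world presents.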

In particular, he argued that the various behaviors of his robots were most easily described using biological and psychological terminology. The biological principles that he claimed could be found in these various behaviors included: parsimony (they had simple circuits, yet exhibited numerous and apparently complex behaviors), speculation (they explored the world autonomously), positive and negative tropisms (they were variously attracted and repulsed by lights of different intensities, depending on their internal states), discernment (the machine could sublimate long-term goals in order to achieve short-term goals like avoiding obstacles), optima (rather than sit motionless between two equally attractive stimuli, like Buridan’s ass, they would automatically seek out one stimulus, and then perhaps the other), self-recognition (by sensing and reacting to their own light in a mirror), mutual recognition (by sensing the light of another Tortoise), and internal stability (by returning to their hutch to recharge when their batteries ran low) (Walter, 1953). Rhodri Hayward (2001) has also shown how Walter used the Tortoises to argue for the fundamental simplicity underlying all human behavior, especially emotion and love.

The Homeostat was a demonstration proof of the power of the homeostatic principle to explain brain phenomena. The principle of homeostasis was already well known in biology, and feedback controllers were widely studied by electrical and mechanical engineers. What was new in the Homeostat was the notion that a set of four interconnected feedback controllers could not only be stable, but would always tend towards stability if each unit were allowed the additional capability of randomly changing its relation to the others.

Marvin Minsky (1961) would later credit Ashby with originating the concept of random search that was exploited in much of AI research. His device coupled two key theoretical concepts, behavioral adaptation and trial-and-error search, and thereby served as a mediator between psychological and computational theories of the brain and behavior. While not explaining any new data, or relating data to theory, this model actually mediated between theories in two disciplines, and at two levels of abstraction. In doing this, it provided a conceptual bridge between the disciplines.

But did this require a working model? From a purely theoretical perspective, the Homeostat proved nothing that could not be shown mathematically on paper. There have subsequently been formal proofs of the convergence (or lack of convergence) of a great many search algorithms and random search strategies. In fact, Ashby developed the mathematical equations defining the Homeostat before he began designing the device to realize them, and spent several years attempting various schemes for constructing a device which would both conform to these equations and provide a vivid demonstration of its underlying principles. But unlike a purely mathematical demonstration, the Homeostat made it vividly explicit just how such a random search could be coupled to behavior in an explanatory way.

And so the initial interest and most striking aspect of these working models of the brain was their very existence and their demonstration of the fundamental principles of mechanistic psychology. Moreover, the real power of these models lay in their ability to be subjected to experimentation. This was not only more efficient than mathematical analysis at a time when the first digital computers were still under construction. Eventually, the increasing power, ease of use and accessibility of computers would make digital simulations far more attractive than the construction of analog machines for many purposes. But it was the electronic brains that paved the way to computational simulations.

Working Models as Experiments

The outlook for an experimental model brightened at once with the problem reduced to the behaviour of two or three elements. Instead of dreaming about an impossible ‘monster,’ some elementary experience of the actual working of two or three brain units might be gained by constructing a working model in those very limited but attainable proportions. (Walter, 1953: 109).

Now that we have considered some of the demonstrative functions of models, it is time to consider the more traditional roles of models in scientific explanation and verification, and what working models can offer here that theoretical models cannot. It is these aspects which Cordeschi suggests are the specific advantage of working models over “mere metaphors.” Ashby characterized this function in terms of deduction and exploration:

Simulation for Deduction and Exploration. Perhaps the most compelling reason for making models, whether in hardware or by computation, is that in this way the actual performance of a proposed mechanism can be established beyond dispute... Note, for instance, the idea that the molar functioning of the nervous system might be explained if every passage of a nervous impulse across a synapse left it increasingly ready to transmit a subsequent impulse. Such a property at the synapse must impose many striking properties on the organism’s behavior as a whole, yet for fifty years no opinion could be given on its validity, for no one could deduce how such a system would behave if the process went on for a long time; the terminal behavior of such a system could only be guessed. (Ashby, 1962: 463).

There are several ideas packed into this passage that bear commenting upon. The first is that it points to how the availability of a specific set of material practices influences what can be tested, and thus what can become knowledge. Even though some of the mathematical techniques for testing various hypotheses might have been available, the work required to carry them out was often too great to be realized. The more important aspect is that working models support the empirical testing of certain hypotheses. Even though the applicability of a hypothesis to the brain might be tenuous, it is still possible to make progress in the details of theory using such models.
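Ashby's synaptic example is exactly the kind of hypothesis a working model can settle by actual trial. The sketch below, a hypothetical miniature network with invented numbers throughout, simulates the use-strengthens-the-synapse rule and simply reads off the terminal behavior that, on paper, could only be guessed:

```python
import random

def facilitation_run(steps=1000, n=5, threshold=1.0, increment=0.01, seed=0):
    """Simulate the synaptic hypothesis by 'actual trial'.

    A tiny all-to-all network in which every impulse that crosses a
    synapse makes that synapse slightly more ready to transmit the
    next one.  Returns how many units are still firing at the end,
    i.e. the terminal behavior of the system.
    """
    rng = random.Random(seed)
    w = [[0.5] * n for _ in range(n)]                # synaptic readiness
    active = [rng.random() < 0.5 for _ in range(n)]  # random initial firing
    for _ in range(steps):
        nxt = []
        for i in range(n):
            drive = sum(w[i][j] for j in range(n) if active[j])
            fires = drive >= threshold
            if fires:                    # use strengthens the active synapses
                for j in range(n):
                    if active[j]:
                        w[i][j] += increment
            nxt.append(fires)
        active = nxt
    return sum(active)
```

Running this toy model exhibits an all-or-none terminal regime: activity either dies out or spreads until every unit fires; precisely the kind of long-run consequence of the hypothesis that could not be deduced by inspection.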

As one popularizer of cybernetics put it:

When we have thus constructed a model which seems to copy reality, we may hope to extract from it by calculation certain implications that can be factually verified. If the verification proves satisfactory, we are entitled to claim to have got nearer to reality and perhaps sometimes to have explained it. (de Latil, 1957: 225).

To claim that this constitutes explanation might be stretching things, but there are certainly characteristics of a working model that enliven speculation into both the structure and organization of the natural phenomena, and into possible extensions to the working model. I believe that it is the ability to move back and forth between deductions from theories and extensions to models that makes the synthetic method so productive. Though it is in some sense dependent on antecedent and subsequent analytic methods and theorizing, it greatly enhances these by offering new ideas and directions for extension when theorizing and analysis are at a loss. This also helps to explain why it is often difficult to separate the scientific from the technological advances made by the synthetic method – the two become intertwined as technoscience.

There are two senses in which a model can be extended. The first sense does not necessarily involve any changes to the model itself:

Once the model has been made, the work of the model-maker has reached a temporary completeness, but usually he then immediately wishes to see whether the model’s range of application may be extended. The process of extension... will be subject to just the same postulate as the other processes of selection; for, of all possible ways of extending, the model-maker naturally wants to select those that have some special property of relevance. Thus, a model of the brain in gelatin, that vibrates just like the brain under concussion, is hardly likely to be worth extension in the biochemical direction. From this point of view the process of extension is essentially an exploration. So far as the worker does not know the validity of the extension, to that degree must he explore without guidance, i.e., “at random.” (Ashby, 1972: 110).

Extension can consist of two different processes: validation (or verification) and exploration. In the first sense, a model is extended by showing that it models a wider range of phenomena than first believed, or it is modified to achieve this effect. In the second sense, a model can be explored to find new kinds of phenomena in it, or built upon and altered to produce new or clearer phenomena. In exploration, one does not seek out some specific correspondence between model and world, but rather treats the model as itself the object of investigation and extension.

If a model actually generates a genuine behavior in the world, that is itself a phenomenon to be explained. It is not, in this sense, a prediction about what might happen; it is itself a happening. Moreover, as with experimental apparatus, phenomena can be played with, changed slightly without any particular expectations as to what new phenomena may result, just to see what might happen. In this sense it is not predictive, nor does it necessarily attempt to fit the data obtained from some other phenomenon. It does not tell you what you might find in the world; it is what you find. There is a connotation to the use of the word “experiment” which means open-ended exploration of this sort – a “playing around” or fiddling with things just to see what might happen (Pickering, 1995). A material model like the Homeostat can be played with by students and novices, as well as experts, to reveal all sorts of interesting phenomena. Hypotheses about its behavior can also be rigorously tested, with no change in the nature of the model.

Ashby performed a number of experiments on the Homeostat, both to prove that it could do what he had hypothesized it could, and to discover what else it was capable of. In terms of demonstrating that it was a valid model of the nervous system, Ashby replicated on his Homeostat a number of experiments that had previously been performed on living systems. Among these was an experiment by Sperry in which the muscles in the arm of a monkey had been surgically cut, swapped around and reconnected such that they would now move the arm in the opposite direction than they previously had. In the experiment, the monkey relearns quite quickly how to control its arm in this new configuration. To replicate the experiment, Ashby used a switch on the front of a Homeostat unit which reversed the current passing through it. He then noted in the trace of the needle’s movement the exact opposite reaction to an incoming current as before, resulting in instability. However, after a short search by the Uniselector, the unit was able to stabilize once again (Ashby, 1952: 105-7). This sort of experiment was of the most basic kind.

More complicated experiments were also performed. These included conditioning experiments in which the needle was manually manipulated by the experimenter as a form of “punishment” (Ashby, 1952: 114). Ashby also describes an experiment in which he changes the rule governing the punishment systematically, sometimes enforcing one rule, and sometimes the other. From this he observes that the device is able to adapt to two different environments simultaneously (Ashby, 1952: 115). He also made new discoveries of the machine’s capabilities through these explorations. These included ways of materially manipulating the machine that were made possible by its design, but were not intended features of the design. Most significant among these was the possibility of tying together with string, or binding with a stiff rod, the needles of two of the units. The result was to force them to find an equilibrium under the strange constraint of their material entanglement, as well as their electronic couplings (Ashby, 1952: 117). The Homeostat was quite capable of finding such an equilibrium. The other aspect of the material device that was not foreseeable on paper was its particular temporal dimensions – how quickly the Uniselector flipped, how quickly it found a new equilibrium, etc. Ashby also experimented with the delay in timing between the unit’s behavior and the administration of punishments in conditioned training (Ashby, 1952: 120). These aspects of the machine were certainly incidental to its conception, but quite significant to its real operations, and they provided the basis for further exploration of the Homeostat and its behavioral repertoires.

Ashby went on to develop a much more sophisticated synthetic brain, the Dynamic and Multistable System, or DAMS. While he had great hopes for DAMS, the technical design, cost, and above all the complexity of behavior that the device exhibited would frustrate Ashby for years. As Pickering (forthcoming) has recounted in his analysis of the ill-fated machine, there was also a clearly developmental aspect to his work on DAMS. Beginning with his great expectations for the device, Ashby’s notebooks reveal the various causes of his frustration: from the problem of having to devise a clearer notion of “essential variables” than had sufficed for the Homeostat, to the technical difficulties of building such a large and complicated device, to the costs and his lack of research support funds, to Ashby’s inability to comprehend the patterns of behavior exhibited by the complex device. Through these emerging developments, his understanding of the brain and of what he was trying to do with DAMS evolved dynamically over time. It is clear that he was not certain how the device would behave specifically, but only sought a general sort of performance; yet what he got from it was still frustratingly complex, so much so that after nearly seven years of work he abandoned it, and largely edited it out of the literature.

Walter’s Tortoises were also subjected to a number of experiments and explorations, and yielded unexpected results. Primarily, the unexpected results were due to interactions between the robots based on their lights and light sensors:

Some of these patterns of performance were calculable, though only as types of behaviour, in advance, some were quite unforeseen. The faculties of self-recognition and mutual recognition were obtained accidentally, since the pilot-light was inserted originally simply to indicate when the steering-servo was in operation. . . . The important feature of the effect is the establishment of a feedback loop in which the environment is a component. (Walter, 1953: 117)

What seems most clear from Walter’s various discussions of the Tortoises is that he believed they were physiological models of biological phenomena. Of course, he was not claiming that the mechanisms which produced the behaviors in animals were identical to, or even isomorphic to, the circuits of the Tortoises, but just that those behaviors could be produced by simple circuits which employed feedback and modulation mechanisms similar to neural circuits – i.e., that they shared some relevant functional properties. An important scientific insight lay in the fact that very simple circuits could produce such complex and interesting phenomena provided that they instantiated these functions:

The electronic “tortoise” may appropriately be considered as illustrating the use of models. But models of what? In the first place they must be thought of as illustrating the simplicity of construction of the cerebral mechanisms, rather than any simplicity in their organization... In short: reality may indeed be more complex than the model, but it is legitimate to consider that it may be equally simple. In a matter of which we know so little such a hypothesis is not without importance. (de Latil, 1957: 227-8).

The Tortoises were thus used as a basis to argue against those who hypothesized that neural circuits were necessarily or interminably “complex” because the behavior of organisms was complex.

Walter stopped short, however, of saying that these qualities were purely in the eye of the beholder. Instead, the phenomena of life and mind were posited to subsist in the negative feedback loops which held the organism in various stable configurations even as it moved through a dynamic environment. For these biologically-inspired and inspiring machines to become legitimate scientific models, they would have to do more than appear lifelike and evoke curiosity and wonder.

Despite the fact that they neither embodied a detailed theory nor explained any specific set of data, these synthetic brain models produced useful and illuminating contributions to brain science. What becomes clear from looking at the history of these electronic models of the brain in the 1940s are some of the ways in which they served as mediators between relatively vague theories and informal, often impressionistic, observational data, and ultimately served as a stepping stone to more formal theories and rigorous observations. In short, they did this by providing models around which new theorization and observation could be organized. As a result, these rather simplistic models served as mediators between the initially vague and unrelated theories and the more concrete and coherent theories of brain organization and behavior which followed.

The Tortoises and Homeostat succeeded in doing this because of their ability to demonstrate and communicate scientific knowledge in practical and accessible ways. These are pragmatic elements of epistemology, which is not simply a matter of truth and justification, but also depends upon pragmatic elements of the retention, transmission, and utilization of knowledge. This is what makes models so crucial to the epistemic efficacy of scientific knowledge. Such issues do not arise in traditional philosophy of science, which considers only the individual mind seeking justified theories, but they do become crucial once we take a view of knowledge production as a social practice, in which multiple agents must generate knowledge together and transmit it to others. In these processes, issues of communication are not isolated from issues of knowledge, but are instead crucial to it. It is in this regard that the ability of a model to store and transmit knowledge – as image, idea and practices – becomes relevant. Ashby and Walter were both self-consciously aware of this fact.

It should also be clear that these examples of working models employ the synthetic method, and succeed scientifically in virtue of being mediators of a particular sort. Both the Homeostat and the Tortoises are examples par excellence of the synthetic method – machines built according to specific principles of construction to offer a proof by example that a mechanistic approach to the brain could describe mechanisms that produce plausible behaviors. In the case of the Homeostat, it was shown that merely seeking equilibrium points and stability through random trial-and-error searches could lead to adaptive behavior. Whether a “Uniselector” was exactly the same mechanism that real creatures used to adapt to their environments was not at issue, because of the level of abstraction at which the model operated. But it succeeded in forging a powerful conceptual linkage between learning and random search which still drives much of the research in AI and machine learning. In this sense the model was successful because it was a mediator between theories in biology (e.g. homeostasis) and theories in engineering (e.g. random search).

The Tortoises, with their elaborated set of behaviors, demonstrate another aspect of how working models act as mediators within the synthetic method. Once a few major proofs by demonstration are achieved, it becomes necessary to explore and extend the working model. Walter did this both by extending the model to new psychological phenomena by observing these phenomena in the behavior of the Tortoises, and also by improving and extending the machinery of the working model. He built versions of the Tortoise which incorporated adaptive learning mechanisms, called CORA and IRMA (Walter, 1953). Using these mechanisms he "trained" his robots to respond in certain ways to a whistle by building an association between the whistle and another stimulus, much like Pavlov's dogs. Here the synthetic method works by drawing on the constraints of the real world: we could imagine all sorts of mechanisms that "might" produce such-and-such behavior, but here is one that actually does in this working model. In such cases the model is mediating in the more traditional sense, between theories and empirical data. But the empirical data are the behavior of the model itself, and the extension of the range and similarity of those behaviors to the target phenomena, animal behaviors, further reinforces the mediating bridge between theories, in this case psychological theories of behavior and interacting feedback control mechanisms.
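The logic of such conditioning can be sketched in a few lines: repeated pairing of a neutral stimulus (the whistle) with one that already triggers a response (a light) builds an association until the whistle works alone. The threshold, learning rate, and decay below are illustrative values of my own, not Walter's CORA circuit constants.

```python
class ConditionedReflex:
    """Toy analogue of a conditioned-reflex mechanism in the style of
    Walter's CORA: associative strength accumulates with paired
    presentations and decays with unpaired ones (extinction)."""

    def __init__(self, threshold=1.0, rate=0.25, decay=0.02):
        self.weight = 0.0          # associative strength, whistle -> response
        self.threshold = threshold
        self.rate = rate           # reinforcement per paired presentation
        self.decay = decay         # extinction when the whistle goes unpaired

    def present(self, whistle=False, light=False):
        """Present stimuli; return True if the robot responds."""
        drive = (1.5 if light else 0.0) + (self.weight if whistle else 0.0)
        responded = drive >= self.threshold
        if whistle and light:      # paired presentation: reinforce the link
            self.weight += self.rate
        elif whistle:              # whistle alone: the association weakens
            self.weight = max(0.0, self.weight - self.decay)
        return responded
```

Before training, the whistle alone produces no response; after a handful of paired presentations the accumulated weight crosses threshold and the whistle suffices, much as the Tortoise came to answer the whistle like Pavlov's dogs.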

Conclusions

What we have just reduced to absurdity is any prospect of reproducing all its elaboration of units in a working model. If the secret of the brain's elaborate performance lies there, in the number of its units, that would be indeed the only road, and that road would be closed. But since our inquiry is above all things a question of performance, it seemed reasonable to try an approach in which the first consideration would be the principles and character of the whole apparatus in operation. (Walter, 1953: 107)

As simulations, or stand-ins, working models were not completely new to science, but they were new to the brain sciences. The brain and mind sciences faced a challenge in the first half of the 20th century to become "empirical" through the use of the experimental method and to distance themselves from "metaphysical" forms of explanation (Gardner, 1987). Physics was at the time considered to be the science which other sciences should emulate methodologically, especially if they wanted to bolster their empirical legitimacy. There were two severe difficulties in doing this which were ultimately overcome through the synthetic method. The first was a requirement for the observability of the data-generating phenomena upon which theories were based. The second difficulty was that real functioning brains were very difficult to work on for technical, methodological and ethical reasons. The result was a behaviorist psychology that completely ignored the inner workings of the brain, and brain sciences that focused on the physiology of single or small numbers of cells, or the gross anatomy of mostly dead brains. The exception to this was psychiatry, which confronted sick and damaged brains on a regular basis, and managed to intervene in rather drastic ways upon them, in the hope of both curing the unhealthy brain and understanding the healthy brain.

As empirical experiments, working models offered a new basis for proceeding in areas of brain science that were otherwise difficult to address using the analytic "controlled experiments" method of physics, because individual variables could not be so easily isolated. Despite efforts to shore up the epistemic basis of the brain sciences, the resulting approaches did not quite manage to integrate the various levels of analysis in a compelling way. It was the cybernetic brain models which managed to do this. But why should we think that building working models should be more useful than experimenting on neurons directly, or with the conditioned behavioral patterns of living animals?

This might be read as the "paradigm shift" from behaviorism and traditional neurophysiology to a computational cognitive neuroscience. It might also be possible to view this history as instead seeking a rigorous mathematical and mechanical account, and as a consequence of this arriving at theories which applied equally to the organic and inorganic. But there is another aspect of the use of these models that is a central theme of this article. It is the notion that these models acted as bridges or mediators between existing theories at different levels of analysis. Cordeschi acknowledges that working models established a new intermediary level of analysis between behavior and physiology. However, he emphasizes the autonomy of this new level of analysis, rather than its constructive and mediating aspects:

The earliest behavioral models, designed as simple physical analogs, began to suggest that there might exist a new level for testing psychological and neurological hypotheses, one that might coexist alongside the investigations into overt behavior and the nervous system. This was the core idea of what Hull and Craik had already called the "synthetic method," the method of model building. This idea comes fully into focus with the advent of cybernetics, and especially when the pioneers of AI, who wanted to turn their discipline into a new science of the mind, found themselves coming to grips with the traditional sciences of the mind, psychology and neurology, and with their conflicting relationships. That event radically affected the customary taxonomies of the sciences of the mind. (Cordeschi, 2002: xviii)

The synthetic method was thus one way of establishing working models as mediators: mediators between theories espoused by very different disciplinary traditions, and mediators between very different sorts of phenomena and material performances.

The issue facing the brain sciences was how to proceed in devising and testing hypotheses about the biological basis of psychology. The first step in doing this was to give a plausible account that bridged low-level physiological mechanisms and high-level behavioral mechanisms. But there is a way of viewing the synthetic brains in which they might again appear puzzling. One can arrive at this puzzlement by noting that even while the brain sciences were seeking to construct a sound empirical basis for their work, they did this by positing a new level of analysis of the mind that was not itself directly observable. From this perspective, the synthetic brains are models of mental functions, not behavior or neurons, but rather functions that are not themselves directly observable. In fact, it is hard to conceive of just what entities these models were supposed to be modeling. The way out of this puzzle is to understand the empirical power of the mediation that they were performing between different levels of analysis. The two levels of analysis that these new brain models tried to span were the traditional sciences of behavior and neurons, and in doing this they were able to combine the empirical force of both fields to bolster the hypothesized bridge between them. Without an existing theory of the interaction of the principal levels of analysis in the brain sciences, it was the particular advantage of the working synthetic brain models that they instantiated certain theories about neurons and were able to display characteristic behaviors as a consequence. It was the models themselves which mediated between the levels, not theoretically, but through their working and exhibiting behavioral phenomena that could stand in similarity relations to animal behavior.

Even if we take a view of science as ultimately about "knowing that," rather than "knowing how," working models are a practical means to knowledge and may very well be dispensable once the fruits of knowledge are collected. Indeed, it can often be difficult to recognize and express just how models have exerted their agency from this perspective. Yet, the moment we stop to ask why certain techniques and technologies are in place, and how they arrived at the forms they take, we would be at a loss to explain these in the absence of the history of material culture. In principle, no technological solution in use is the only one possible, and only rarely is it an optimal solution. Sometimes the technology in use is the best available, where the criteria of choice and the alternatives to choose among have evolved over time and in interaction with one another. And so history is the best way to get at such questions, and there is a sense in which every technological artifact is an archive of its own history, though it may have been subjected to much cleansing and scrubbing to remove its intrinsic historical traces. At the very least, we better understand a technology by knowing its history, as well as its structure.

It is clear that synthesizing and experimenting on working models can achieve many kinds of mediation useful to scientific progress. Working models have received far less attention in the philosophy of science literature than theoretical models have. Yet it seems clear that the pragmatic virtues of models function in many areas of scientific practice. It remains for further study to see to what extent working models can be found in other areas of science, and what roles they play as mediators in those areas. Taken broadly enough, one can find working models all over science. Yet it seems that the working models in each science are more or less unique to the science in which they are found. Indeed, it is their specificity to the scientific practices in which they are embedded that makes them both unique and interesting as a means to studying those practices. In this sense it may be undesirable to seek out any "general theory" of working models. Rather it seems more opportune to view these models as an obvious point of entry into studying the material culture of science. By asking such questions as "Why are the models built? How are they built? What kinds of experiments are performed? Which models are kept and which are discarded? How are they copied and developed over time and propagated through space?" we might hope to get at the very essence of the material culture of scientific models.

References

Ashby, W. R.
1952 Design for a Brain. New York, NY: John Wiley and Sons.
1962 "Simulation of a Brain." Pp. 452-466 in Borko (ed.), Computer Applications in the Behavioral Sciences. New York, NY: Plenum Press.
1972 "Analysis of the System to be Modeled." Pp. 94-114 in Stogdill (ed.), The Process of Model-Building in the Behavioral Sciences. New York, NY: W. W. Norton.

Aspray, W.
1990 John von Neumann and the Origins of Modern Computing. Cambridge, MA: MIT Press.

Cartwright, N.
1983 How the Laws of Physics Lie. Oxford, UK: Clarendon Press.

Cordeschi, R.
2002 The Discovery of the Artificial: Behavior, Mind and Machines Before and Beyond Cybernetics. Dordrecht, The Netherlands: Kluwer.

de Latil, P.
1957 Thinking by Machine: A Study of Cybernetics. Y. M. Golla (trans. from French edition of 1956). Boston, MA: Houghton Mifflin Company.

Gardner, H.
1987 The Mind's New Science: A History of the Cognitive Revolution. New York, NY: Basic Books.

Giere, R.
1999 Science Without Laws. Chicago, IL: University of Chicago Press.

Hacking, I.
1983 Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge, UK: Cambridge University Press.
1992 "Do Thought Experiments Have a Life of Their Own?" Pp. 302-310 in A. Fine, M. Forbes and K. Okruhlik (eds.), PSA 1992, Vol. 2.

Hayward, R.
2001 "The Tortoise and the Love-Machine: Grey Walter and the Politics of Electroencephalography." Science in Context 14 (4): 615-641.

Hesse, M.
1966 Models and Analogies in Science. Notre Dame, IN: University of Notre Dame Press.

Kuhn, T.
1962 The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.

Minsky, M.
1961 "Steps Toward Artificial Intelligence." Proceedings of the Institute of Radio Engineers 49: 8-30.

Morgan, M. & Morrison, M. (eds.)
1999 Models as Mediators: Perspectives on Natural and Social Science. Cambridge, UK: Cambridge University Press.

Pickering, A.
1995 The Mangle of Practice: Time, Agency, and Science. Chicago, IL: The University of Chicago Press.
Forthcoming (Unpublished manuscript, February 2006.) Ontological Theatre: Cybernetics in Britain, 1940-2000. Chapter 4: Ross Ashby; Psychiatry, Synthetic Brains and Cybernetics. Pp. 1-80.

Schaffer, S.
1994 "Machine Philosophy: Demonstration Devices in Georgian Mechanics." Osiris, 2nd series, 9 (Instruments): 157-182.

Stafford, B.
1991 Body Criticism: Imaging the Unseen in Enlightenment Art and Medicine. Cambridge, MA: MIT Press.

Teller, P.
2001 "Twilight of the Perfect Model Model." Erkenntnis 55 (3): 393-415.

van Fraassen, B.
1980 The Scientific Image. Oxford, UK: Clarendon Press.

Walter, W. G.
1953 The Living Brain. Harmondsworth, Middlesex, UK: Penguin Books.

Winsberg, E.
2003 "Simulated Experiments: Methodology for a Virtual World." Philosophy of Science 70 (December 2003): 105-125.

Yaneva, A.
2005 "Scaling Up and Down: Extraction Trials in Architectural Design." Social Studies of Science 35 (6): 867-894.

Peter Asaro
Gallery of Research
Austrian Academy of Sciences
Vienna, Austria
[email protected]