Genic Representation: Reconciling Content and Causal Complexity


Michael Wheeler* and Andy Clark†

DRAFT: PLEASE DO NOT CITE WITHOUT AUTHORS' PERMISSION

1 Introduction: Hard Times For Inner Vehicles

These are hard times for the notion of internal representation. Increasingly, theorists are questioning the explanatory value of the appeal to internal representation in the search for a scientific understanding of the mind and of intelligent action.[1] What is in dispute is not, of course, the status of certain intelligent agents as representers of their worlds. It is surely right and proper to say, for example, that a certain person represents as sad the death of Mother Theresa. What is in dispute is not our status as representers, but the presence within us of identifiable and scientifically well-individuated vehicles of representational content. Recent work in neuroscience, robotics, philosophy, and developmental psychology suggests, by way of contrast, that a great deal of (what we intuitively regard as) intelligent action may be grounded not in the regimented activity of inner content-bearing vehicles, but in complex interactions involving neural, bodily, and environmental factors and forces.[2]

This is an intriguing claim, and one that has occupied both of the present authors elsewhere.[3] One quick response (which strikes us as correct, as far as it goes) is to draw attention to the special class of cases where such ongoing agent-environment interaction is necessarily absent: cases such as counterfactual rehearsal and so on.[4] Such cases are important and worthy of special study, but they do not constitute the bulk of daily, on-line intelligent behaviour. The primary expression of natural, biological intelligence, it seems fair to say, consists in the production of an ongoing sequence of actions that flexibly respond to a constant stream of incoming environmental stimuli: think of driving a car, preparing a meal, or engaging in a lively conversation.
In cases such as these, the allegedly problematic profile of complex, multi-factor interactivity clearly obtains. In this paper we investigate the claim that complex causal interactions cause trouble for the notion of inner representational vehicles. We review some of the cases supposed to put pressure on a representational-vehicle based understanding and conclude that the threat, even in these ongoing, interactive cases, is more apparent than real. The main contribution of the present paper, however, is to go beyond this negative thesis[5] and to begin to develop a somewhat

* Department of Experimental Psychology, University of Oxford, UK. E-mail: [email protected]
† Philosophy/Neuroscience/Psychology Research Program, Washington University, St. Louis, USA. E-mail: [email protected]

[1] See, e.g., Thelen and Smith (1993), van Gelder (1995), Webb (1994), Cliff and Noble (forthcoming).
[2] E.g., in neuroscience see Globus (1995). For related claims in robotics, philosophy and developmental psychology, see note 1 above.
[3] E.g., Clark (1997, forthcoming-a, forthcoming-b), Clark and Toribio (1994), Wheeler (1994).
[4] For a full treatment of this class of cases, see Clark and Grush (submitted).
[5] The negative point is also pursued in Clark (forthcoming-b).

different vision of the nature of internal representation itself. For reflection on the more densely interactive and distributed (across brain, body and world) class of cases suggests, we believe, a more sensitive and broadly applicable notion of the way inner states contribute to intelligent behaviour, and hence of the notion of an inner representational vehicle itself.

The paper begins (section 2) by sampling the space of highly interactive problem-solving routines. Section 3 discusses the use of such data to draw anti-representationalist conclusions. Section 4 pursues an extended analogy between the role of genes in the causal determination of biological form and structure, and the role of neural states in the causal determination of intelligent action. We use this analogy to develop a notion of genic representation, arguing that the crucial move, in both cases, is to appeal to normal ecological context so as to balance multi-factoral, interactive causal determination against the intuition that certain aspects of the causal nexus play a special role in promoting adaptive success. Section 5 then raises a variety of worries about this vision and uses them to help get a better grip on the notion of genic representation itself. We end with a puzzle concerning the relation between cognition and representation.

2 Spreading the Cause

We shall use the term causal spread to describe any situation in which an outcome is generated by a network of multiple, co-contributing causal factors that extends across readily identifiable systemic boundaries. One can begin to see why cognitive scientists should care about this phenomenon, by noticing that the combination of massively distributed activity, dense interconnectedness, and multi-directional connectivity present in biological brains has led some theorists to claim that many different areas of an animal's brain will typically be crucial and co-contributing causal factors in the production of any specific behaviour.
At first sight, recognizing the existence of such intra-neural causal spread might not seem all that new or radical. After all, ever since the birth and rebirth of connectionism (in the 1940s and the 1980s respectively), cognitive scientists have been getting used to the idea that intelligent behaviour might be explained as the causal outcome of distributed activity patterns in neural networks. However, in theoretical positions which wholeheartedly embrace intra-neural causal spread, the tendency is often towards a rather extreme form of neural holism, one that is not typical of connectionism. Consider, for example, the following remarks by Varela, Thompson, and Rosch:

    [The brain is] a highly cooperative system: the dense interconnections among its components entail that everything going on will be a function of what all the components are doing ... [If] one artificially mobilizes the reticular system, an organism will change behaviorally from, say, being awake to being asleep. This change does not indicate, however, that the reticular system is the controller of wakefulness. That system is, rather, a form of architecture in the brain that permits certain internal coherences to arise. But when these coherences arise, they are not simply due to any particular system. (Varela et al., 1991, p.94)[6]

We shall not dwell further on this style of complexity-induced intra-neural causal spread. For causal spread does not always require impressive levels of neural complexity. Neither does it recognize the peripheries of the nervous system or the skin as crucial boundaries.
Thus consider the following experiment in evolutionary robotics, as performed by Harvey, Husbands, and Cliff (1994).[7] A visually guided robot was placed in a rectangular, walled, dark arena, that had a white triangle and a white rectangle mounted on one wall. The simple discrimination task confronting the robot was to approach the triangle but not the rectangle. It was decided in advance, by the experimenters, that the robot's control system should consist of an artificial neural network connected to some extremely rudimentary visual receptors. Within these broad constraints, an evolutionary algorithm was allowed to determine the specific architecture of the neural network, the way in which the network was coupled to the visual receptors, and the field-sizes and spatial positions (within predetermined ranges) of those visual receptors.

The successful strategy discovered by artificial evolution was as follows: Two visual receptors were positioned geometrically such that visual fixation on the oblique edge of the triangle would typically result in a pair of visual signals (i.e., receptor 1 = low, receptor 2 = high) which was different from such pairs produced anywhere else in the arena (or almost anywhere else in the arena; see below). The robot would move in a straight line if the pair of visual signals was appropriate for fixation on the triangle, and in a rotational movement otherwise. Thus if the robot was fixated on the triangle, it would tend to move in a straight line towards it. Otherwise it would simply rotate until it did locate the triangle. Occasionally the robot would fixate ('by mistake') on one edge of the rectangle, simply because, from certain angles, that edge would result in a qualitatively similar pair of visual signals being generated as would have been generated by the sloping edge of the triangle. Perturbed into straight line movement, the robot would begin to approach the rectangle. However, the looming rectangle would, unlike a looming triangle, produce a change in the relative values of the visual inputs (receptor 1 would be forced into a high state of activation), and the robot would be perturbed into a rotational movement. During this rotation, the robot would almost inevitably refixate on the correct target, the triangle.

This admittedly humble robot serves as a first demonstration of causal spread. To see why, contrast it with a different (hypothetical) triangle-seeking robot, one which uses visual inputs to build an internal model of the environment, and which then uses that model to plan an appropriate path to the target, encoded as a set of movement-instructions. Assuming everything goes to plan (the model is accurate, the environment doesn't change too much, etc.), the robot's motors then follow the stored plan, without additional sensory feedback.[8] In this case, one would be inclined to say that the agent's body and its environment are playing, at best, minimal causal roles in the production of the required behaviour. For sure, the environment is a source of information at the beginning of the planning phase, and that information becomes input for the planning mechanism via the agent's visual sensors. However, after that initial input-stage, and during the successful completion of the behaviour, neither the environment nor the visual sensors play any further part in the proceedings. And although the agent's motor mechanisms are crucial in enabling it to move, they merely follow the movement-instructions that hail from the planning mechanism; they do not play any active part in generating the behavioural solution.

By contrast, the cunning solution found by artificial evolution demands significant causal contributions from (i) the agent's nervous system, (ii) the agent's visual morphology, (iii) the agent's bodily movements and (iv) the local environment with which the agent regularly interacts. Thus whilst standard cognitive-scientific theorizing typically picks out neural processing events as the agent-side causal factors that matter, one agent-side factor that deserves to be singled out in the evolved solution here is the spatial layout of part of the agent's physical body. This is not to deny neural events their place in the complete agent-side causal story (even though the evolved neural network was, in actual fact, quite simple). Rather it is to acknowledge the point that it is the geometric organization of the robot's visual morphology that is primary in enabling the robot to become, and then to remain, locked onto the correct figure. The crucial part played by the environment becomes clear once one realizes that it is the specific ecological niche inhabited by the robot that enables the selected-for strategy to produce reliable triangle-rectangle discrimination. If, for example, non-triangle-related sloping edges were common in the robot's environment, then although the evolved strategy would presumably enable the robot to avoid rectangles and to move towards triangles, it would no longer enable it to move reliably towards only triangles. So successful triangle-rectangle discrimination depends not just on the work done by these agent-side mechanisms, but also on the regular causal commerce between those mechanisms and specific structures in the environment which can be depended upon to be reliably present.

Put all this together, and one arrives at the following picture: What the observer identifies (correctly) as triangle-rectangle-discrimination is an emergent structure in the pattern of ongoing brain-body-environment interactions. And the causal mechanism responsible for producing this emergent behaviour is a process that straddles brain-body-environment boundaries. This is the distinctive signature of causal spread.

Our second example again features a robot, this time as a means for investigating a specific instance of biological intelligence.

[6] For other accounts with a similar flavour, see, e.g., Beer (1995), Harvey (1992), Kelso (1995), Thelen and Smith (1993), and Wheeler (1994).
[7] Practitioners of evolutionary robotics use algorithms inspired by Darwinian evolution to design control systems for autonomous robots. Loosely speaking, the methodology is to set up a way of encoding robot control systems as genotypes, and then, starting with a randomly generated population of controllers, and some evaluation task, to implement a selection cycle such that more successful controllers have a proportionally higher opportunity to contribute genetic material to subsequent generations, i.e., to be 'parents.' Genetic operators analogous to recombination and mutation in natural reproduction are applied to the parental genotypes to produce 'children,' and (typically) a number of existing members of the population are discarded so that the population size remains constant. Each robot in the resulting new population is then automatically evaluated, and the process starts all over again. Over successive generations, better performing controllers are discovered.
[8] This is a rather extreme example of a robot that works according to the principles of what Brooks (1991) has dubbed the 'sense-model-plan-act' framework. Brooks takes this framework to be characteristic of the approach to robotics adopted in classical artificial intelligence.
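For concreteness, the triangle-seeker's evolved strategy can be caricatured as a two-branch control rule. This is our own minimal sketch: the receptor values and the 0.5 threshold are hypothetical stand-ins for the evolved network's actual response profile, which was not hand-coded at all.

```python
def control_step(receptor1: float, receptor2: float, high: float = 0.5) -> str:
    """One control decision of the evolved triangle-seeker described above.

    Fixation on the triangle's oblique edge yields the signature signal pair
    (receptor 1 low, receptor 2 high); any other pair triggers rotation.
    The 0.5 threshold is a hypothetical illustrative value.
    """
    fixated_on_triangle = receptor1 < high and receptor2 >= high
    return "advance" if fixated_on_triangle else "rotate"
```

Note that nothing in the rule models the arena: the sketch succeeds as a discriminator only because, in the robot's ecological niche, the (low, high) signature is reliably diagnostic of the triangle's sloping edge.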
To attract mates, male crickets produce an auditory advertisement, the frequency and temporal pattern of which are species-specific. Conspecific females track this signal. So what sort of mechanisms in the female might underlie her tracking capacity? By combining existing neuroethological studies with robotics-inspired ideas, and by implementing a candidate mechanism in an actual robot, Webb (1994, 1996) has developed and defended an elegant story that begins with the anatomy of the female cricket's peripheral auditory mechanism.

The female cricket has two ears (one on each of her two front legs). A tracheal tube of fixed length connects these ears to each other, and to other openings (called 'spiracles') on her body. So sound arrives at the ear-drums both externally (i.e., directly from the sound source) and internally (i.e., via the trachea). According to Webb, what happens when the male's auditory signal is picked up is as follows: On the side nearer the sound source, the externally arriving sound travels less distance than the internally arriving sound, whereas, on the side further away from the sound source, the two sounds travel the same distance. This means that the external and internal arrivals of sound are out of phase at the ear-drum on the side nearer to the sound source, but in phase at the ear-drum on the side further away from the sound source. The practical consequence of this difference is that the amplitude of ear-drum vibration is higher on the side nearer the sound source. In short, when the male's signal is picked up, there is a direction-dependent intensity difference at the female's ears, with the side closer to the sound source having the stronger response.

So far so good. All the female has to do, in order to find the male, is to keep moving in the direction indicated by the ear-drum with the higher amplitude.
This is achieved via the activity of two dedicated interneurons in the female's nervous system, one of which is connected to the female's right ear, the other of which is connected to her left. Each of these neurons fires when its activation level reaches a certain threshold, but because the neuron which is connected to the ear-drum with the higher vibration amplitude will be receiving the stronger input, that neuron will reach threshold more quickly, and hence fire first. So the idea is that, at each burst

of sound, the female moves towards the side of the neuron that fires first. In effect, therefore, her tracking mechanism responds only to the beginning of a sound burst. It is here that the specific temporal pattern of the male's signal becomes important. On the one hand, the gaps between the syllables of the male's song must not be too long, otherwise the information about his location will arrive too infrequently for the female's mechanisms to track him reliably. On the other hand, the gaps between the syllables of the male's song must not be too short. Each of the two dedicated interneurons in the female's nervous system has a decay time during which, after firing, it gradually returns to its non-activated rest-state. During this recovery period, a neuron is nearer to its firing threshold than if it were at rest. In consequence, if a neuron receives input during the decay time, it will fire again more quickly than if it receives that input while at rest. So if the gaps between the syllables of the male's song were shorter than the total decay time of the neurons, or if the song were continuous, then it would often be unclear which of the two neurons had fired first. Thus the temporal pattern of the male's song is constrained by the activation profile of the dedicated interneurons in the female.

Webb's explanation bears all the hallmarks of causal spread. Adaptive success is secured by a system of organized interactions, extended in space and time, involving significant causal contributions from the female cricket's nervous system (the dedicated interneurons), her body (the fixed-length trachea), and her environment (the specific structure of the male's signal).[9]

The last of our preliminary examples of causal spread concerns infant walking. Consider the following data: Hold a newborn human infant upright and she will produce coordinated stepping movements.
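Before turning to that example, the firing-race between Webb's two dedicated interneurons, described above, can be caricatured in a few lines. This is an illustrative sketch only: the amplitudes, threshold, and time step are our assumptions rather than parameters from Webb's model, and the decay dynamics that constrain syllable timing are omitted.

```python
def winning_side(left_amp: float, right_amp: float,
                 threshold: float = 1.0, dt: float = 0.01) -> str:
    """Each interneuron integrates its ear-drum amplitude (assumed positive)
    until one reaches threshold; the side that fires first sets the turn
    direction at each burst of the male's song."""
    left = right = 0.0
    while left < threshold and right < threshold:
        left += left_amp * dt
        right += right_amp * dt
    return "left" if left >= threshold else "right"
```

The point the sketch preserves is that no explicit comparison of amplitudes is ever computed: direction falls out of which neuron happens to cross threshold first.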
However, at about 2 months of age, these movements seemingly disappear from the child's repertoire of actions, only to reappear again at about 8 to 10 months, when she acquires the capacity to support her own weight on her feet. At about 12 months of age, the infant takes her first independent steps. How might this reliable sequence of transitions be explained? A certain tradition in developmental theorizing would understand it as, broadly speaking, the relentless playing out of a set of developmental instructions encoded in the infant's genes. On this view, behavioural change follows from the gradual maturing and coming 'on-line' of a genetically-specified central pattern generator which is housed in the central nervous system, and which determines patterns of walking by issuing directions to the appropriate muscles. This 'instructionist' style of explanation is roundly rejected by Thelen and Smith (1993), who suggest that locomotion and locomotive development are examples not of central control (neural or genetic), but of self-organization in an extended brain-body-environment system, that is, in a system exhibiting large-scale causal spread.

The concept of self-organization will play an important role in what follows, so we need to cash it out. Self-organization occurs in energetically open complex systems where large numbers of components interact with each other in nonlinear ways to produce the autonomous emergence and maintenance of structured order.[10] Popular illustrative examples include the Belousov-Zhabotinski reaction, lasers and slime-molds (see, e.g., Goodwin, 1994; Kelso, 1995; Thelen & Smith, 1993).
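Self-organization in this sense admits of a minimal computational illustration (our own toy example, not one drawn from the sources just cited): a population of coupled oscillators with random initial phases settles into collective synchrony through purely local nonlinear interactions, with no external organizing agent. All parameter values below are arbitrary illustrative choices.

```python
import math
import random

def synchronize(n: int = 20, coupling: float = 2.0,
                steps: int = 2000, dt: float = 0.05, seed: int = 1) -> float:
    """Toy self-organization: n identical oscillators, each nudged by the
    others through a nonlinear (sine) coupling to the population's mean
    field. Returns the order parameter r (0 = disorder, 1 = full synchrony).
    """
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, math.tau) for _ in range(n)]
    for _ in range(steps):
        mean_x = sum(math.cos(p) for p in phases) / n
        mean_y = sum(math.sin(p) for p in phases) / n
        # each oscillator is pulled toward the mean phase; no oscillator,
        # and nothing outside the population, directs the resulting order
        phases = [p + dt * coupling * (mean_y * math.cos(p) - mean_x * math.sin(p))
                  for p in phases]
    return math.hypot(sum(math.cos(p) for p in phases) / n,
                      sum(math.sin(p) for p in phases) / n)
```

Run from a disordered start, the order parameter climbs towards 1: structured order emerges from, and is maintained by, nothing but the component interactions themselves.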
[9] Notice that if one had started out from a rather unreconstructed, classical information processing perspective, then one might have been driven to think that sounds in general would have to be processed, and that at least two separate processing mechanisms would be required: one to recognize the male's song, and one to determine the direction in which to move. On Webb's story, however, sounds in general are not processed, and one mechanism suffices. 'Recognition' of the correct signal is implicit, because the female cricket is simply unable to move reliably towards signals which are of the wrong frequency and/or temporal pattern.
[10] The term 'autonomous' is commonly used in connection with self-organization, to capture the fact that the order observed in a self-organizing system is a product of the intrinsic nature of that system, rather than of the activity of some external organizing agent.

To appreciate just one way in which, on Thelen and Smith's account, self-organization makes a crucial contribution to an understanding of infant walking, recall the developmental stage in which infants who are being held upright do not produce coordinated

stepping movements. Thelen and Smith found that, during this stage, 7 month old infants will immediately produce coordinated stepping movements, if they are held upright on a slow-moving motorized treadmill. Moreover, the infants are able to compensate for increases in the overall treadmill speed; and they are able to make asymmetrical leg adjustments to preserve coordinated stepping, if the two belts of the treadmill are set to move at different speeds. Thelen and Smith's hypothesis (op cit. pp. 96-7) is that the mechanical action of the treadmill brings about a shift between the non-stepping and stepping modes of behaviour. The treadmill does this simply by replacing certain leg-dynamics that occur naturally in mature locomotion. Just after the infant's legs 'touch down' onto the moving treadmill, her centre of gravity is shifted over the stance leg and the trailing leg is stretched backward. In effect, the trailing leg behaves like a spring, the stretch imparting energy into the leg which then swings forward with little or no extra muscle activity. The initiation of the swing appears to be triggered by the proprioceptive, biomechanical information available to the central nervous system at the point of maximum stretch, while the achievement of maximum stretch is itself jointly determined by the action of the treadmill and the infant's capacity to make flat-foot contact with the belt, this capacity depending in turn upon the flexor tone of the infant's leg muscles. So, given the context provided by the treadmill, the basic patterns of mature walking dynamically self-organize in real-time, during a phase of development in which such patterns were thought to be lost to the infant.

From this sort of example, Thelen and Smith draw the following conclusions.

    Leg coordination patterns are entirely situation-dependent ... There is ... no essence of locomotion, either in the motor cortex or in the spinal cord.
    (pp.16-17, original emphasis)

    There is no logical or empirical way to assign priority in the assembly of ... [kicking] movements to either the organic components that "belong" to the infant or to the environment in which they are performed. Both determine the collective features of the movement which we call a kick. (p.83, original emphasis)

    [Locomotor] development can only be understood by recognizing the multidimensional nature of this behavior ... the organic components and the context are equally causal and privileged. (p.17)

These conclusions make fully explicit the now-familiar claim of causal equality, namely that the causal contributions made to on-line adaptive success by environmental and (non-neural) bodily happenings are on a par with those made by neural happenings. But there is also a more radical claim appearing here (one which, in truth, has been bubbling just under the surface of all our examples), namely that, in cases of on-line intelligence, no aspect of the extended brain-body-environment causal system should be explanatorily privileged in the cognitive-scientific understanding of the observed behaviour. This sharing-out of the explanatory weight (call it the phenomenon of explanatory spread) is manifestly at odds with the cognitive-scientific tradition, which is to seek explanations of intelligent behaviour that appeal fundamentally, and usually exclusively, to strictly agent-internal neural phenomena.

Once the distinction between causal spread and explanatory spread has been made, a question arises about the relationship between the two. Reading between the lines, it appears that the message from Thelen and Smith is that causal spread and explanatory spread come as a package: thus we learn that "the organic components and the context are equally causal and privileged".
If this co-occurrence is supposed to indicate a necessary link between the two forms of spread, then perhaps the most obvious and plausible way in which that link might be made intelligible is if explanatory spread were entailed by causal spread. Under such circumstances,

any non-neural features of the agent's body (e.g., sensor morphologies, tracheal tubes, muscle dynamics), or any features of the agent's environment (e.g., reliable structures, treadmills), that figured as full-blooded factors in the underlying causal story, would thereby assume an explanatory status equal to that of any neural features that so figured. Whether this is the right way to think about things is an issue to which we shall return.

3 Explanation Without Representation?

In addition to laying stress on the causal and explanatory contributions made by the agent's non-neural body and her environment, theorists who embrace causal spread often display a marked hostility to the representation-centred explanatory vocabulary that is at the heart of most cognitive science. For example, Varela et al. (1991) claim that their approach to cognition "questions the centrality of the notion that cognition is fundamentally representation" (p.9). And in discussing her robot implementation of the theory of cricket phonotaxis described earlier, Webb declares her mistrust of representational interpretations.

    It could be argued that the robot does contain 'representations', in the sense of variables that correspond to the strength of sensory inputs or signals for motor outputs. But does conceiving of these variables as 'representations of the external world' and thus the mechanism as 'manipulation of symbols' actually provide an explanatory function? It is not necessary to use this symbolic interpretation to explain how the system functions: the variables serve a mechanical function in connecting sensors to motors, a role epistemologically comparable to the function of the gears connecting the motors to the wheels. (Webb, 1994, p.53)

Thelen and Smith are even more emphatic:

    How do minds change? Where does new knowledge, new understanding, new behavior come from? How does the organism continually adapt and create new solutions to new problems? The answer we present ...
    makes no use of representation or representation-like processes. (Thelen & Smith, 1993, p.42)

For sure, these theorists (and others who pursue a similar line) inherit some of their scepticism about representational styles of explanation from aspects of their overall theoretical outlook other than their embrace of causal spread (aspects that can often be traced to the influence of Continental phenomenology or Gibsonian psychology). But there is little doubt that causal spread is a key factor, and that it is often taken to be implied by, or supportive of, those other aspects (Wheeler, 1996). So it is certainly appropriate to ask to what extent a distrust of representationalism might reasonably be fuelled directly by a recognition of causal spread. In other words, does the presence of causal spread alone threaten to undermine explanatory strategies that appeal to internal representations?

There is at least a prima facie case. If behaviour is properly seen as coming about through an extended causal process, in which interactions cross the boundaries between brain, body, and environment, then not only is the causal system that the cognitive scientist needs to explain not confined within the nervous system, but it might well seem that there is little or no warrant to think of any neural states that happen to be involved as fully specifying behaviour (that is, as encoding the instructions for what to do). The argument here would be something like the following: a neural state that has been identified as causally contributing to some behavioural outcome cannot count as fully specifying that outcome, if various additional causal contributions made by other neural events, non-neural bodily processes, and/or environmental factors play

substantial roles in determining the form of the behaviour. An especially potent application of this style of reasoning becomes possible where the form of the behaviour stems, to some large extent, not from instruction-sets issued by a centrally located controller, but from processes of generic self-organization in the brain, body, and/or environment (as in Thelen and Smith's analysis of infant walking). The general position is nicely stated by Keijzer:

    The neural system does not incorporate a complete set of behavioral instructions ... order does not flow from the neural system outward. Instead the neural system uses the order which is already present in the musculo-skeletal system and the environment. Behavior is subsequently the result of the interactions between the pre-existing order in these systems. It consists of the mutual modulation of neural, bodily and environmental dynamics. (Keijzer, manuscript, p.3)

What this shows, we think, is that there is some force to the claim that certain forms of large-scale causal spread threaten to undermine representational forms of explanation in cognitive science.[11]

In what remains of this paper, we hope to stem the tide of blanket anti-representationalism, without losing sight of the important lessons that have emerged from the foregoing discussion. Such a project recommends itself, in part because there is growing evidence that representation-based modes of understanding are useful tools, even where we confront ongoing, interactive behaviours involving large degrees of causal spread. Here we shall consider the details of just one illustrative example.[12]

[11] Having arrived at this claim concerning causal spread, we should pause to note the existence of a different but related phenomenon, namely continuous reciprocal causation. This is causation that involves multiple simultaneous interactions and complex dynamic feedback loops, such that (a) the causal contribution of each component in the system of interest is determined by, and helps to determine, the causal contributions of large numbers of other components, and, moreover, (b) those contributions may change radically over time (Clark, 1997, forthcoming-b). In certain extreme cases of continuous reciprocal causation, the sheer number and complexity of the causal interactions occurring within the agent's nervous system and/or between the agent and its environment can, it seems, make it difficult to apply standard representational explanatory strategies. So although causal spread and continuous reciprocal causation are clearly separable phenomena (as indicated by the fact that one could have a system in which causal spread is rife, but in which the multiple causal factors concerned play distinct roles and combine linearly to yield action), they can apparently have similar implications. Indeed, both can be used independently to provide an argument for the adoption of a thoroughgoing dynamical systems approach to cognitive science, as developed and defended by, among others, Beer (1995), Husbands, Harvey, and Cliff (1995), Kelso (1995), Smithers (1995), Thelen and Smith (1993), van Gelder (1995), and Wheeler (1994). (The canonical introduction to the dynamical systems approach is the collection edited by Port and van Gelder (1995). For critical discussion of the field, see Clark (1997, forthcoming-a) and Clark and Toribio (1994).) Dynamical systems theory is, in many ways, a natural mathematical language in which to formulate explanations of causal systems that range across brain, body, and environment. One simply treats brain, body, and environment as coupled dynamical systems, as sources of parameters and state variables that influence each other in time. And the subtle interactions that are characteristic of continuous reciprocal causation are similarly conducive to a dynamical systems treatment in terms of coupling; see the discussion of the Watt Governor by van Gelder (1995). Although, in this paper, we do not explicitly pursue issues surrounding the dynamical systems approach, it will not escape notice that we make use of certain dynamical systems concepts in developing our position.
[12] For others, see, e.g., Agre (1988) and Mataric (1991).

In work by Franceschini, Pichon, and Blanes (1992), detailed neurobiological knowledge of the visual system of the housefly was used to guide the design of a mobile robot that is capable of navigating its way to a goal (a light source), in real time, whilst avoiding obstacles. This feat is achieved via a series of simple movements, each made according to a specific direction-heading generated in the following way. The robot is endowed with a compound eye featuring a layer of photosensors connected to a parallel array of analogue elementary motion detectors (EMDs). This EMD-array drives an obstacle avoidance layer (a parallel-processing, analogue network) that integrates the motion signals from the EMDs to help control the steering of the

robot. Since the only information available to the obstacle avoidance layer concerns motion, asregistered by the EMD-array, in an environment of stationary obstacles, the robot is, for thepurposes of obstacle avoidance, blind at rest. However, the EMD-array is able to use relativemotion information, generated by the robot's own bodily movements during the preceding step,to build a temporary `snap map' of detected obstacles. This map, which is made available tothe obstacle-avoidance layer, is constructed in a polar coordinate system de�ned in relation tothe compound eye. (Since the eye is mounted in a �xed position, it rotates with the wheels,so a frame of reference centred on the eye is also one centred on the direction of movement.)Then, in a local and short-lived motor-map, information concerning the angular bearings ofthe obstacles (as supplied by the snap map) is fused with information concerning the angularbearing of the goal (as supplied by a supplementary visual system), and the direction-heading ofthe next movement is generated. This heading is as close as possible to the one that would takethe robot straight towards the goal, adjusted so that the robot avoids all detected obstacles.It seems that the adaptive success of Franceschini et al.'s robot is a paradigmatic case ofon-line intelligent behaviour being generated by a distributed causal process featuring large-scalecausal spread. The routes that the robot traverses are never fully speci�ed at any global levelby centrally located `neural' processes. Rather, they emerge from ongoing dynamic feedbackbetween brain, body and environment.13 This creates a certain tension: on the strength of thecase presented earlier, one might expect that any attempt here to make an explanatory appealto internal representations will court problems; yet the explanation of the agent's behaviouras governed by a series of maps seems to speak in favour of a recognizably representationalunderstanding of that behaviour. 
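The fusion step just described (snap-map obstacle bearings plus goal bearing in, direction-heading out) can be made concrete in a few lines. The following is our own illustrative reconstruction in Python; the 5-degree candidate grid, the 15-degree clearance cone, and the egocentric bearing convention are simplifying assumptions of ours, not details of Franceschini et al.'s analogue network:

```python
def next_heading(obstacle_bearings, goal_bearing, clearance=15.0):
    """Pick the candidate heading closest to the goal bearing that stays
    more than `clearance` degrees away from every detected obstacle.
    Bearings are egocentric, in degrees, with 0 = straight ahead
    (i.e., the direction of the preceding movement)."""
    candidates = list(range(-90, 95, 5))
    # Sort candidates by deviation from the goal bearing, so the first
    # feasible one found is the closest admissible heading.
    for h in sorted(candidates, key=lambda h: abs(h - goal_bearing)):
        if all(abs(h - ob) > clearance for ob in obstacle_bearings):
            return h
    return None  # boxed in: no admissible heading on this step

# With the goal dead ahead and one obstacle directly in the way,
# the chosen heading swerves just far enough to clear the obstacle.
print(next_heading(obstacle_bearings=[0.0], goal_bearing=0.0))  # prints -20
```

Note that, as in the robot, the computation is purely egocentric and is re-run from scratch after every movement, using only the obstacle bearings recovered from the most recent step's self-generated motion.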
At this point care is needed, not least because the internal structures in question are clearly not maps in a certain familiar sense commonly appealed to within artificial intelligence, in that they are not symbolic representations of the objective environment which are stored and recalled for the off-line advance planning of a route. Instead, they are short-lived, task-specific, egocentric (i.e., robot-centred) structures, which are built on-line, during each round of sensing and action. That said, the fact remains that nothing about this particular profile of features seems to undermine, in any obvious way, the basic possibility of some sort of representational understanding.[14]

Footnote 13: We should record the fact that Franceschini et al.'s robot receives no visual feedback during each movement. However, it would be wrong to conclude from this that its operating principles are equivalent to those of the hypothetical `classical' robot, discussed in section 2, which builds a model of the world, plans a sequence of actions, and finally executes that sequence without further recourse to sensory feedback. The time frame that matters here is the overall period between the beginning and end of the task. During this period, the `classical' robot does not depend on regular sensory feedback, merely on the information extracted from the initial sensory input. Franceschini et al.'s robot, on the other hand, depends entirely on such feedback to dynamically organize its route. In the language recommended by Brooks (1991), Franceschini et al.'s robot is situated in its environment, whereas the other robot is not.

Footnote 14: Cf. the discussion of representation, egocentricity and task-specificity by Clark (forthcoming-a).

In this tension lies the hope of a rapprochement between causal spread and representational explanation. Time, then, to reconsider our options, and to try to determine the true implications of causal spread. One way to do this, we think, is to turn to a somewhat different arena, one in which we see essentially the same debate, but in which the conceptual and empirical terrain has been better explored.

4 Genes, Codes and Programs

Cognitive scientists are not alone in making the acquaintance of causal spread. Some biologists too have been grappling with the phenomenon. Of course, the term `causal spread' itself is not in currency in biology, but there is no doubt that the same beast is at large. Moreover, just as cognitive scientists divide, broadly speaking, into two camps (those who think that causal spread is either a negligible effect or, at worst, a complicating factor, and those who think that it, along with other factors, requires us to rethink the conceptual foundations of an entire field of study), so evolutionary biologists too divide into essentially similar groupings.[15] In this section we shall focus on the version of the debate that has been engaging biologists, to see how the issues are panning out there.

The received orthodoxy in contemporary biology is that genes, or complexes of genes, should be understood as encoding for phenotypic traits (such as bodily forms and neural architectures). Thus an organism's genotype is standardly characterized as a set of instructions for, or as a program for, or as storing the information for, the building of that organism's phenotype. In short, the standard view in biology is that genes represent phenotypic traits, and that genotypes represent phenotypes. We have seen that representational explanatory strategies in cognitive science might be put under pressure by the presence of causal spread in the systems that underlie on-line intelligent behaviour. One might wonder, therefore, whether there is any way in which causal spread might threaten the representational understanding of genes. In order to see how this might be a real issue, we need to get clear about how, in the context of this discussion, to map the phenomena studied by cognitive scientists onto the phenomena studied by developmental biologists. In relation to genes and their phenotypic effects, it seems that the correlates of neural states are genes and gene-complexes, and that the correlates of behavioural outcomes are phenotypic traits. Therefore the system in which the presence of causal spread might threaten the idea that genes encode for traits would be the system that underlies the ontogenetic development of phenotype from genotype.
First, then, we need to identify causal spread in the developmental system.

Analogous to intra-neural causal spread is what one might call genomic causal spread. Sometimes, alleles (the variant forms of genes) at one locus have an effect on the expression of genes at another locus. This is known as epistasis and (along with pleiotropy, the fact that individual genes often have multiple phenotypic effects) it shows that any idea of there being, in general, a one-to-one causal correlation between genes and traits is hopelessly wrong. The actual picture is, as Varela et al. (1991, p.188) put it, one of genic interdependence, in which the genome is regarded "not [as] a linear array of independent genes (manifesting as traits) but [as] a highly interwoven network of multiple reciprocal effects". Having recorded the existence of genomic causal spread, we shall (as we did in the intra-neural case) press on, and track the progressive expansion of causal spread in the system of interest.

Our first stop is a compelling explanation of biological development due to Brian Goodwin (1994), in which the starring role is played by the idea of self-organization in morphogenetic fields. In physics, a field is a dynamical system that is extended in space, and which can therefore be described in terms of kinetics and spatial relations. Goodwin's morphogenetic fields are a sub-species of fields in general, in that they are fields which involve combinations of physical and chemical properties in ways that are distinctive of living systems. For example, in his account of the developmental system that generates the phenotypic form of Acetabularia (a type of alga), Goodwin gives key roles to the ways in which cells control the concentration of calcium in the cytoplasm (the plasmic content of the cell other than the nucleus), and the ways in which calcium affects the mechanical properties of the cytoplasm.
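The notion of a morphogenetic field, a spatially extended dynamical system whose local interactions settle into ordered global states, can be made vivid with a toy model. The sketch below is our own contrivance (a bistable reaction-diffusion ring, far simpler than anything in Goodwin's Acetabularia account); it shows a field relaxing from a weakly biased start into a sharp, robust pattern that no individual cell prescribes:

```python
def relax(u, diffusion=0.05, dt=0.1, steps=5000):
    """Iterate a bistable reaction-diffusion field on a ring of cells:
    du/dt = u - u**3 + D * (discrete Laplacian). The reaction term
    pulls each cell toward one of two stable states (+1 or -1), while
    diffusion couples each cell to its neighbours."""
    n = len(u)
    for _ in range(steps):
        u = [u[i] + dt * (u[i] - u[i] ** 3
             + diffusion * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]))
             for i in range(n)]
    return u

# A weak initial bias self-organizes into a crisp two-domain pattern:
# the four weakly positive cells settle near +1, the four weakly
# negative cells near -1.
print([round(x, 2) for x in relax([0.1] * 4 + [-0.1] * 4)])
```

The ordered form here belongs to the dynamics of the whole coupled field, not to any instruction stored in a single cell, which is the general moral Goodwin draws for morphogenesis.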
Given the idea of morphogenetic fields, Goodwin proceeds to argue that phenotypic forms are the characteristic and robust modes of order that emerge through self-organization in such dynamical systems. In short, for Goodwin, biological development can be understood as the dynamic self-organization of generic forms in morphogenetic fields.

Footnote 15: It was heartening to hear the world-renowned evolutionary biologist, John Maynard Smith, make essentially this observation, in the discussion session that followed one of his recent lectures ("Evolution and Development: Information or Self-organisation?", The London School of Economics Annual Darwin Public Lecture, Thursday 8th May 1997).

Goodwin drives home his message via discussions of many phenotypic traits, including tetrapod limbs and the vertebrate eye. Here we shall concentrate on his account of phyllotaxis (leaf arrangements) in higher plants (op. cit., pp.105-119). Despite the enormous variety of phyllotactic arrangements that we see in nature, there are really only three generic forms. The most common of these is spiral phyllotaxis, in which successive leaves on the stem appear at a fixed angle of rotation relative to each other. Amazingly, the angles of rotation found in natural instances of spiral phyllotaxis tend to take one of only a few values, of which the most common is 137.5 degrees. How might one explain these facts? Goodwin offers us the following account: As leaf-tissue grows, it places pressure on an elastic surface layer of epidermal cells. This pressure causes the epidermal cells to synthesize cellulose microfibrils to resist the force. Where the next leaf will grow is determined by the fact that, as a result of exactly where the stress has been placed, and exactly how the cellulose defences are laid down, the resistance to growth will be stronger in some areas of the epidermal layer than in others. The idea, then, is that the pattern of leaves in the phenotype results from a sequence of mechanical interactions between (i) the growing leaves under the epidermal surface, and (ii) the barricades of defending cellulose microfibrils. To provide support for such a view, Goodwin cites modelling studies (due to Green) which show that the phyllotactic arrangements observed in nature are stable patterns produced by such a system. This done, the challenge is to explain why these arrangements are the only stable arrangements generated by the proposed system.
Here Goodwin appeals to a second model (due to Douady and Couder) which demonstrates that if (a) the rate of leaf formation is above a critical value, and (b) the system starts with the most commonly found initial pattern of leaf primordia in the growing tip, then the developing plant will tend overwhelmingly to settle on spiral phyllotaxis with an angle of rotation of 137.5 degrees! In other words, given certain parameter-values and initial conditions, the most common arrangement found in nature is the dominant generic form produced by the self-organizing developmental dynamics. Moreover, with different values for certain key parameters in the model (e.g., the growth rates and the number of leaves generated at any one time), the other phyllotactic arrangements observed in nature reveal themselves as less likely generic forms of the system. The evidence is indeed persuasive: it seems that phyllotactic development is plausibly understood as a process of self-organization in a morphogenetic field in the growing tip of a plant.

In the present context, the most important point to take away from all this is that the sorts of chemical and mechanical processes singled out by Goodwin, as essential causal factors in development, provide further evidence of causal spread in the developmental system. The effects of calcium on the mechanical properties of the cytoplasm, mechanical stresses in elastic sheets of cells, spatial displacements of growing forms in the face of physical resistance: all of these processes are extra-genetic in nature. Once again the crucial causal nexus is creeping ever farther outwards, this time from the genotype into the surrounding cellular and extra-cellular bodily context.

The final extent of this leakage should come as no surprise. Consider the geneticist's favourite fly, the much-studied Drosophila. There are usually about 1000 light-receptor cells in this insect's compound eye.
Genetic mutations can reduce the number of receptors dramatically, but genetic events are not the only causal factor in the developmental equation. As Lewontin (1983, pp.69-70) reports, the final number of receptors also depends on the temperature at which the flies develop. If flies with the normal genotype develop at a temperature of 15 degrees centigrade, then they will end up with 1100 receptors; but they will have only 750 receptors if the developmental temperature is as high as 30 degrees centigrade.

To help characterize data such as these, Lewontin introduces the idea of a norm of reaction, a curve generated by taking a particular genotype, and plotting changes in a phenotypic trait of interest (here, the number of receptors) against an environmental variable (here, the developmental temperature). In effect, a norm of reaction shows how an organism of that genotype will develop in different environments. So what does one discover if one investigates development in a certain species, by systematically plotting norms of reaction for different genotypes? Drosophila with a mutation known as Ultrabar always end up with fewer visual receptors than those with the normal genotype. The same is true of Drosophila with a different mutation, Infrabar. However, the two mutant genotypes have opposite relations to temperature. The number of receptors possessed by Ultrabar flies decreases with developmental temperature, whilst the number possessed by Infrabar flies increases. In fact, the norms of reaction for Ultrabar and Infrabar are such that they cross over, meaning that one cannot state categorically that flies of one of these two genotypes have more receptors than those of the other. Which mutant ends up with more receptors depends upon the environmental temperature during development.[16]

Lewontin's analysis is evidence that environmental features are active and crucial causal factors in determining the phenotypic outcomes of developmental systems. Thus the full extent of causal spread in developmental systems is revealed. Phenotypic form is secured by a system of organized interactions, extended in space and time, involving significant causal contributions from genes, chemical and mechanical processes in the developing organism's body, and the external environment.

So how does the representational theory of genes fare, once causal spread in developmental systems is acknowledged? Here are some views:

We have often heard it said that genes contain the "information" that specifies a living being. This is wrong for two reasons.
First, because it confuses the phenomenon of heredity with the mechanism of replication of certain cell components (DNA), whose structure has great transgenerational stability. And second, because when we say that DNA contains what is necessary to specify a living being, we divest these components ... of their interrelation with the rest of the network. (Maturana & Varela, 1987, p.69, emphasis added)

[Selection] arises from cooperative-competitive interactions between specific (genetic) parametric constraints and intrinsic (generic) dynamics ... in such a view the gene in no way constitutes a program for development. Rather, it's a participant playing a parametric role in a self-organizing, synergetic process. (Kelso, 1995, p.183)

Genes do not encode macroscopic form. Rather, they help produce it, in conjunction with other cytoplasmic and cellular processes. [Their meaning is given only] within the context of the living organization in which they take part. (Keijzer, manuscript, p.15)

For some theorists, then, the fact that developmental systems exhibit causal spread seems to undermine completely the representational understanding of genes. But is there really a mandate for this particular elimination of representational talk? We favour a less iconoclastic response to developmental causal spread, one which starts by recognizing a crucial distinction between two possible types of representational claim that might be made in the vicinity of genes.

Footnote 16: For another intriguing example of the environmental determination of phenotypic form, consider the Mississippi alligator (see Goodwin, 1994, p.38). These creatures lay their eggs in a nest of rotting vegetation which produces heat in varying quantities. Eggs that develop at lower temperatures (within some overall range) end up producing females. Those that develop at higher temperatures end up producing males. Eggs in a clutch will pass through the critical developmental window at various different temperatures, meaning that a mixture of females and males will be born. This environmental method of regulating sex ratio (a ratio which, for reasons of population-survival, needs to stay somewhere near 50:50 in the population) might seem a little hit and miss, but it works well enough.

It seems rather natural, in many ways, that a representational theory of genes should inspire the following thought: if one could find out the complete sequence of an organism's DNA, then, in principle, one would be able to use that information alone to `compute' the adult organism, such that one would be able to predict, in every relevant detail, that adult's phenotypic form. Notice that this idea makes sense only if the ontogenetic development of the individual organism can reasonably be conceptualized as the unfolding of a comprehensive, predetermined plan that is stored in that organism's genes. Delisi, one of the participants in the high-profile Human Genome Project, is just one of many, many theorists who openly embrace such a view:

The collection of chromosomes in the fertilized egg constitutes the complete set of instructions for development, determining the timing and details of the formation of the heart, the central nervous system, the immune system, and every other organ and tissue required for life. (Delisi, 1988, p.488)

This approach to development (call it strong instructionism) is anathema to theorists such as Thelen and Smith, Goodwin, and Lewontin, that is, to theorists who embrace causal spread. And, indeed, it seems that the existence of extensive causal spread in the developmental system does make strong instructionism difficult to defend. If phenotypic forms are, in many ways, the products of generic self-organizing processes in the cellular and extra-cellular body of the developing organism, in interaction with the environment, then knowing the entire sequence of that organism's DNA will simply be insufficient to predict phenotypic form.
In addition, as the examples discussed earlier show, one will need to know the relevant principles of biological (self-)organization, plus the values of the key environmental variables.

Fortunately, strong instructionism is not the only position available to the representationalist about genes. Indeed, it would be a grave mistake to confuse the general and relatively weak claim, that genotypes are encodings, with the more specific and much stronger claim, that the genotype is a complete set of instructions for building the adult organism. Concrete evidence which is devastating for the latter claim might not be nearly so serious for the former, and one wouldn't want to throw out the weakly representational baby with the strongly instructionist bath-water. Of course, for the distinction between strongly instructionist and weakly representational theories to hold here, one needs to reach an understanding of what it means for genes to be weakly representational, one that can cope with a high degree of causal spread in the developmental system. In what remains of this section we shall take the first steps towards such an understanding.

Genes have to be understood against the wider background of, in Goodwinian terminology, morphogenetic fields, extended to include the organism's developmental environment. So exactly what is it that genes do, within the dynamic context of such fields, that makes a difference to the phenotypic structure of an organism? The answer, according to Goodwin, is that genes select or stabilize the developmental dynamics that, through self-organization, actualize one of the alternative generic forms that are possible for the organism in question.
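The idea that a parameter selects among alternative generic forms, while context co-determines which form is actually reached, can be given a minimal dynamical illustration. The sketch below is our own toy example, with no biological realism intended; the `gene as parameter' and `environment as initial condition' readings are our glosses, not Goodwin's model:

```python
def develop(a, x0, dt=0.01, steps=20000):
    """Settle the one-dimensional system dx/dt = a*x - x**3.
    The parameter `a` plays the role of the genes: for a <= 0 the only
    stable form is x = 0; for a > 0 two generic forms exist, at
    x = +sqrt(a) and x = -sqrt(a). The initial state x0 plays the role
    of environmental context, co-determining which form is actualized."""
    x = x0
    for _ in range(steps):
        x += dt * (a * x - x ** 3)
    return x

print(round(develop(a=4.0, x0=0.1), 3))   # settles near +2.0
print(round(develop(a=4.0, x0=-0.1), 3))  # same parameter, other generic form: near -2.0
print(round(develop(a=-1.0, x0=0.1), 3))  # different parameter: the only form, 0.0
```

On this toy reading, treating `a` as encoding the outcome is legitimate only relative to a normal backdrop: hold the dynamics and the usual class of initial conditions fixed, and the parameter reliably yields its characteristic form.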
As Goodwin himself puts it, "During reproduction, each species produces gametes with genes defining parameters that specify what morphogenetic trajectory the zygote will follow" (Goodwin, 1994, p.102). In other words, genes set parameters for the self-organizing systems that generate phenotypic forms.[17]

Footnote 17: Given this view of genes, Darwinian selection becomes not a generator of phenotypic forms, but a mechanism which searches the parameter-spaces of the self-organizing morphogenetic systems that are, in actual fact, responsible for generating those forms. Thus selection plays the role of preserving those dynamic life-cycles that are adaptively stable in certain environments. For some philosophical reflections on the relationship between self-organization, natural selection, and evolutionary functionalist thinking in biology and cognitive science, see Wheeler (1997).

The view of genes proposed by Goodwin seems (at least to us) to invite a kind of weak representational reading. It seems not only permissible, but illuminating, to conceptualize genes as encoding developmental parameters. The timely demise of the strongly instructionist developmental program does not undermine this weaker thesis. However, we have not as yet reached a fully satisfactory place to come to rest, because, ideally, one would want to say not merely that genes encode developmental parameters, but that, by encoding developmental parameters, genes encode for phenotypic traits. How might this extension be secured? The key move is suggested by the following remarks from Varela et al.:

For many years biologists considered protein sequences as being instructions coded in the DNA. It is clear, however, that DNA triplets are capable of predictably specifying an aminoacid in a protein if and only if they are embedded in the cell's metabolism, that is in the thousands of enzymatic regulations in a complex chemical network. It is only because of the emergent regularities of such a network as a whole that we can bracket out this metabolic background and thus treat triplets as codes for aminoacids. (Varela et al., 1991, p.101)

The message here is that to make even the more limited claim that DNA encodes for amino acids during protein synthesis (a process which we discuss later in this paper), one has to assume certain regularities of the dynamic context in which gene-activities take place, that is, one has to treat the cell's metabolism as a stable ecological backdrop against which DNA operates. To make the stronger claim that genes encode for specific phenotypic traits, one has to extend this assumption of a stable ecological context to the self-organizing processes at work in the appropriate morphogenetic field, and to certain crucial variables in the organism's environment. So any claim that a gene, or set of genes, encodes (weakly) for a specific phenotypic trait depends on the assumptions (a) that the principles of morphogenetic self-organization which are generic to the organism in question will operate as normal, and (b) that the expected values of the relevant variables in the developmental environment will obtain.[18] What this means is that all genetic encodings are context-sensitive, being relative to particular morphogenetic fields and to particular environments. But it also means that it is legitimate to treat certain genes as coding for a particular phenotypic trait, just so long as the presence of those genes can be seen to yield, in a rich, normal ecological setting, that very phenotypic outcome.

Footnote 18: Environmental features can, like genes, be treated as parameterizing the developmental dynamical system. Thus they co-determine, along with the relevant genes, exactly which possible trajectory of that system will finally be traversed by the developing organism.

At a minimum, it now seems that it is possible to embrace all the important insights concerning dynamical complexity, self-organization, and environmental interaction that flow from a recognition of causal spread in developmental systems, and yet to maintain a non-trivial, robust, empirically useful version of the basic idea that genes encode for phenotypic forms. This suggests that one might be able to turn the same trick for internal representations, as they figure in cognitive-scientific explanations of on-line intelligent behaviour, simply by working out the idea that the way in which neural states do their representing is roughly the same as the way in which genes code for phenotypic forms. As we have seen, genes do not fully specify the final phenotypic form. They operate within a normal developmental context of other causally active (biochemical and environmental) states and processes. However, that does not prevent genes from being representational in nature. By encoding certain parameters, genes stabilize the complex developmental dynamics that, in interaction with the environment, yield specific
phenotypic forms. A genic account of internal representations would suggest a similar line of thought: Neural states which are implicated in the production of on-line intelligent behaviour do not fully specify external states of affairs or behaviours. They operate within an assumed, normal ecological context of other causally active (bodily and environmental) states and processes. However, that does not preclude our treating the neural states in question as representations. By encoding certain parameters, those states stabilize the complex neural and bodily dynamics that, in interaction with the environment, yield appropriate behavioural outcomes.[19]

Footnote 19: The idea that neural states contribute to the generation of behaviour by acting as parameters for self-organizing dynamical systems is also at work in Kelso's notion of `specific parametric influences' (Kelso, 1995) and in Keijzer's concept of `internal control parameters' (Keijzer, manuscript). However, both Kelso and Keijzer are essentially hostile to representational interpretations of such states. In our view, this is because both tend to cash out the notion of a program in strong-instructionist terms. In Kelso's case, this is suggested by the fact that he cannot reconcile his Goodwin-esque view of genes as "parametric constraints" with any notion of a genetic program (see Kelso, 1995, p.183, in particular, the quotation reproduced earlier in this paper). And Keijzer (on the basis of his own analysis of the similarities between biological development and behaviour) argues that a non-trivial representationalism is committed to conceptualizing behaviour "as the execution of a pre-programmed set of instructions which will lead to the attainment of a pre-defined goal-state" (Keijzer, manuscript, pp.1-2). This position is set up as the only worthwhile alternative to the trivial view that "everything within the nervous system which covaries with external factors has to be a representation" (op. cit., p.21). In contrast, we think that the possibility exists for a weak notion of representation to be carved out somewhere between these two extremes, and the principled set of conditions for genic representation that we identify later in this paper constitutes our attempt to actualize that possibility.

This strategy for defending the use of representational explanations in the face of large-scale causal spread incurs, however, a cost. For it seems to require an important shift at the very heart of cognitive-scientific explanation. The vast majority of cognitive scientists have taken their goal to be simply to identify the representational structures and computational (representation-manipulating) processes whose realization and activity in the nervous system explains intelligent behaviour, thought, and cognition. Thus cognitive science has, in general, concerned itself exclusively with strictly agent-internal, and ultimately neural, phenomena. But the genic account seems to demand that cognitive-scientific investigations of the causal processes underlying on-line intelligent behaviour should factor in more than merely neural states and processes. Moreover, notice that although, on the account of biological development we favour, genes retain their status both as genuine encodings and as important causal factors in the developmental mechanisms underlying the building of phenotypes, they emerge as in no way causally privileged aspects of those mechanisms. If, in addition, it turned out that explanatory spread is indeed entailed by causal spread (a possibility which we mooted earlier), then, from the fact of causal spread, one could immediately draw the further and more radical conclusion that whilst genes can rightly be said to encode for phenotypic traits, they should not, in any way, be given explanatory privilege in a proper understanding of biological development. So if neural states are, like genes, just one factor in a complex causal web whose combined action yields the target outcome, and if causal spread implies explanatory spread, then why should neural states enjoy any privileged explanatory role in cognitive-scientific thinking? Time, then, to reconsider the inference from causal spread to explanatory spread itself.

5 Representation, Explanation and Non-Communicative Content

Let's begin by taking a step back. Consider the two most extreme positions:

1. that the mere presence of causal spread provides sufficient grounds for rejecting the claim that contributing neural states should be viewed as representations or as instructions;
2. that, in line with the genetic analogy, it is legitimate to view the inner states or processesas at least weakly representational just so long as they yield, in a rich, normal ecologicalsetting, the kind of outcome appropriate to the contentful gloss.Neither of these extreme views can be correct. To see why, consider a pair of simple cases.Case 1: A standard LISP program.20 A standard LISP program for sorting a set of referencesinto alpha order might comprise, let's say, 30 lines of code. These lines of code will include itemssuch as (cons d (abc)) where CONS names a function that takes an item (in this case `d') andappends it to a list (in this case `abc'). We think it is fair to assume that the 30 lines of code,in standard usage, is thought to constitute a speci�cation of how to proceed so as to solve theproblem. In fact, the 30 lines surely constitute a bona �de algorithm for the task. But thealgorithm or speci�cation is e�ective only in the context of a machine set-up to compile LISP.The operation of the function CONS is not de�ned as part of what is intuitively the program.It is part of the backdrop against which the program is expected to do its stu�. It seems, then,that even the most mundane and paradigmatic examples of programs, recipes and algorithmsalready display a good amount of causal spread. The mere fact that a speci�cation works onlyagainst an assumed backdrop of other factors and forces thus does nothing directly to undermineits status as a speci�cation or set of instructions.Case 2: The Lonesome Weight. Consider a trained-up connectionist network that solves acertain problem (e.g. distinguishing triangles from rectangles). And assume that the solution,as normally understood, involves the use of some distributed representations at the hidden unitlayer. 
In line with the genic analogy, we might now argue that a single weighted connection (chosen from the many that cooperatively generate the hidden layer encoding) constitutes a specification (in the weak, genic sense) of how to solve the discrimination problem. For the action of that weighted connection, in the usual ecological context provided by the other weighted connections, is clearly sufficient to determine a solution to the problem. But this (to be blunt) is silly: the single weighted connection is not usefully seen as any kind of specification of how to solve the larger problem.

In short, neither of the extreme positions scouted earlier can be correct. Causal spread cannot directly undermine the claim that certain factors constitute a kind of coded set of instructions (i.e., a body of problem-solving representations) for solving a problem. But neither can we allow the appeal to ecological context to run wild, as it seems to do in the case of the lonesome weight. Clearly, we must introduce some additional factors if we are to make our alternative vision of genetic representationalism non-trivial and secure.

Here is another problem. We seem to have suggested both that neural, bodily and environmental factors may often be on an explanatory par as regards the causal determination of intelligent behaviour, and that various inner states may properly be seen as (weakly) representational, that is, as sets of coded instructions whose action contributes to solving the problem. But there is at least a prima facie tension between these two claims. If the inner states really constitute instructions, why isn't that enough to warrant the ascription of a special explanatory role? The recipe is a special element in the cake-baking process, even though we all agree that the presence of an operative oven is an essential contributing factor. If, instead, we argue that the other factors are instruction sets too (thus restoring the parity), we effectively trivialize our notion of weak representation.
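The lonesome-weight worry can be made concrete with a toy network. This is our own construction, not the network from the text: we use XOR in place of shape discrimination, with hand-set rather than trained weights, and single out one output-layer weight as the `lonesome' one.

```python
# A toy stand-in for the trained-up network: hand-set weights computing XOR.
# The "lonesome weight" is the output-layer weight on the second hidden unit.

def step(x):
    # Simple threshold activation.
    return 1 if x > 0 else 0

def net(x1, x2, lone_weight):
    # All the other weights are held fixed: the "ecological context".
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # OR-like hidden unit
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # AND-like hidden unit
    return step(1.0 * h1 + lone_weight * h2 - 0.5)

cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# In that context, the lone weight's value suffices to fix a correct solution...
assert all(net(a, b, -1.0) == want for (a, b), want in cases)
# ...but it is no specification of the task: vary it alone and the net fails,
# and exactly the same could be said of every other weight.
assert not all(net(a, b, 1.0) == want for (a, b), want in cases)
```

The sketch shows why the `sufficient in context' reading proves too much: every weight in the net is sufficient-in-context in precisely the same sense.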
Our own story, it seems, demands that the inner states often play a special role (that of instructions) in determining behaviour.

The correct thing to say, we think, is that causal co-determination does not imply explanatory parity. The truth behind the suggestions of parity is that the other factors matter and, moreover, that they can matter in ways that we have traditionally thought only the representational factors mattered. This idea should become clearer later on. For the moment, let's continue to concentrate on the ways in which the representational and non-representational contributions may differ (as differ they must, on pain of triviality).

20 This example appears, in a different form, in Clark (forthcoming-b).

One tempting, but ultimately insufficient move is to appeal directly to a kind of teleological asymmetry.21 Once again, genes provide our model. DNA structures have often been shaped and molded, by evolution, precisely so as to `tip the balance' in favour of certain adaptively desirable outcomes. The same is true of neural states, only here the shaping and molding might be achieved not only by evolution, but also by learning. Genes or neural states might thus constitute instructions insofar as they (unlike much of the ecological backdrop) have been selected in order that such-and-such a response or behaviour should occur. The cake recipe, ditto, exists in order that cakes ensue, whereas the oven and the force of gravity, we may assume, do not. Or consider another analogy. A glider pilot, in plotting a course from A to B, may well rely on the presence of certain kinds of local structure and `helping hands' to achieve her goal. She may, for example, factor in the reliable presence of certain updrafts and thermals. Behavioural success thus depends on a combination of her plan and these assumed factors and forces. There is clearly a causal-spread-style, multi-factor story to be told. But for all that, the pilot remains the pilot. She is a special element in the causal web insofar as her actions are selected so as to reach the goal, even though, in so doing, various other factors and forces, whose shapes were not thus selected, are co-opted into service.

The trouble is, this story still does not separate the truly representational wheat from the contentless chaff. Consider a robot arm that must reach out and grip a part.
The multi-factor story hereabouts might include reference to (a) the force of gravity, (b) a spring-like assembly in the arm and (c) a set of instructions for reaching (in that ecological context). The appeal to teleology distinguishes the force of gravity from the rest. But it leaves the spring-like assembly and the putative instruction set on a par. For both may have been selected precisely so as to enable skilled reaching. (Similarly, we saw earlier how sensor placement may be selected so as to simplify certain kinds of problem-solving.)

The most we can say, perhaps, is that the brain and central nervous system constitute an especially potent site of teleological plasticity, since they are resources in which minor and cheap-to-implement changes can make a major difference to behaviour: such, in general, is the virtue of programmable software over gross mechanical solutions. But even if it is allowed that the brain (and central nervous system) thus constitute something like `primary loci of adaptive plasticity', it still remains unclear why we should depict the neural states as representational. A push in the back may, after all, be a cheap and easy-to-implement means of tipping an ecological balance in favour of a certain outcome (a cliff-face plummeting scenario, for example). But the push is not thereby revealed as a context-bound instruction to plummet. It is just an applied force that tips the scales in a certain (downwards) direction.

The frog at the bottom of the discursive beer glass is thus revealed. When is an element of an extended causal nexus an internally represented instruction rather than a mere force selected so as to tip a delicate ecological balance in an adaptively valuable direction?22

The trouble, at root, is that cognitive science, and philosophy, still lack a decent definition of internal representation.
This deficit, as Robert Cummins nicely points out, is often obscured by the fact that there are several theories on offer whose job, as Cummins puts it, is to "pin the meaning on the symbol" (Cummins, 1996, p.66). Such theories try to show why a given symbol should be associated with one meaning rather than another, but they don't tell us why, or when, we should buy into the business of symbols and meanings at all. They don't tell us, in short, what internal representation is.

21 What mileage there is in this idea is, we think, pretty much exhausted in Clark (forthcoming-b). The present treatment aims to distinguish the representational contribution in a more secure way, and to present a more positive alternative to the non-representationalist party line.

22 This genuine difficulty in maintaining a distinction between internal representations and other forces in the physical body is what appears to be driving Webb's claim (mentioned earlier) that certain variables in her cricket-robot, variables which correspond to sensory or motor signals, should not be thought of as representations, because their role in connecting sensors to motors is, in effect, equivalent to the role of the gears in connecting the motors to the wheels.

At first glance, of course, the answer is obvious. An internal representation is an inner state whose job is to stand for some other (usually external) state of affairs. This answer works well enough for public representational schemes such as language. Here, the speakers of the language use the symbols to stand for other things. The inner economy, however, is different in one crucial respect: it must operate by means of the direct causal properties of the states and processes. There is no room for inner homunculi that literally understand such-and-such a state as standing-in for such-and-such a state of affairs. Instead, what happens happens, with no process of literal interpretation intervening.

What is needed, then, is a way of understanding inner states as stand-ins that does not commit a kind of homuncular fallacy. This is where a certain notion of strong (i.e. non-genic) representation seems to answer the call. The idea23 is that the requisite notion of standing-in is provided by the capacity of certain states and processes to figure in episodes of environmentally de-coupled `off-line' problem-solving. And what this requires, in turn, is that the states and processes constitute a kind of model of the target domain, viz an inner arena in which the unfolding of states marches in step with the way things unfold in the wider world.
The notion of strong internal representation is thus the notion of an item or items that figure in an inner model capable of supporting a kind of vicarious problem-solving: one in which moves can be tried out before committing the real flesh and blood to action (see, e.g., Campbell (1974), and Dennett's (1995) notion of a `Popperian Agent'). The adaptive role of `standing in for' can thus be cashed out in terms that do not invoke an inner understander, but which make clear contact with the primary (external, public) usage nonetheless.

Genic representation, however, is meant to do duty in a variety of cases in which the capacity for off-line use is not present, viz cases in which the inner resources are delicately keyed so as to make maximal use of bodily and environmental factors and forces so as to promote ongoing adaptive success (e.g. the Franceschini maps). There are, fortunately, several features of the strong representational scenario that can still carry over to such cases, including

I. the presence of a kind of arbitrariness in which what matters is not the shape or form of the individual representations themselves, but rather their role as content-bearers;

II. the fact that the (putative) representations, although not actually understood (by the subsystems concerned) as content-bearers, are nonetheless consumed by other subsystems in a way which makes sense only if we depict the subsystems as requiring the kinds of information the content-ascribing glosses highlight;

III. the presence of an entire system of states whose combinatorics and/or dynamics are somehow appropriate to the target domain.

These three conditions seem to play, in addition, a clear role in many arguments that depict DNA as a kind of code, as will become clear if we stop for a moment to think about the details of protein synthesis.
Proteins are molecules of linearly strung amino acids which not only constitute much of the physical stuff of the organic body, but also orchestrate the chemical processes in the living cell that underlie development. Protein synthesis (the stringing together of various amino acids in various sequences, so as to manufacture a specific protein) is controlled by the organism's DNA, but only indirectly. Here's how it works.24

23 This is a condensed version of an argument developed at much greater length by Clark and Grush (submitted).

DNA molecules consist of two long strands of nucleotide molecules, molecules known as bases. In DNA there are four such bases: adenine (A), cytosine (C), guanine (G), and thymine (T). The first stage of protein synthesis involves a process known as transcription, in which the DNA acts as a kind of template for the manufacture of a different sort of molecule: RNA. Like molecules of DNA, molecules of RNA are chains of nucleotide bases, but this time uracil (U) replaces thymine (T) alongside A, C, and G. When protein synthesis begins, an enzyme called RNA polymerase sets about the task of building molecules of a particular sort of RNA, namely messenger RNA (mRNA). The manufacture of mRNA follows a set of rules according to which an A-molecule located at some specific position on the strand of DNA results in a U-molecule being located in the same position on the strand of mRNA, a T results in an A, a C results in a G, and a G results in a C.25

The second stage of protein synthesis, known as translation, is the process by which the mRNA molecules constructed during transcription result in the manufacture of specific proteins, with, typically, each individual mRNA molecule initiating the production of several different proteins. Located in the cytoplasm of the cell are protein-manufacturing sites called ribosomes, and molecules of another sort of RNA: transfer RNA (tRNA). Molecules of tRNA are single triplets of nucleotide bases attached to single amino acids. The system is set up such that each triplet of nucleotides on a strand of mRNA (each codon, as such triplets are known) either (a) determines that a particular amino acid will be produced (e.g., GCU results in the production of alanine), or (b) marks where the stretches of the mRNA chain that determine different proteins begin and end. What happens is that an mRNA molecule becomes attached to a ribosome, and then passes through it, one codon at a time.
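The transcription stage just described can be sketched directly from the base-pairing rules. This is our own illustrative fragment, not drawn from the biological sources the paper cites:

```python
# A minimal sketch of transcription: DNA template -> mRNA, via the
# base-pairing rules given above (A -> U, T -> A, C -> G, G -> C).

PAIRING = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(dna):
    # Build the mRNA strand position by position from the DNA template.
    return "".join(PAIRING[base] for base in dna)

print(transcribe("TACGCA"))  # AUGCGU
```

The mapping's table-like character is worth noting: nothing in the Python forces these particular pairings, just as (the argument below suggests) nothing in chemistry obviously forces the codon-to-amino-acid mappings.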
When a new codon moves into place, the ribosome (through what is, in effect, trial and error search) locates a molecule of tRNA which, according to the rules of base-pairing, features the opposite triplet to the mRNA codon (its anti-codon). The ribosome then strips off the amino acid from the other end of the tRNA molecule, and adds it to the protein which it is building. Stripped of its amino acid, the tRNA molecule floats off into the cytoplasm, only to be `recharged' with `the right' amino acid, thanks to the operation of aminoacyl-tRNA synthetase, an enzyme that recognizes the type of tRNA molecule concerned, locates the correct amino acid, and binds that amino acid to the vacant end of the tRNA molecule.

We can now see how the three conditions for genic representation (features I to III identified above) apply to the process by which protein synthesis is indirectly controlled by DNA. Although every instance of a particular mRNA codon, as transcribed from its DNA template, is believed to result in an instance of the same amino acid being added to an emerging protein, there is nothing in current biological knowledge to suggest a physical/chemical reason why the mappings could not have been set up differently. Thus it seems that the mappings from particular nucleotide triplets to particular amino acids are, in the sense we require, arbitrary (condition I). In the biological literature, the standard formulation of this idea is to say that the genetic code is thought to be arbitrary. However, from our perspective, that is not the way to put the point. As will become clear later, we understand arbitrariness to be a necessary condition on there being a code at all. Therefore, if the mapping in question were not arbitrary, there would be no pressure to think of the system in question as one of encodings.
Oversimplifying somewhat, the right question is not "is the genetic code arbitrary?", but rather "is there a genetic code?".

Next, notice that the process carried out by the machinery of ribosomes and tRNA is most illuminatingly understood as a process of decoding, in which that machinery is conceived as a (distributed) consumer sub-system which is sensitive to the information carried by the mRNA (condition II). (That is why, of course, the process is called translation.) Finally, it seems clear that there is a systematic relationship between the sequence of codons that makes up an individual mRNA molecule, and the resulting sequences of amino acids that make up the proteins generated from that mRNA molecule, sequences whose length is determined by the positioning of `punctuation' codons (condition III).26

24 The account that follows draws heavily on Hofstadter (1985), Hodson (1992), and Maynard Smith (1995).

25 These rules are known as base-pairing rules, because each of the four bases binds to a specific partner. As well as being crucial to transcription, base-pairing underlies the famous capacity of DNA molecules to make copies of themselves (i.e., to self-replicate).

Features I to III thus form, we believe, a coherent complex that pivots on the idea of arbitrary intervening states or processes whose systemic role is to act as the bearers of specific contents. The three features (arbitrariness, content-based consumption, and systematicity) seem nicely applicable to the genic cases, and offer at least a hint of some ways in which even weak internal representation involves more than the mere presence of a contributing force.

That said, the notion of content-based consumption still looms uncomfortably large in the story we have told. Uncomfortably, in that content-based consumption is one of those notions that seems clear and obvious until you try to cash it out. It is easy, of course, to cash it out when you are dealing with agents (like ourselves) who use systems of conventional symbols in which, for example, the word `dog' is clearly playing a role best understood by reference to content rather than form. Other words, like `cat', would clearly have done just as well. But, as we noted earlier, it is crucial in this debate that we not confuse the operation of internal representations with that of conventional, communicative codes (such as English or Spanish). For (as we all know) internal representations, unlike communicative ones, must secure their effects without first being understood.27 As Cummins puts it:

A system exploits an internal representation in the way a lock exploits a key: by being causally affected by its structure.
Anything less austere than this will undermine the explanatory interest of internal representations altogether. (Cummins, 1996, p.102)

In particular, Cummins cautions against being misled by the form of speech `X represents Y to Z', since "this makes representation look like a form of communication" (op cit. p.104). Arguments that insist that such-and-such an internal state cannot be a representation since it doesn't represent anything to the system itself (see, e.g., Beer, 1995; Brooks & Stein, 1993; Harvey, 1992) seem to trade on this very confusion. For example, Brooks and Stein (1993, p.2) caution that the firing of a particular neuron, even if it is non-accidentally associated with, say, the presence of red objects, is not thereby revealed as a representation of redness for the system itself. Instead, it represents redness to us, the external observers. Now this notion of representation clearly trades on the idea of a state that presents itself to an understanding system as having a certain content. Such a notion is substantially stronger than the standard cognitive-scientific use. A better way of carving the cake, in line with Cummins' suggestion, might thus be to say that although the firing of the neurons does not literally communicate anything to any understanding sub-system, the firing may still be functioning as an internal information-bearer and hence as a form of (non-communicative) representation. What is missing is any active appreciation of the content: but the content may be bringing about its effects nevertheless.

Much recent anti-representationalist sentiment is, we agree, rooted in this (mistaken) tendency to equate internal representations with elements in some kind of internal yet still basically communicative economy. But the mistake is not quite as straightforward as Cummins makes it appear.
For as soon as we abandon the close analogy with conventional communicative codes, the distinction between a mere internal force and an internal representation does indeed threaten to evaporate. If the inner state really brings about its effects the way a key opens a lock, then why not conceive the inner state simply as a force (one among many, in fact) that contributes to the bringing about of an effect? For example, once we recognize that internal representations cannot be depicted in essentially communicative terms, the most obvious difference between an instruction and an applied force simply fades away. For an instruction, we would like to say, can be disobeyed, whereas an applied force cannot. (It may sometimes fail to achieve its purpose, as when the push fails to send the victim over the cliff, but this surely does not amount to the victim's disobeying some enemy's command to fall!) This gap between the understood content and the indicated action is central to the ordinary (communication-based) notion of an encoded instruction: but it cannot, we think, be preserved in our understanding of any genuinely explanatory appeal to an inner code.

26 It is worth just noting that since DNA codes for the manufacture of the mRNA molecules, the tRNA molecules, the ribosomes, the synthetases and the polymerases, DNA codes for its own decoders.

27 We all know this, but it is terribly easy to pay mere lip service to the idea, especially when we think about the role of content in cognitive-scientific explanation.

The hope, then, is that despite these very real differences, the triple appeal to arbitrariness, consumption and systematicity may provide a robust and workable means of distinguishing the cases (force versus coded instruction), and one that works without committing the communicative fallacy. But even here we must proceed with caution. An element in a communicative language system is arbitrary because its meaning is fixed by convention: the very same item could easily have been used to mean something else. Similarly, coded instructions or representations look like items awaiting active understanding by another sub-agency. How can we unpack these ideas in a way that circumvents the tacit appeal to communicative role?

Consider a robot built by Christopher Longuet-Higgins and described and discussed by Johnson-Laird (1983) and, more recently, by Keijzer (submitted).
The robot avoids falling off the edge of tables, but it seems to have no table-edge detector. Instead, the main wheels drive two small wheels that hold a piece of sandpaper under the base plate, and it is set up so that the small wheels, relative to the sandpaper, echo the location of the real robot relative to the tabletop. Whenever a small wheel hits a ridge at the edge of the sandpaper, a circuit is closed and the robot turns. As a result, the small wheels and sandpaper model the big robot and tabletop. By ceding control to the inner model, the big robot avoids falling off the edge. Commenting on this set-up, Johnson-Laird writes that:

The small wheels and the piece of sandpaper are not intrinsically a model but they become a model in the robot because of their function as an arbitrarily selected symbolic notation that is used to register the position of the robot in the world. (1983, p.404, and quoted by Keijzer (submitted))

The idea we wish to draw out of this is that the inner elements are here counted as arbitrary because the relevant equivalence class of inner systems (the set of different systems that could play the same adaptive role) is fixed not by the first-order physical properties of the components (the thickness and weight of the sandpaper, etc.) but by their capacity, when appropriately organized, to act as the bearers of a certain (specifiable) body of information. These states bear that information insofar as they support a body of internal dynamics that is fit to track or model a specifiable set of external states of affairs. The inner states thus allow a certain kind of information (one that is not directly transduced by the robot's sensors) to guide the robot's behaviour. The absence of an additional inner element that understands the inner states as representations of the proximity of edges is unimportant. For the states do indeed carry that information, and they enable the information to be effective in guiding the behaviour of the larger system.
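The logic of the set-up can be sketched as a toy simulation. The geometry, units and numbers below are our own assumptions, not Longuet-Higgins' actual mechanism:

```python
# A toy simulation: the small wheels' position on the sandpaper mirrors the
# robot's position on the table, so the sandpaper ridge stands in for the
# table edge (assumed one-dimensional geometry and arbitrary units).

TABLE = 100        # table width
SANDPAPER = 10.0   # sandpaper width; the model's scale is SANDPAPER / TABLE

def simulate(steps):
    pos, direction = 50, +1                # start mid-table, moving right
    scale = SANDPAPER / TABLE
    for _ in range(steps):
        pos += direction
        model_pos = pos * scale            # the small wheels echo the robot
        if model_pos <= 0.0 or model_pos >= SANDPAPER:
            direction = -direction         # ridge hit: circuit closes, turn
    return pos

# The robot never leaves the table, though nothing senses the edge directly.
assert all(0 <= simulate(n) <= TABLE for n in (10, 137, 1000))
```

Note that the control loop consults only `model_pos`: the information about the table edge guides behaviour without any sub-system that `reads' it as being about the edge.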
The equivalence class of inner sub-systems that could play the same role includes anything that can bear that information and make it effective within the larger system.

The notion of an inner consumer or content-sensitive sub-system can now be glossed the same way. What matters is that the putative consumer process (the circuit closer in the robot above) is targeted on a certain inner state not because of the state's intrinsic physical properties, but because of the information that it happens to bear. The counterfactuals (which fix the equivalence classes) make it plain: had nature exploited some other inner state as an indicator of the table edge, then the consumer process would have been targeted on it instead (cf. Millikan, 1989).

Internal representations, we conclude, are best seen as inner states or processes whose functional role is to bear certain specifiable contents. And this functional role, in turn, is characterized by (i) the way we are forced to invoke information or content in order to specify the equivalence classes of inner organizations that could support the same kind of environmental activity and co-ordination (cf. Pylyshyn, 1986), (ii) the presence of consumer subsystems that exploit the inner states for their content, and (iii) the presence of some kind of combinatoric structure in the representational states. Whether this last condition is necessary or merely typical we leave open. The presence of combinatoric structure in the inner states contributes, we think, to the appearance of a full inner code as against an isolated inner sign or symbol. But notice that the inner model in the Longuet-Higgins robot (like the Franceschini maps) does not rely on classical combinatorics. Instead, it still uses a kind of structured inner model to support a variety of content-related encodings.
It is probably this latter capacity that matters more than the presence of re-combinable symbols per se.

The three features (arbitrariness, consumption-for-content, and broad systematicity) thus fit together like this: Arbitrariness (the fact that any suitable information-bearer could play the same adaptive role) is what forces us to depict certain states or processes in contentful terms; content-based consumption emerges as a kind of `inner de-coding' in which some specific information-bearer is enabled to guide behaviour; and systematicity is the economical icing on the informational cake, allowing structurally related inner states to guide different-but-related behaviours.28

What, finally, of that lonesome weight? Nothing we have said so far shows how to stop the kind of runaway appeal to ecological context that might allow us to, for example, treat a single weight in a connectionist network as encoding a context-assuming algorithm for solving a larger problem.

A quick response would be to say that, given the constraints developed in the previous section, the lone weight could not count as an internal representation at all (and hence, by extension, could not count as the vehicle of any kind of content). This quick response, however, evades the true thrust of the objection, which concerns the acceptability of a certain practice: the practice of treating certain contributing factors and forces as mere background against which an instruction set has been selected to act. Thus we may imagine taking a single line from a 30-line LISP program and arguing, in a similar vein, that the lonesome line is in fact a perfectly good algorithm or instruction set for solving the larger problem, albeit one that works only against the powerful backdrop of the other 29 lines of code! What stops the notion of genic encodings degenerating into this kind of triviality?

28 Notice that, on our account, mere causal co-variation is clearly not enough to warrant talk even of weak or genic internal representation: instead, what matters is why any co-variance exists, and how it is exploited in the context of some larger system. Indeed, if causal co-variation were treated not only as a necessary condition for genes to be representational, but as a sufficient condition too, then the distinction between the (apparently) representational and the (apparently) non-representational features of the underlying causal system would be fatally blurred. Take the biological case: genes are certainly not the only causal factors in the developmental system that might co-vary with phenotypic traits. If one could hold the genotype and the chemical/mechanical processes in the developing body constant, and vary the environment, then one would, it seems, find causal co-variations between environmental variables and phenotypic traits. Similarly, if one could hold the genotype and the environment constant, and vary the chemical/mechanical processes in the developing body, then one would, it seems, find causal co-variations between chemical/mechanical variables (such as calcium effects in the cytoplasm) and phenotypic traits. So, to avoid collapsing the distinction between the representational (the genic) and the non-representational (the remaining) aspects of the developmental system, a stronger criterion for representation-hood than causal co-variation is required.

The answer, we think, is nothing but good sense and explanatory hygiene. One important factor is that the story should isolate a plausible locus of co-ordinated encoding activity. That is to say, it should not focus on a fragment of a larger inner system that was itself the locus of selective activity. In the case of the lonesome weight, the adaptive processes that fixed the value of the lone weight also, and as part of the same adaptive episode, fixed the other weights and connections. In the case of the single line of LISP code, the process that yielded the target line also, and as part of the same adaptive episode, generated the other 29. Under such circumstances it is unacceptably arbitrary, and explanatorily unsatisfactory, to focus on the single weight or line of code.

A second important factor concerns the visible contribution of the target element to adaptive success. Thus suppose we encounter a behaviour that is rich and flexible and yet seems guided by a surprisingly minimal kind of inner instruction set. All the richness and flexibility, let's assume, results from the operation of other factors and forces (such as the spring-like action of the muscles and a benign operating environment). In this kind of case there would be little value in treating the inner contribution as a kind of context-exploiting algorithm for producing the behaviour. For all the richness and flexibility of the behaviour is, by hypothesis, here rooted in the operation of the other factors: and it is the richness and flexibility, surely, that the appeal to internal representations and instructions is meant to explain. And to be fair, much of the work of Scott Kelso, Thelen and Smith and others seems to us to have this flavour.
It often involves, for example, the use of rich inbuilt synergies (fixed couplings between systemic elements) whose presence permits very simple neural `commands' to generate complex and flexible behaviours. The effect, as Kelso (1995, p.38) notes, is like taking a four-wheeled vehicle and coupling the wheels so as to allow control via a single steering motion: a much simpler solution than issuing a whole set of subtly varying commands to control each wheel. The effect is also, Kelso suggests, like:

An army general saying `take hill eight' with the many subordinate layers of the military (subsystems) carrying out the executive command. (Kelso, 1995, p.38, reporting an image due to Peter Greene)

Now this image, it is true, can also be deployed within the more traditional representationalist camps, so as to suggest a kind of stack of information-processing homunculi (see, e.g., Dennett, 1978). But Kelso's point is that the subordinate layers need not be relying on information-processing at all. Instead, they may be exploiting inbuilt synergies, local context, internal self-organization and the whole gamut of possibilities described in section 2 above. (Goodwin's model of genetic control is exactly parallel in this respect.)

The general's command is, at best, a kind of limiting case of an encoded instruction set. It does not really specify a solution to the problem. It is more like a request that a solution be found and implemented! To depict it as a good solution specification in context (in the context, that is, of all the factors and forces that do the real work) is misleading to say the least. Nor does a single line of LISP code specify a solution in the context of the other 29.
By contrast, suppose the full 30 lines of code specify a whole sequence of moves that will solve a problem just so long as there are certain spring-like couplings between muscles, sensor placements, etc. Here, it seems explanatorily acceptable to depict those additional characteristics as the effective backdrop against which the encoded solution was selected to operate.
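Kelso's wheel-coupling analogy can be made concrete in a toy sketch (our own illustration; the function names and the particular coupling gains are invented for exposition and do not come from Kelso's text). A single `steer' command routed through a fixed synergy yields the same wheel configuration that would otherwise require four subtly varying per-wheel commands:

```python
# Toy illustration of a "synergy": a fixed coupling lets one simple
# executive command control several effectors at once.

def coupled_steer(angle):
    """One command; the synergy (a fixed coupling) maps it onto all
    four wheels. Front wheels follow the command; rear wheels
    counter-turn slightly (gains are arbitrary illustrative values)."""
    coupling = {"front_left": 1.0, "front_right": 1.0,
                "rear_left": -0.2, "rear_right": -0.2}
    return {wheel: gain * angle for wheel, gain in coupling.items()}

def uncoupled_steer(fl, fr, rl, rr):
    """The alternative: four independently issued wheel commands."""
    return {"front_left": fl, "front_right": fr,
            "rear_left": rl, "rear_right": rr}

# The same wheel configuration, reached two ways: one simple command
# plus a fixed coupling, versus four separate commands.
via_synergy = coupled_steer(10.0)
via_commands = uncoupled_steer(10.0, 10.0, -2.0, -2.0)
assert via_synergy == via_commands
```

The point of the sketch is only that the intelligence of the outcome lives as much in the fixed coupling as in the single command that triggers it.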

6 Conclusions: a Spectrum and a Puzzle

The notion of genic representation, it should now be clear, is not really needed at either end of an explanatory spectrum. At the most minimal (lonesome weight) end, the appeal to genic encoding is unhelpful as it obscures the true sources of flexibility and intelligent response. At the more maximal (full LISP program) end, the standard notion of a relatively full internal instruction set seems to apply. The home of genic representation lies in the presently uncharted, but potentially vast middle ground: the very ground that seems to be increasingly mandated in attempts to understand the role of real genes in the generation and explanation of biological form.

In the cognitive case, the nature and extent of this middle territory is still up for grabs. But the general lie of the landscape is clear: it is characterized by achievements that still rest fairly heavily on the use of internal information-bearing resources, but that rely on a variety of additional factors and forces to take up the substantial slack left by the information-based story. One final point bears emphasis. It is that in taking up this slack, the other factors and forces must reveal themselves as the unexpected root of a good deal of the kind of flexibility and subtlety normally associated with representation-based control. It is not enough, that is, to merely demonstrate that extra-neural factors and forces are essential to certain forms of behavioural success. Indeed, such an observation verges on the trivial. What matters, and what the best of the recent empirical work29 clearly demonstrates, is that the extra-neural factors and forces should be responsible for aspects of behaviour that might otherwise be expected to be under the direct control of some more heavily representational style of problem solution.30

Such a vision bequeaths a puzzle.
Should we say that in such cases what we learn is that much of the flexibility and richness normally attributed to cognitive processes turns out to be grounded in non-cognitive factors, thus effectively identifying the notion of a cognitive process with that of a representation-guided one? Or should we expand the notion of the cognitive to include a variety of non-representational elements?31 Restricting the notion of cognition to that which is representationally mediated preserves long-standing intuitions about the link between cognition and something more like reflective reason. But various non-representational factors and forces seem to play a major role in enabling the kinds of fluid, context-sensitive response that distinguish biological intelligence from mere computational simulacra (such as expert systems). The very idea of the cognitive domain is, we think, somewhat shakily poised between these two extremes. Our own view is that neither option is really mandated by our current understanding of the notion of a cognitive process, and that the choice will thus be dictated largely by fashion, chance and rhetoric. So, for today at least, we choose to remain agnostic on this issue.

More concretely, we hope to have shown both the urgency and the difficulty of balancing representation-based understanding against the spectre of causal spread. We are convinced that the bulk of daily on-line problem-solving is indeed characterized by a variety of complex and deeply interactive exchanges between brain, body, and world.
Doing justice to this delicate ecology without abandoning the explanatory gains of four decades of cognitive-scientific research is, we believe, a vital part of the current effort to re-position the mind more intimately in the web of body and world.

29 E.g., Webb's work on the cricket call recognition problem.

30 This complex mixture is nicely captured in Luis Rocha's recent notion of `emergent morphology', in which there "is not a linear mapping of coded messages to functional products, rather, messages encode dynamic structures that are left to their own devices as they self-organize" (Rocha, 1996, p.380).

31 For the former view, see Crane (1995, especially p.191). For the latter, see van Gelder and Port (1995).

References

Agre, P. (1988). The Dynamic Structure of Everyday Life. Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.

Beer, R. D. (1995). Computational and dynamical languages for autonomous agents. In (Port & van Gelder, 1995, pp. 121-47).

Boden, M. A. (Ed.). (1996). The Philosophy of Artificial Life. Oxford University Press, Oxford.

Brooks, R. A. (1991). Intelligence without reason. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pp. 569-95, San Mateo, California. Morgan Kaufmann.

Brooks, R. A., & Stein, L. A. (1993). Building brains for bodies. Artificial Intelligence Lab Report 1439, MIT.

Campbell, D. (1974). Evolutionary epistemology. In Schilpp, P. (Ed.), The Philosophy of Karl Popper, pp. 413-63. Open Court, La Salle, IL.

Clark, A. The dynamical challenge. Forthcoming-a, in Cognitive Science.

Clark, A. Twisted tales: Causal complexity, dynamics and cognitive scientific explanation. Forthcoming-b, in Minds and Machines.

Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. MIT Press / Bradford Books, Cambridge, Mass. and London, England.

Clark, A., & Grush, R. Towards a cognitive robotics. Submitted.

Clark, A., & Toribio, J. (1994). Doing without representing. Synthese, 101, 401-31.

Cliff, D., Husbands, P., Meyer, J.-A., & Wilson, S. W. (Eds.). (1994). From Animals to Animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior, Cambridge, Massachusetts. MIT Press / Bradford Books.

Cliff, D., & Noble, J. Knowledge-based vision and simple visual machines. Forthcoming in Philosophical Transactions of the Royal Society of London: Series B.

Crane, T. (1995). The Mechanical Mind. Penguin, London.

Cummins, R. (1996). Representations, Targets and Attitudes. MIT Press, Cambridge, Mass.

Delisi, C. (1988). The human genome project. American Scientist, 76, 488-93.

Dennett, D. (1978). Brainstorms. Harvester Press, Sussex.

Dennett, D. (1995). Darwin's Dangerous Idea: Evolution and the Meanings of Life. Penguin, London.

Franceschini, N., Pichon, J. M., & Blanes, C. (1992). From insect vision to robot vision. Philosophical Transactions of the Royal Society, series B, 337, 283-94.

Globus, G. (1995). The Postmodern Brain. John Benjamin, Philadelphia.

Goodwin, B. (1994). How the Leopard Changed its Spots: the Evolution of Complexity. Phoenix, London.

Harvey, I. (1992). Untimed and misrepresented: Connectionism and the computer metaphor. Cognitive Science Research Paper 245, University of Sussex. Reprinted in AISB Quarterly, 96, pp. 20-7, 1996.

Harvey, I., Husbands, P., & Cliff, D. (1994). Seeing the light: Artificial evolution, real vision. In (Cliff, Husbands, Meyer, & Wilson, 1994, pp. 392-401).

Hodson, A. (1992). Essential Genetics. Bloomsbury, London.

Hofstadter, D. R. (1985). The genetic code: Arbitrary? In Metamagical Themas: Questing for the Essence of Mind and Pattern, pp. 671-99. Penguin, London.

Husbands, P., Harvey, I., & Cliff, D. (1995). Circle in the round: State space attractors for evolved sighted robots. Robotics and Autonomous Systems, 15, 83-106.

Keijzer, F. The generation of behavior: On the function of representation in organism-environment dynamics. Submitted PhD thesis, University of Leiden.

Keijzer, F. Doing without representations which specify what to do. Unpublished manuscript.

Kelso, J. A. S. (1995). Dynamic Patterns. MIT Press / Bradford Books, Cambridge, Mass. and London, England.

Lewontin, R. (1983). The organism as the subject and object of evolution. Scientia, 118, 63-82.

Mataric, M. (1991). Navigating with a rat brain: a neurobiologically inspired model for robot spatial representation. In Meyer, J.-A., & Wilson, S. (Eds.), From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior, Cambridge, Massachusetts. MIT Press / Bradford Books.

Maturana, H. R., & Varela, F. J. (1987). The Tree of Knowledge: The Biological Roots of Human Understanding. New Science Library, Boston.

Maynard Smith, J. (1995). The Theory of Evolution. Cambridge University Press, Cambridge. Canto edition; first published 1958, second edition 1966, third edition 1975.

Millikan, R. (1989). Biosemantics. Journal of Philosophy, 86:6, 281-97.

Port, R., & van Gelder, T. (Eds.). (1995). Mind as Motion: Explorations in the Dynamics of Cognition. MIT Press / Bradford Books, Cambridge, Mass.

Pylyshyn, Z. (1986). Computation and Cognition. MIT Press, Cambridge, Mass.

Rocha, L. (1996). Eigenbehavior and symbols. Systems Research, 13:3, 371-8.

Smithers, T. (1995). Are autonomous agents information processing systems? In Steels, L., & Brooks, R. (Eds.), The Artificial Life Route to Artificial Intelligence, pp. 123-62. Lawrence Erlbaum, Hillsdale.

Thelen, E., & Smith, L. B. (1993). A Dynamic Systems Approach to the Development of Cognition and Action. MIT Press, Cambridge, Mass.

van Gelder, T. (1995). What might cognition be if not computation? Journal of Philosophy, XCII (7), 345-81.

van Gelder, T., & Port, R. (1995). It's about time: An overview of the dynamical approach to cognition. In (Port & van Gelder, 1995, pp. 1-43).

Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press, Cambridge, Mass. and London, England.

Webb, B. (1994). Robotic experiments in cricket phonotaxis. In (Cliff et al., 1994, pp. 45-54).

Webb, B. (1996). A cricket robot. Scientific American, 275:6, 62-7.

Wheeler, M. (1994). From activation to activity: Representation, computation, and the dynamics of neural network control systems. AISB Quarterly, 87, 36-42.

Wheeler, M. (1996). From robots to Rothko: the bringing forth of worlds. In (Boden, 1996), 209-36.

Wheeler, M. (1997). Cognition's coming home: The reunion of life and mind. In Husbands, P., & Harvey, I. (Eds.), The Fourth European Conference on Artificial Life, pp. 10-19, Cambridge, Mass. MIT Press / Bradford Books.
