Damasio and Fodor: A Module of Self?


Bachelor thesis

Robert Peeters

Tilburg University

Faculty of Humanities

Department of Philosophy

Thesis supervisor: M.A.M.M. Meijsing

Second assessor: A.J.P.W. Dooremalen


Table of Contents

Introduction: Divide and Conquer
Modularity of Mind
    Positioning the argument
        Figure 1: The Müller-Lyer illusion
    Subsidiary systems
    Modules
    Central Systems
Two Critiques
    Prinz
        Localisation, characteristic breakdowns
        Mandatory, fast
        Ontogenetically determined
        Domain specificity
        Inaccessibility and encapsulation
    Kok
        Re-entry
        Cross talk
The Feeling of What Happens
    Representations
    Proto-Self
    Core Self
    A Neural Substrate
        Figure 2: Locations of the insular and the cingulate cortex, represented in a 3D model of an individual human brain
    Emotion and Memory
        Figure 3: Location of the basal forebrain in the left hemisphere, indicated by the crosshairs in all three views. From left to right: a sagittal, coronal and an axial plane
Fodor and Damasio
    Proto-self as a low-level module
    Proto-self as a subsidiary system
    Core self and autobiographical self as central systems
Conclusion
References


Abstract: This paper seeks to answer the question whether Damasio's theory of

self can be construed so as to be in accordance with Fodor's classical notion of

modularity. In order to address this, Fodor's original modularity thesis is

summarised and briefly confronted with his later remarks on modularity. Two

reactions to Fodor's theory are explored. Damasio's theory of self is then summarised, and points of correspondence with Fodor's theory are argued for. The two theories are found to

be complementary in certain respects.

Introduction: Divide and Conquer

This paper looks at Jerry Fodor's 1983 book Modularity of Mind (MoM), and subsequently at

Antonio Damasio's three-part theory of the emergent property of self, as developed in The Feeling of

What Happens (Damasio, 1999). It asks whether these two theories, divided by a period of sixteen years

and considerable progress in their related fields of research, can be meaningfully juxtaposed and if so,

whether this is useful. Some related theories, including some of Fodor's own later thoughts on the

matter, are briefly expounded. Two critiques of Fodor's modularity thesis are treated to illustrate how

the fields of neuropsychology and philosophy of mind look back on MoM.

Any assay into the nature of mind is bound to encounter a number of problems. Many of these

are concerned with how exactly to pose the necessary questions, and in what light to interpret acquired

research results. One way to approach the mind is to accept that it is a system consisting of more or less

separate subsystems. This method of 'divide and conquer' has intuitive allure: No-one will be amazed

at the notion that a patient can lose his vision due to neurological causes, while, for instance, his ability

to comprehend language remains unaffected. This would argue for vision and language as separate

systems. It would stand to reason to continue looking for more subsystems within these two major

components. This basic method of 'carving nature at its joints' has long been employed, and has yielded

many results.

It is important to bear one distinction in mind: The above is an example of functional

decomposition, as distinguished from neurophysiological structure. Insofar as vision and language are

functionalities of the human mind, these functionalities are effected by wholly separate, hypothetical

systems, which ideally will be divisible into ever lower-level systems. This proposition, however, says

nothing about brain structures which would support such functionalities. It is one thing to speculate on


functional systems; it is quite another to successfully correlate these to exclusively dedicated physical

structures. Supposing that neurological defects can cause specific malfunctions to these functional

systems implies that the functional systems have physical form; the pertinent question is how this form

is organised.

In many cases, and particularly for the visual system, neural substrates have actually been

directly correlated to such functional systems. But this method of inquiry also delineates its own

confines. Jerry Fodor (1983, pp. 127-128), one of the main proponents of modularity of mind and one of the

two authors this paper will concentrate on, warns us:

“[T]he limits of modularity are also likely to be the limits of what we are going to be able to

understand about the mind, given anything like the theoretical apparatus currently available.

[…] The condition for successful science (in physics, by the way, as well as psychology) is

that nature should have joints to carve it at: relatively simple subsystems which can be

artificially isolated and which behave, in isolation, in something like the way that they behave

in situ.”

Fodor's warning mentions that the confines of inquiry are defined by “the theoretical apparatus

currently available”. Once entities have been established, many of them will seem to interact in a

reciprocal fashion. (Rose, 2006 pp.132-133) Suppose one has two entities A and B, and A influences B as much as

B influences A; would it be practical to speak of this system in terms of causal function? Biological

systems frequently employ such feedback loops, and it would seem appropriate to either regard this

entire system as a functional unit, or to cease to speak in terms of cause and effect. This raises the

question: At what level should one look for the functional unit? For a theory of functionally defined

modularity, this unit is the module, a hypothetical construct. Unfortunately, it is not always possible to

couple these units to apparent neurological structures. This was true when Fodor first published

Modularity of Mind, but even more so in more recent times.

Regarding the difference between functional decomposition and postulating neural structures,

one can take contrasting positions. On the one hand, it seems prudent to take a cautious stance on either

side, either keeping to what's known about neurophysiology, or strictly adhering to cognitive

psychology and merely positing hypothetical, functionally defined systems. On the other hand, a

multidisciplinary approach striving to combine the two does make for a very interesting venture. This is

what Kosslyn & Koenig called cognitive neuroscience in 1992, and what Patricia Churchland had in


mind in 1986 when she proposed a 'neurophilosophy' (Churchland, 1986). Kosslyn and Koenig imagine

cognitive neuroscience to be a triangle of the sciences studying behaviour, a computational approach

and brain anatomy. In this, 'computation' refers to models designed to generate output similar to

observed human behaviour. These models are fitted on the one side to observed behaviour, and on the

other to brain physiology. In a way, this is akin to the objective of this paper. Fodor's theory of

modularity was largely rooted in linguistics; language features prominently in the book. Damasio based

his findings mainly on observed cognitive and behavioural changes in patients with neurological

damage. The neuroscientific path Kosslyn & Koenig set out ultimately strives to successfully correlate

findings from such diverse research areas into a unified knowledge of mind and brain.

Fodor's book has been widely contested (Bennett, 1990; Bergeron, 2007), with varying degrees of success.

Especially the claim that modules are not only functionally definable, but also assignable to

anatomically separate areas within the brain has elicited a large body of criticism. (Robbins, 2009; Kok, 2004, chapter 5) It is by no means the intention to defend this premise here, but where functional modules can be

distinguished, facilitating neural substrates can and have often been identified. A standard example I

will regularly refer to is low-level visual perception. However, the isolation of a function and

subsequent correlation to a facilitating anatomical structure does not necessarily mean that this

structure functions in an isolated fashion. This will form a recurring theme in the critiques, especially

as Fodor attenuates this claim when he returns to the subject of modularity in his 1998 book In Critical

Condition. The main focus, however, is whether Damasio's theory of self and Fodor's original theory

from MoM can be aligned and whether the proposed conceptual framework would be useful.

Modularity of Mind

A large section of the book is actually employed in changing the reader’s mind on the

accomplishments of Franz Joseph Gall (1758-1828). Gall’s theory of localised mental capacities was,

Fodor contends, far ahead of its time. Unfortunately, he went on to develop his theory into what would

later be known as Phrenology. He postulated that specific brain regions, speculated to control partial

functions of behaviour and cognition, develop in such a way that they push the skull outward. In

corresponding places, they would cause bumps on the outside of the skull, by which the personality and

cognitive capacities of a person could be gauged. This is, of course, completely untrue, and it is the reason that Gall is now largely regarded as no more than a historical curiosity. But in Fodor's view, the way

Gall attributes localisation to function far outweighs his subsequent false suppositions of being able to


‘read bumps’ on one’s head. In his basic premise that brain functions can be attributed to distinct places

in the brain, Fodor argues, Gall was far ahead of his time, in spite of his more dubious assumptions.

Fodor believes that phrenology should be remembered for being a very early precursor to

localisation theories. In fact, Fodor seems to want to revive some semblance of Gall's notion in the

shape of his own theory of modularity; he is proud to belong to the same research tradition as Gall.

Positioning the argument

Fodor wrote his book expressly interposing his own views between Behaviourism on the one side,

and Cognitivism on the other. Fodor explains this clearly and concisely in his 1985 précis to

Modularity. In this, he explains that by the behaviourist's account, low-level processes are no more than

reflexive actions. Reflexes are dumb in two ways: They are non-inferential and they are informationally

encapsulated. In Fodor's own words, the latter term “has to do with the range of information that the

device consults in deciding what answers to provide” (Fodor 1983, pp.103).

In this précis to Modularity, Fodor explicitly stresses the positioning of his argument. He seeks

to wedge it in between the cognitivist and the behaviourist views.

Fodor does this, of course, in a style customary to him: by means of a satirical anecdote. He tells us of a

hypothetical ('handsome') cognitivist, who counters a ('wicked') behaviourist's view on the reflexive

nature of low-level processes by means of the Poverty of Stimulus argument.

“A poverty of the stimulus argument alleges that there is typically more information in a

perceptual response than there is in the proximal stimulus that prompts the response; hence

perceptual integration must somehow involve the contribution of information by the

perceiving organism. […] Modern Cognitivism starts with the use of Poverty of Stimulus

Arguments to show that perception is smart, hence that perceptual identification can't be

reduced to reflexive responding.” (Fodor 1985, pp.2)

Fodor, however, thinks this step need not lead to Cognitivism. What he calls 'input systems' are

certainly more than blind, dumb reflexes. Input systems are computational, but this does not entail that

low-level processes are fully continuous with higher level processes. If that were so, Fodor reasons,

perception should also be influenced by the beliefs of the perceiving subject. This can be demonstrated

by means of the Müller-Lyer illusion.


The horizontal lines on the left-hand side of Figure 1, below, appear to be different in length. However, their lengths are demonstrably equal. This is shown on the right-hand side of the figure. The curious thing is that even when the lines have been shown to be of the same length, the

illusion persists. Therefore, holding a belief does not necessarily change perception. To draw a minimal

conclusion from this: at least some perceptual processes are not continuous with at least some cognitive

processes.

Figure 1: The Müller-Lyer illusion.

Fodor takes this position a step further by stating that through placing the wrong emphasis, Cognitivism

paints a flawed picture: “by stressing the continuity of cognition with perception, it missed the

computational encapsulation of the latter”. (Fodor 1985, pp. 2)

So what exactly is it that Fodor postulates in Modularity of Mind? As one can infer from the

above opinion on Cognitivism, computational encapsulation is a large part of it. What he tries to prove

is that at least a part of the mind is built up of more or less autonomous units. This autonomy consists

in operating largely apart from other functional parts of the mind. This should also mean that specific

perceptual peculiarities can betray this modular arrangement of function. As we have just seen, the

Müller-Lyer illusion can be taken to argue for the existence of encapsulated, more or less autonomous

functions within visual perception. Fodor also mentions phoneme restoration in the case of the

McGurk effect, albeit only in a footnote. The effect appears when combined visual and auditory parts

of language messages don't conform: the classic experiment involves the visual image of a speaking

face in which the mouth movements do not match one or more phonemes of the accompanying voice. The surprising result is that the observer will experience a congruent language message, even to the extent of filling

in gaps or mismatches with hallucinatory experiences.

This effect is interesting to Fodor's modular view of the mind because it

seems to transgress one of Fodor's basic notions. The autonomous parts of the mind, Fodor argues, all


pertain to their own specific domains. The most likely way to interpret this statement is to imagine

systems involved in different kinds of visual or auditory recognition, sense of smell, taste etc. as bound

to their one specific mode of experience. Following this intuitive interpretation, the McGurk effect

would constitute a violation of this domain specificity. We need to distinguish between a modality of

experience, which can be sound, taste, smell, vision, &c, and the functional domain of a perceptual

system. Fodor warns us against this misconception: the McGurk effect does indeed expose cross-modal

linkages in sensory perception, but the functional domain to which it relates is not simply sound, vision

or the like; the functional domain is language. The way Fodor interprets this is that mechanisms

involved in phonetic analysis can be activated by either acoustic or visual stimuli. These mechanisms

themselves are part of a perceptual system that is concerned with language, or a part of language. The

question Fodor is trying to answer is whether these perceptual systems operate in a modular fashion.

Subsidiary systems

Fodor sets out to find modular elements of the mind by analysing cognitive function (“playing

cognitive science”, 1983, pp. 38). His opinion, as we have already seen, is that peripheral systems that deal

with perception are likely candidates. These input systems are responsible for low-level processing of

sensory information, and are part of an encompassing functional architecture. The modularity that

MoM advocates within this architecture is only partial; peripheral systems, suspected to be modular, are

contrasted with central systems. But before we can enter into this distinction, another type of system

should be identified.

The way Fodor approaches the mind is to regard cognitive function as information processing,

as syntactical manipulation. Information about distal objects from what he calls the 'surfaces of the

organism' (idem, pp.42), i.e. sensory input, is to be presented to the input module. In order to be

syntactically processed, however, this 'surface' information must be presented in a manner which is

comprehensible to this computational machinery. It must, in short, be presented as data in a syntactically suitable format.

To accommodate this function, Fodor posits a distinct class of devices. With some hesitation, he

names them 'transducers', although he uses both the terms 'transducer' and 'subsidiary system'.

“transducers are analog systems that take proximal stimulations onto more or less precisely

covarying neural signals. Mechanisms of transduction are thus contrasted with computational

mechanisms: whereas the latter may perform quite complicated, inference-like


transformations, the former are supposed – at least ideally – to preserve the informational

content of their inputs, altering only the format in which the information is displayed. […] Any

mechanism whose states covary with environmental ones can be thought of as registering

information about the world; and, given the satisfaction of certain further conditions, the

output of such systems can reasonably be thought of as representations of the environmental

states with which they covary.” (Fodor 1983, pp. 41)

The term 'subsidiary systems' seems to stand for the functional class of devices, intended to

complement the functional taxonomy of input- and central systems. A transducer, then, is the form this

type of system takes. “Any mechanism whose states covary with environmental ones” (idem, pp. 39) can be thought of as a transducer. Fodor emphatically stresses the importance of such systems. He views

cognitive science as built around an analogy between minds and Turing machines; they both work in

terms of formal symbol manipulations. The great difference between the two, according to Fodor, is

that Turing machines are closed circuits. The human mind, however, is in constant exchange with the

world around it. The importance of this exchange to the mind as an environmentally embedded

structure can hardly be overstated. If we liken the mind to a Turing machine so far as it is involved in

information processing leading to output through state changes, then in order for such a system to be in

contact with a distal world, perceptions of this world must be represented as information the system can

process. As such, transducers become indispensable: without them, such a thing as cognition (in a

cognitivistic sense, i.e. the thing that occurs in beings with a brain that operates by formal symbol

manipulation) could never occur.
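
Read computationally, the division of labour Fodor sketches here can be pictured with a small illustrative sketch of my own (the function names and values are invented, and Python is used merely as convenient notation): the transducer changes only the format of a proximal stimulus, preserving its informational content, while the input system performs an inference-like computation on the re-formatted signal.

    # Illustrative sketch only (my own analogy, not Fodor's): a 'transducer'
    # re-formats a proximal stimulus without adding or removing information,
    # while the 'input system' performs an inference-like computation on it.

    def transduce(light_intensity: float) -> dict:
        """Format change only: the output covaries exactly with the stimulus."""
        return {"kind": "luminance", "value": light_intensity}

    def input_system(signal: dict) -> str:
        """Computational step: draws a defeasible perceptual inference."""
        if signal["kind"] == "luminance" and signal["value"] > 0.5:
            return "bright surface ahead"   # an inference, not a copy of the input
        return "dim surface ahead"

    proximal_stimulus = 0.8                 # invented value: a normalised light intensity
    print(input_system(transduce(proximal_stimulus)))

The point of the sketch is only the division of labour: transduce adds nothing to the information it receives, whereas input_system goes beyond it.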

Modules

Given that Fodor only seeks to attribute modular organisation to systems on the fringes of the

mind, the title of the book could be viewed as being somewhat provocatively overstated. As said

before, Fodor’s thesis of modularity is in actuality confined to low-level systems. In a concise, but non-

encompassing definition, Fodor describes a module:

“A module is (inter alia) an informationally encapsulated computational system – an inference

making mechanism whose access to background information is constrained by general features

of cognitive architecture […] One can conceptualize a module as a special purpose computer

with a proprietary database, under the conditions that: (a) the operations that it performs have


access only to the information in its database […] and (b) at least some of the information that

is available to at least some cognitive processes is not available to the module. It is a main

thesis of modularity that perceptual integrations are typically performed by computational

systems that are informationally encapsulated in this sense. […] Although informational

encapsulation is an essential property of modular systems, they also tend to exhibit other

psychologically interesting properties. The notion of a module thus emerges as a sort of

'cluster concept', and the claim that perceptual processes are modularized implies that

wherever we look at the mechanisms that effect perceptual integration we see that this cluster

of properties tends to recur.” (Fodor 1985, pp.3)

The metaphor of 'higher' systems, with information streaming 'upward' or 'bottom-up', is arranged according

to relative complexity of function. For instance, a system dedicated to recognising the colour blue in

the visual field is considered as a low-level system. This can be opposed to, for instance, the complex

interaction of processes needed to remember where you parked your car or to consider the square root

of four. Modules, by Fodor's definitions, can be visualised as delivering information 'upwards', to

'higher' processes. As such, they are referred to as vertical systems. They are domain specific, as

opposed to horizontal systems which we will come to shortly.

“[Input systems] tend to be input driven, very fast, mandatory, superficial, encapsulated from

much of the organism's background knowledge, largely organized around bottom-to-top

information flow, largely innately specified (hence ontogenetically eccentric), and

characteristically associated with specific neuroanatomical mechanisms (sometimes even with

specific neuroanatomical loci). They tend to be domain specific, so that – to cite the classic

case – the computational systems that deal with the perception/production of language appear

to have not much in common with those that deal with, for example, the analysis of color or of

visual form” (Fodor 1985, pp.4)


In these two passages, Fodor attributes to his notion of modules a number of properties.

'Informational encapsulation' takes a primary position in this definition, and in MoM, Fodor takes this

to be the most important aspect of modularity. (Fodor 1983, pp.37, pp.71) By this term, Fodor means that the

module only has access to its own 'proprietary database'. In the Müller-Lyer illusion, the information

that the lines are of the same length is available somewhere in the mind, but not to the part of the mind that is used to process this particular image. The emphasis on being 'computational' and 'inference-

making' is used to distinguish modular input systems from mere reflexes.
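
The kind of encapsulation at work here can be made concrete in a small, purely illustrative sketch (the class names and bias values below are my own invention, not anything Fodor provides): the module computes an apparent length from its own proprietary data, and the central system's belief that the lines are equal is simply not among the inputs it consults.

    # Toy illustration of informational encapsulation, not a model of vision:
    # the module consults only its proprietary database; the belief held by the
    # central system ("the lines are equal") is invisible to it.

    class MuellerLyerModule:
        def __init__(self):
            # Proprietary database: depth-cue heuristics the module always applies.
            self._proprietary = {"outward_fins": 1.1, "inward_fins": 0.9}

        def perceived_length(self, true_length: float, fins: str) -> float:
            # Mandatory, belief-independent output based only on the proprietary data.
            return true_length * self._proprietary[fins]

    class CentralSystem:
        def __init__(self):
            self.beliefs = {"lines_are_equal": True}  # held centrally, inaccessible to the module

    module = MuellerLyerModule()
    central = CentralSystem()
    print(module.perceived_length(10.0, "outward"))   # 11.0: looks longer
    print(module.perceived_length(10.0, "inward"))    # 9.0: looks shorter
    print(central.beliefs["lines_are_equal"])         # True, yet the illusion persists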

The systems that deal with colour and visual form, which Fodor names in the passage quoted above, serve as

other examples of modular function.

The strata of the occipital cortex harbour distinct neural systems which respond selectively to

specific features of visual stimuli such as colour, outline, movement etc. They enable us to recognise

and assess shapes in our visual field. These structures are exceptional examples, in that they are some

of the few instances in which clearly defined functional properties are directly and unequivocally

related to specific neurological loci. They are seemingly ideal examples of classical Fodorean

modularity.

Central Systems

In contrast with the limited systems we have considered until now, some psychological

processes cut across cognitive domains, and presumably the systems that facilitate these functions have

more extensive access to information. They are what Fodor refers to as 'Central Systems'. The question

he asks himself is: Are they also modular? As stated above in his concise 'inter alia' definition of a

module, informational encapsulation is an important factor. This is the main reason why he arrives at a

negative conclusion. Central systems (note that these are not referred to as central modules) are apparatuses that accept input from a broad variety of sources. An interesting fact is that this

description allows for any number of central systems to exist side by side, at varying levels of the

information hierarchy. True Fodorean modules support a vertical information flow, have shallow

outputs and, as mentioned before, domain specific inputs. Properties such as these seem to be

specifically aimed at anything and everything but the so-called 'higher functions'; one could think that

modules are by definition peripheral.


Fodor's reason for this is what he hopes will one day come to be known as 'Fodor's First Law of

the Nonexistence of Cognitive Science':

“[T]he more global [...] a cognitive process is, the less anybody understands it.” (Fodor 1983, pp.107)

The problem, then, is not so much that modules are by definition peripheral, but that the

premise of MoM is to posit probable structures from the 'bottom up', based on what is understood about

cognitive functioning. Therefore, the further one moves away from this basis, the more speculative the enterprise necessarily becomes. Examples such as 'thought', 'problem solving' and 'fixation of belief' are quite

nebulous; they cannot be easily translated into factors that can be tested in a research design. From the

conceptual framework of MoM, they appear to be almost incomprehensible.

MoM advances a dichotomous view of the mind, in which only some simpler functions are

entrusted to largely localised, largely autonomous processing units. Almost by negation, this provides us with a working definition of what the 'higher' processes of the mind involve. As mentioned before,

Fodor’s notion of central systems allows for the possibility of a great many more or less autonomous

horizontal entities, each processing information largely independently of, but informationally involved

with, its peers. This is an appealing notion as it does away with the homuncular 'man inside the head'

problem.

This is a point at which Fodor distances his modularity theory from Gall's. According to Gall,

the mind was completely made up of modules, a view that later became known as 'massive modularity'. If

modules are, Fodor reasons, special-purpose processors equipped to deal with specific problems, then it

follows that there must be problems our mind is not capable of dealing with. Fodor does not go quite so

far; the existence of central systems ensures that the mind can apply itself to a wide variety of

problems. However, the scope of this is not necessarily endless: although Fodor believes our minds are

capable of a lot more than Gall's version of modularity entails, he explicitly acknowledges the

possibility that the mind is just not equipped to handle some problems. He even speculates that it is a

distinct possibility that the theory of how the world actually works might fall into this category. Any theory that introduces or entails such epistemic constraints holds the mind to be, in Fodor's terms, “epistemically bounded”.


Two Critiques

I would like to treat two distinct critiques of Fodor's modularity thesis in some detail. Given the

multidisciplinary nature of the research field, it seems only fitting to consider both a philosophical and

a neuropsychological take on the theory. For the philosophical approach, we will regard a number of

points Jesse Prinz makes in his 2006 paper 'Is the Mind Really Modular?' (Prinz, 2006), which is a direct

response to Modularity of Mind. The neuropsychological viewpoint will be taken from the introduction

to cognitive neuroscience written by Albert Kok in 2004. The objective of these examples is, first, to

show that MoM has been vigorously criticised from both these sides. Second, I would like to show that

these criticisms may sometimes have been rather too quick to reject MoM, and may have overlooked

some of the subtleties in Fodor's argumentation and choice of words.

One thing that both these, and many other responses to Modularity have done, is to reduce the

book to a number of characteristics that a proposed module should exhibit. Prinz' paper in particular attacks these characteristics on an issue-by-issue basis, without the context of the argument they are a

part of. I think this approach disregards an important aspect of Fodor's text. Fodor closes his book with

a number of short chapters on what the modules he suspects exist would probably look like, but these

are not his main objective in writing. Fodor opens Modularity with a fourfold mission statement. His

stated objectives are, respectively, to:

“(1) distinguish the general claim of faculty psychology from a particular version of that

claim, which I shall call modularity thesis; (2) enumerate some of the properties that modular

cognitive systems are likely to exhibit in virtue of their modularity; and (3) consider whether

it is possible to formulate any plausible hypothesis about which mental processes are likely to

be the modular ones […] (4) disentangling the faculty/modularity issues from what I'll call the

thesis of Epistemic Boundedness” (Fodor 1983, pp.1-2)

It is easy to mistake the nine characteristics Fodor mentions at the end of Modularity for hard

conditions by which to discern which parts of the mind can justifiably be called modular. A more

careful reading of Fodor will, I think, reveal that these traits are part of an argument that is considerably

less conclusive than that. Fodor is very careful to express his hesitancy in formulating the list of

characteristics; his main argument is that there are psychological faculties, and some of these can be

said to have a certain degree of autonomy. The characteristics are produced tentatively, as an attempt to

give form to a suspected organisational principle within the brain. It seems unlikely that Fodor would


go so far as to use these properties as conditions to be met in order to qualify as a 'module'. That being

said, these properties are given as part of his view on the suspected modular arrangement of some brain

functions; a critique wouldn't be very critical if it didn't inspect them carefully.

Prinz

Prinz identifies nine criteria from Modularity of Mind and groups them according to his counterarguments. The following subchapters strive to counter some of his arguments; the treatment is not complete, especially with regard to his use of examples. I do feel, however, that it paints a reasonably

detailed picture of the paper and its relation to Modularity of Mind.

Localisation, characteristic breakdowns

The first two properties on Prinz' list address the notion that once functions have been specified,

these functions can be said to occur at specific locations in the brain. Prinz argues that using fMRI to

correlate function focuses too much on 'hotspots' of activity in the brain. His opinion is that, by

focusing on hotspots, researchers often overlook regions of the brain that are moderately active during

task performance. A similar conclusion is drawn from lesion studies. When a very specific lesion leads

to an equally specific breakdown in a cognitive capacity, it would be wrong to conclude that this is the

location of this function. I think there is nothing wrong with this caution per se, but some of the

views Prinz attaches to this point do raise certain objections. He argues that localisation is at odds with

the idea that a functional system can be subserved by a distributed network. His counterargument is

that:

“[A] massively distributed artificial neural network can exhibit a selective deficit after a few

nodes are removed (simulating a focal lesion), even if those nodes were not the locus of the

capacity that is lost.” (Plaut, 1995, as cited in Prinz, 2006)

His second counterargument is based on differences in individual localisations: lesions to the

same location, Prinz reasons, can have different effects in different people. The problem with this, I

think, is that any model of 'areas' in the brain is, of necessity, a generalisation over a large population.

No two brains are exactly alike, and every brain differs to a varying degree from the 'standard' textbook

model. So much so that Hanna Damasio has taken it upon herself to publish an 'atlas' of the brain

(Damasio, 2005), expressly to address this issue. According to her, anatomical uniqueness is a problem to


anyone trained in the field of neurology when presented with individual cases. She identifies three main

dimensions along which individual brains deviate from the norm, and maps these extensively.

Apart from these issues, Fodor's original wording, I think, does not warrant such

a reaction. Prinz habitually uses the word 'localisation'. Fodor, however, does not name this

characteristic. The two relevant items on Fodor's list are “Input systems are associated with fixed neural

architecture” and “Input systems exhibit characteristic and specific breakdown patterns”, chapters 7

and 8 of Modularity of Mind, respectively. The main difference is that Fodor's wording leaves room for

physical systems to be distributed to a degree. Fodor asked himself in 1983 to what degree the neural

implementations of modular systems are either localised or equipotential (Fodor 1983, pp.37). Fodor presents

these as the two extremes of a scale. This way of presenting the problem does not exclude distributed

modular systems. On a kinder reading of Modularity, one can say it creates room for modular systems

to be neither strictly localised, nor strictly equipotential.

The level of complexity and wide interconnectedness of the brain at the neural level make it

unlikely for neatly delineated modules to be easily discernible. However, rather than a reality to be

uncovered, modules can be viewed as an organisational guiding principle that is more or less elusive. In

Fodor's own words, when he returns to the subject in In Critical Condition (1998):

“I think the empirical results over the last couple of decades make a not implausible case that

modular architecture is an appropriate idealization for some aspects of the organization of

mature cognition” (Fodor 1998, pp.141)

Mandatory, fast

These properties seem uninteresting to Prinz when taken in isolation. With regard to automaticity, or

the mandatory function of a system, Prinz argues that:

“[M]ost mental capacities seem to integrate automatic processes with processes that are

controlled. For example, we form syntactic trees automatically, but sentence production can be

controlled by deliberation. Likewise, we see colors automatically, but we can visually imagine

colors at will. The automatic/controlled condition cannot be used to distinguish systems in an

interesting way.” (Prinz 2006, pp.25)

Oliver Sacks and Robert Wasserman (as described in Kolb & Whishaw 2009, pp. 365) have written about one of

their patients, J.I., who became colour blind as a result of a concussion sustained in a car accident.


From the moment of the accident on, he was unable to distinguish colours. Curiously, his visual acuity

itself had in fact increased, especially at twilight. An even more curious fact is that J.I. could still

imagine and remember colours in that first period; two years later he was unable to do this, either.

If anything, I think this argues for the modular function of colour perception. Even though we

see colours automatically, as Prinz notes, we can visually imagine colours at will. However, this

function breaks down when the subsystem responsible for distinguishing colours is destroyed. This

particular function is attributed to cells in visual area V4. It is notable that

the capacity to imagine colours was lost over time, and not instantly. This would seem to indicate that

imagining colours, as an interaction between a central system and a peripheral module for colour perception, is more complicated than it might appear. What does stand out in this example is an autonomous

system responsible for colour perception.

Prinz' counterexample to Fodor's notion of a module's speed of function is the system

responsible for circadian rhythms. As a system, however, this is not very discrete. It involves, non-exhaustively, the reticular activating system, the pineal gland, the thalamus and hypothalamus, and a number of hormonal markers involved in the entrainment of this system by bodily and environmental cues such as light and warmth. (Widmaier et al. 2008, pp.249-250) This seems like an unlikely system to be regarded as

modular, in this context. What Fodor refers to with respect to speed of function usually pertains to

language. Prinz does address this, in the form of verb conjugation. Unfortunately, Prinz' treatment of

this example seems a little facetious:

“In addition, there are large variations in performance speed within any general system, such

as vision or language. Verb conjugation, for example, may depend on whether the verb in

question is regular or irregular, and whether the verb is frequent or infrequent. There is little

inclination to say that verb conjugation is more modular when it is accomplished more

quickly.” (Prinz 2006, pp.25)

Prinz wants to argue against speed of function as a component of modularity. The property of

functioning 'fast' was contrasted by Fodor with central processes, which are slow. In this case, Prinz

compares two processes which are both fast enough to be considered modular; it seems his suggestion,

that fast processes are more modular when they are even faster, is only made for comic effect.


Ontogenetically determined

Prinz continues with his treatment of Fodor in this vein. According to him, Fodor implies that

modules are ontogenetically determined. Prinz counters this by concluding that “many alleged modular

systems are learned, at least in part”. I think this overstates Fodor's claim of innateness. In Modularity,

Fodor mentions this intuition only in passing, in two paragraphs. This is the introduction he gives to

this suspected property of a module:

“The issues here are so very moot, and the available information is so fragmentary, that I offer

this point more as a hypothesis than a datum. There are, however, straws in the wind. There is

now a considerable body of findings about the ontogenetic sequencing of language

acquisition, and there are some data on the very early visual capacities of infants. These results

are compatible, so far, with the view that a great deal of the developmental course of the input

systems is endogenously determined.” (Fodor 1983, pp.100)

In light of this description, the harsh reaction it elicits from Prinz hardly seems justified.

Moreover, Prinz goes on to name “the senses” as having, of all suspected modular systems, the best

claim to being innate. (Prinz 2006, pp.26) It is not clear to me exactly which specific system he means by this.

“The senses” is a very diffuse term. Sensory information, in Fodor's view, arrives in the mind by means

of transducers. This information arrives in systems that Fodor suspects are modular. So when Prinz

points to 'the senses' as being innate, he employs a more general term that can be understood to include

input systems. Yet somehow, this should be an argument against Fodorean modularity.

Domain specificity

The main problem Prinz has with a claim of domain-specific modules lies in the knowledge that some

systems have a multimodal significance.

“Visually perceived stimuli also generate activity in cells that are bimodal. The very same

cells are used by the touch system and the auditory system. If we excluded rules and

representations that can be used for something other than deriving information from light, the

boundaries of the 'visual system' would shrink considerably. At the neural level of description,

it is possible that only isolated islands of cells would remain. This would be a strange way to

carve up the mind.” (Prinz 2006, pp.27)

The downsized version of the visual system that Prinz suggests has more in common with a


transducer in Fodor's topography of the mind; it does not describe anything like an input system. The

suggestion to eliminate all multimodal systems from the notion of an input system seems absurd, at

best, especially in light of his earlier claim regarding the distributed nature of neural networks: it would

stand to reason that similar functions in different modes of perception do not require separate,

redundant systems. I would even go so far as to claim that such a shared system makes a good

candidate for modularity. Its specific domain, however, would not be a mode of perception such as

vision or hearing; its domain would be the specific perceptual characteristic it responds to.

Inaccessibility and encapsulation

Prinz contends that informational encapsulation, the principle that a modular system only has

access to its own 'proprietary database' and nothing else, does not hold up. His opinion is a reaction to a

paper by Peter Carruthers, published in the same volume, Contemporary Debates in Cognitive Science.

Carruthers argues that, in order for one system to access information in another, the first system would

have to represent in some way how the second system works. This would lead to a combinatorial

explosion, which would be computationally intractable. Prinz disagrees, because this does not exclude

a partial form of encapsulation: some systems may be somewhat accessible to other systems. I tend to

agree, but I also think this is erroneous at a lower level: what Carruthers describes is a situation where

the first system (A) has access to information from the second system (B) and has some insight into what this information means. Consider a simple thermostat as system (A) and the room it's in as system (B). In order to function, the thermostat does not need any representation of the room apart from the single aspect of its temperature, which it compares to a set value. The computation would be “if

<temperature> is less than <set value>, output is <true>”, where the output switches a heater. Arguably,

my example is overly simple, as the room is not a computational system. I do believe, however, that the

comparison holds when transposed to two computational systems.
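
Written out, the thermostat example amounts to no more than the following (a minimal sketch; the set point and readings are invented): system A reads a single aspect of system B and compares it to a set value, without representing anything else about how B works.

    # Minimal sketch of the thermostat example: system A (the thermostat) reads
    # exactly one aspect of system B (the room) - its temperature - and needs no
    # further representation of how the room works.

    SET_POINT = 20.0  # degrees Celsius (illustrative value)

    def thermostat(room_temperature: float) -> bool:
        """True means 'switch the heater on': the temperature is below the set value."""
        return room_temperature < SET_POINT

    print(thermostat(18.5))  # True: heater on
    print(thermostat(21.0))  # False: heater off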

Although Prinz promises to regard inaccessibility, his argument in this chapter revolves around

encapsulation as an organisational principle. In Modularity, inaccessibility is described as the property

of a module wherein central systems do not have access to the module's computations. Prinz does not

make a clear distinction between modular encapsulation and modular inaccessibility to central systems.

One of his examples does touch on inaccessibility: he mentions top-down influence in this context.

One notable instance of this we will also encounter in our second critique, by Albert Kok: looking specifically for a Kodak film box makes everything yellow stand out. This, to my mind, can be


viewed as a valid example of central access to a peripheral module. It is also possible to explain this in

a neurologically informed manner. Unfortunately, this is not something Prinz does explicitly. We will

return to this example in the context of Kok's critique on MoM, for an explanation of re-entry and its

implications for modular function.

The main problem Prinz seems to have with Modularity of Mind is how to interpret the list of

properties. Prinz sees two distinct options: on the first, a system is modular to the extent that it exhibits the properties on the list. The other possible interpretation is that some properties are essential to modularity, while others are 'diagnostic', or ancillary. On both these readings, every property presents us with a dichotomous possibility; gradation within the individual properties is not considered an option.

Prinz arrives at this conclusion:

“I think the properties on Fodor's list can be used neither jointly nor individually to

circumscribe an interesting class of systems.” (Prinz 2006, pp.23)

I would like to suggest a third option to keep in mind, which I believe is more in line with the

objectives of Modularity: the 'criteria' Prinz ascribes to Fodor were originally intended as no more than

an effort to find a pattern in a field of research that was woefully incomplete. They were an attempt to

put into words a suspicion concerning the organisation of mental capacities. They are not the heart of

the argument, although they do represent the best attempt possible at the time to put this suspicion into

concrete terms. Using these positions as analytical tools is fine for argument's sake, but it seems hardly

surprising that they fall short of this function.

Kok

In the general introduction to cognitive neuroscience Albert Kok published in 2004, an entire

chapter is dedicated to localisation of function. Much of the chapter is devoted to explaining Fodor's

modularity thesis as he interprets it. He reduces a large part of it to the properties a Fodorean module

would have, but he also has some more general remarks.

The first point Kok makes about Fodor is an odd one. Early on in the book Kok states that

Fodor's thinking is an example of substance dualism. The reason he gives is that Fodor thinks that

central or higher processes form an “unbounded central processor”. (Kok 2004, pp.22) This is explained by

Kok as a non-spatial place of action, and directly compared to a 'ghost in the machine'. The only interpretation I can give to this is that it is an unfortunate misinterpretation of what Fodor calls


(epistemic) unboundedness. As explained at the end of the first chapter of this paper, unboundedness is

taken to mean that the human mind can, in theory, handle any type of problem it is confronted with; it

is epistemically unbounded.

Apart from this one point, Kok's treatment of Fodor is well grounded in neurophysiology, and

generally more structured than Prinz'. Kok chooses to reduce the list of properties to just four:

Fodorean modules, to Kok, are informationally encapsulated, domain specific, automatic/mandatory in their functioning, and innate. These properties, however, are not the main focus of his

criticisms. The main problem Kok has with Fodor is the proposed discontinuity between perception and

cognition. Kok does not advocate a division between peripheral and central systems, but seems to be

more inclined to believe in an ascending scale of functional complexity. The question of localisation is

also approached with a scale, running from naïve, or strict, localisation to complete equipotentiality. He

places Fodor very close to complete functional localisation. The alternative organisation he proposes is

of a hierarchical, but distributed network.

In general, Kok advocates a view of the mind where function is distributed, but connected in

hierarchical network relations. He proposes that lower-level functions are continuous with higher level

functions, and not as dichotomous as the perception-cognition distinction that Fodor maintains.

Of much more interest to us here are the more specific points he raises against Fodor. These are

two neurological phenomena that have a bearing on the notion of modularity.

Re-entry

Re-entry is defined as the

“[p]rocess by which cortical regions send projections back to regions from which they receive afferents” (Kolb & Whishaw 2009, pp.846/G-28)

Not only do these regions project back; these projections have been suggested to be able to modify input

signals before they are sent. (Zeki 1993, referenced in Kolb & Whishaw 2009, pp.263) Such connections would fit into Fodor's

hierarchical topology of the mind as direct influence from higher, or more central processes to lower,

more peripheral ones. A simple example of this is the effect Prinz already mentioned: looking for a

Kodak film box will automatically make anything brightly yellow stand out against other objects. Kok

mentions re-entry as a counterexample against informational encapsulation. Modularity specifically

argues that peripheral, perceptual systems have a bottom-to-top flow of information, whereas it is only

in central systems that information can flow 'any which way'. Especially within the visual system,


colour perception is a favourite example of modular function. It has long been correlated to area V4 in

humans. The counterexample of re-entry within this system would deprive any theory connecting

modular function to localisation of one of its most fitting examples.

Cross talk

Another neurological phenomenon that erodes this view of the mind is cross talk, or 'cross

chatter', as Kok calls it. Within the example of the visual system, the pathway leading to perception of

colour, direction of movement and orientation of stimuli has been thoroughly mapped as a hierarchical

structure of bottom-to-top flow of information: this pathway leads from the retinas, through the optic

chiasm and the lateral geniculate nucleus (LGN) to the primary visual cortex. From there, projections

lead to higher order visual areas. This is known as the geniculostriate pathway, as it leads through the

geniculate nuclei, which are part of the thalamus, and the striate cortex. This is an

alternate name for the primary visual cortex, taken from its striped appearance.

Along this pathway, different cellular organisations have been identified that respond

specifically to certain types of stimuli, and project this information to incrementally complex systems.

For instance, the M-type ganglion cells in the retina project to layers 1 and 2 of the LGN. These cells

are insensitive to differences in wavelength, but their function is faster than the P-type cells that project

to layers 3 – 6. These colour-blind projections continue higher up to cortical systems that deal with

movement, and moving stimuli that require a fast response.

Unfortunately, there is a problem with this hierarchical, systematic view of visual function. The

geniculostriate pathway does demonstrably function according to these principles. When stations along

its way are knocked out, however, cells that were taken to respond specifically to one type of stimulus

are shown to respond to other types of stimuli as well. (Dobkins & Albright 1998) The effect is not as strong as the

main function, but there is a definite cross talk between the systems that seemed so perfectly attuned to

one function. This seems to be a direct counterexample to Fodor's 'dedicated neural architecture': at the

neural level of a system which has been shown to function largely according to Fodor's principles,

neurons are not completely dedicated to one function.


The Feeling of What Happens

In The Feeling of What Happens (Damasio, 1999) Antonio Damasio sets out to address one of the most

enduring and intractable problems facing science and philosophy: how the brain gives rise to

consciousness. As an intermediate step, he presents a theory of self and its associated neural basis. It is

this theory that is of interest to us here. In the following chapter, we will take a cursory look at

Damasio's theory of self and its neurological underpinnings. It will be the proposition of this paper that

Fodor's and Damasio's theories can complement each other in some respects.

Within this context, let's look at how Fodor describes central systems, in the précis to

Modularity of Mind:

“[T]hese are everything that perception is not: slow, deep, global rather than local,

largely under voluntary (or, as one says 'executive') control, typically associated

with diffuse neurological structures, neither bottom-to-top nor top-to-bottom in

their modes of processing, but characterized by computations in which information

flows every which way. Above all, they are paradigmatically unencapsulated; the

higher the cognitive process, the more it turns on the integration of information

across superficially dissimilar domains. […] [W]hereas perceptual processes are

typically modularized – hence encapsulated, hence stupid in one of the ways that

reflexes are – the really 'smart', really 'higher' cognitive processes (thinking, for

example) are not modular and, in particular, not encapsulated. So Modularity

advocates a principled distinction between perception and cognition in contrast to

the usual Cognitivist claim for their continuity.” (Fodor 1985, pp.3)

So what does Modularity consider these 'higher' functions to be? Fodor specifies 'thought' and

'problem solving' as such. With regard to Damasio's project in The Feeling of What Happens, I propose

that the neural basis of self as Damasio describes it answers to the above description of a central

system.

Before we explore this neural theory of self, some preliminary contextualisation is in order. As

mentioned, the neural self is a step along the way to explaining how consciousness arises in a complex

neural system. The terminology he employs is focused on consciousness rather than on explaining the

self. Damasio defines consciousness in terms of organism and object, and in terms of the relationships

these two hold. (idem, pp. 133) When taken in isolation, this sounds a lot like subject-object reasoning. A

major difference, however, is that an organism is never presented with just one object. It is immersed in

22

an environment completely composed of such objects. Any of these stimuli, and combinations of them

can all serve as objects in such a relation. From this standpoint, an obvious next step would be to

discern neural structures representing these terms.

Representations

First of all, an object must be represented neurally in order to elicit a response within the

organism. Damasio does not go into much detail here. According to him,

“Extensive studies of perception, learning and memory, and language have given us a

workable idea of how the brain processes an object, in sensory and motor terms, and an idea of

how knowledge about an object can be stored in memory, categorized in conceptual and

linguistic terms, and retrieved in recall or recognition modes. The neurophysiological details

of these processes have not been worked out, but the contours of these problems are

understandable.” (Damasio 1999, pp.20)

It is not his main interest how the object is translated into a neural pattern; to him, it is this neural pattern that matters. Of course, the translation from distal object to neural pattern is of great interest to neuroscience in general, which has made considerable advances in identifying these neural systems. Much of Kok's critique of Fodor's modularity also addresses processes at this level. Damasio begins his

theory at the 'object proxy'. By this, he means the neural representation on the level of primary sensory

cortices. (idem) This may be problematic in light of the 'cross talk' objection mentioned in the previous

chapter, as this pertains to the communication of representations from subcortical to cortical sites: the

problem was that the projection from the subcortical LGN to cortical V1 is not a 1:1 copy, but that a

form of interaction occurs. As Damasio doesn't address this problem in The Feeling of What Happens,

within the confines of this search for a neural basis of the self we will consider the object proxy to be

uncomplicatedly true to the object it represents.

Proto-Self

The processes originating from these object proxies can be divided into two distinct types: on the one hand, perceptual processes that map distal objects in the world; on the other, processes based on representations of bodily states. These latter processes are fixed so as to continually map specific aspects of the body. They are, as Damasio calls them, the body's “captive audience” (idem, pp. 22). It

is between these two classes of representations that something special occurs:

“I propose that we become conscious when the organism's representation devices exhibit a specific kind of wordless knowledge – the knowledge that the organism's own state has been changed by an object” (idem, pp. 25)

This 'wordless knowledge' originates from mapping the correlations between changes in body

states and distal objects presented to the organism. In doing this, the class of objects representing the

body is regarded as a functional entity. This entity is what Damasio calls the 'proto-self':

“a coherent collection of neural patterns which map, moment by moment, the state of the

physical structure of the organism in its many dimensions […] We are not conscious of the

proto-self. Language is not part of the structure of the proto-self. The proto-self has no powers

of perception and holds no knowledge.” (Damasio 1999, pp. 154)

In contrast to the earlier description of perception, Damasio is very specific about the brain areas involved in the proto-self. The structures in which this collection of patterns arises include the brain-stem nuclei that regulate body states and map body signals, the hypothalamus, and the basal forebrain.

He also mentions the insular cortex, the secondary somatosensory cortices and the medial parietal

cortices.

Most of these structures are known to serve purposes of regulation in the body's homeodynamic

responses. Within the context of the proto-self, their function is to report their condition to other areas

of the brain. It is important to bear in mind that this communication works both ways: homeodynamic

systems not only report their function, their function is also influenced by other areas. As one concrete example of a neurally mediated body response loop, one can think of the HPA axis. This hypothalamic-pituitary-adrenal circuit controls hormone production. One of its functions is the 'stress response', a short-term reaction that is instigated neurally but effected by the adrenal glands, which release cortisol into the bloodstream. The response is then deactivated by a feedback loop. This is what Damasio would call a 'body loop'. The HPA axis in itself is regarded as a self-contained functional system. Nevertheless, its neural control is heavily involved in emotional processes. In some types of depression, for example, this process is not shut down normally, resulting in a continuously elevated blood cortisol level.
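The logic of such a body loop can be sketched in a few lines of code. The following is a minimal, purely illustrative simulation (Python; all parameter values are invented and the model is far cruder than any physiological account): a short stressor drives cortisol release, rising cortisol feeds back negatively on the drive, and weakening that feedback leaves the level elevated, loosely analogous to the depressive case just mentioned.

    # Minimal negative-feedback sketch of an HPA-like body loop.
    # Parameter values are invented; this is not a physiological model.

    def simulate(stress, feedback_gain=0.8, steps=50):
        baseline = 1.0
        cortisol = baseline
        levels = []
        for t in range(steps):
            # net hypothalamic drive: stressor input minus cortisol-mediated feedback
            drive = stress(t) - feedback_gain * (cortisol - baseline)
            cortisol += 0.2 * drive          # the blood level follows the net drive
            levels.append(cortisol)
        return levels

    acute_stressor = lambda t: 1.0 if t < 5 else 0.0     # brief stressful episode

    healthy = simulate(acute_stressor, feedback_gain=0.8)    # settles back toward baseline
    impaired = simulate(acute_stressor, feedback_gain=0.05)  # feedback too weak: stays elevated

    print(round(healthy[-1], 2), round(impaired[-1], 2))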

Core Self

This higher-order mapping of the relations between changes in body states and distal objects makes for the earliest origins of the self as distinguished from the world. To summarise: the organism, as a physical presence in the world, is represented by means of the neural patterns of the proto-self. From this seed, more complex forms of self ensue. These patterns are themselves mapped against the representations of distal objects the organism is presented with. This point is worth dwelling on: it implies that whatever object we perceive, our response encompasses not just the brain but the entire body.

When this higher order mapping occurs, the organism maps itself as contrasted with an object.

This new, more complex form of self is what Damasio calls 'core self'. In Damasio's theory, the neural

body image is key not only to self-awareness, but to awareness in general. Perception itself is only

possible against the background of the organism's state as represented in the proto-self. The core self

emerges from this second order mapping of the proto-self as its neural patterns are modulated by an

object. This process works in two ways, allowing for self-reference as well as for external reference:

“The presence of all these signals […] describes both the object as it looms toward the

organism and part of the reaction of the organism toward the object as the organism regulates

itself to maintain a satisfactory processing of the object. There is no such thing as pure

perception of an object within a sensory channel, for instance, vision. The concurrent changes

I have just described are not an optional accompaniment. […] [T]he images you form in your

mind always signal to the organism its own engagement with the business of making images

and evoke some emotional reactions. You simply cannot escape the affectation of your

organism, motor and emotional most of all, that is part and parcel to your mind.” (Damasio 1999, pp. 147)

This second-order mapping is the relationship between organism and object that Damasio holds

central to the definition of consciousness. It is important to note that this relationship is non-verbal,

although we are conscious of it. The process is ongoing: the proto-self is constantly being affected by a multitude of distal objects. It should be stressed that this is in no way similar to a Cartesian homunculus. There is no one part of the brain 'looking out' at perceived objects; the core self is a

property emerging from the affectation that every object-confrontation necessarily has on the organism

as represented in the proto-self.
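Purely as an organising device, the relationship just described can be rendered as two first-order maps and a second-order map over them. The sketch below (Python; the class names, fields and example values are invented here for illustration and are not Damasio's own terms beyond 'proto-self', 'object proxy' and 'second-order map') is only meant to exhibit the structure of the proposal: the second-order map records how the body-state map is changed in the presence of a given object proxy, and it is that record of change, rather than either map on its own, which corresponds to a moment of core self.

    # Structural sketch of first-order maps (proto-self, object proxy) and the
    # second-order map relating them. All names and values are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class ProtoSelf:
        """First-order map: the state of the body, moment by moment."""
        body_state: dict = field(default_factory=lambda: {"heart_rate": 60.0, "cortisol": 1.0})

        def perturbed_by(self, object_proxy):
            """The body state as modified by an encounter with an object."""
            new_state = dict(self.body_state)
            for quantity, delta in object_proxy["effect_on_body"].items():
                new_state[quantity] = new_state.get(quantity, 0.0) + delta
            return new_state

    def second_order_map(proto_self, object_proxy):
        """A core-self moment: the object together with the change it causes."""
        before = proto_self.body_state
        after = proto_self.perturbed_by(object_proxy)
        return {
            "object": object_proxy["label"],
            "change_in_body": {k: after[k] - before.get(k, 0.0) for k in after},
        }

    snake = {"label": "snake", "effect_on_body": {"heart_rate": 40.0, "cortisol": 0.6}}
    print(second_order_map(ProtoSelf(), snake))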

A Neural Substrate

These patterns are again associated with very specific loci in the brain: the insular cortex

(insula), cingulate gyrus, thalamus, hypothalamus and basal forebrain, the superior colliculi, as well as

the primary somatosensory cortex. These physical locations come with a resolute warning, similar to

what we encountered earlier: core self is a function of the interaction between various systems. These and other functions are not 'located' in any such areas:


“Phrenological thinking must be resisted at all cost” (Damasio 1999, pp. 154).

The reason for this caveat is that Damasio claims no more than that each of these areas is indispensable for the emergence of core self: a lesion in any of them prevents the process.

'Phrenological thinking' in this view refers to a stronger claim, namely that a particular anatomical part

of the brain is solely responsible for a specific function – a claim that is untenable given current

understanding of the way neural structures work. These neuroanatomical sites are, as a substrate,

indispensable to the properties described. However, considering the interconnected nature of neural nets, no single site within the brain is entirely self-contained.

Figure 2: Locations of the insular and the cingulate cortex, represented in a 3D model of an individual human brain.

Extensively naming the brain areas involved also serves a deliberate purpose here. Damasio aims to distinguish his theory as sharply as possible from the notion of a homunculus such as is traditionally associated with the Penfield areas, and also to distance himself from any whiff of localisation theory. Though it may seem paradoxical to name brain areas in order to preclude allegations of localisation, Damasio appears to want to be as specific as possible about what can actually be associated with neural loci, and equally specific that such associations of active areas with particular functions are all that can be made.

Emotion and Memory

Damasio goes on to build upon core self, combining it with the notion of memory. This leads

him to a similarly grounded hypothesis of autobiographical self. Given the means by which core self

emerges from perceived states of the organism, this 'autobiography' includes accounts of previous

encounters with objects and their effects on its states. It is important to note at this point that Damasio


makes a technical distinction between 'feelings' and 'emotions'. A 'feeling' is described as the “private,

mental experience of an emotion”. (Damasio 1999, pp. 42) This occurs at a level of awareness which

incorporates a notion of autobiographical self. 'Emotion', conversely, refers to the variations in bodily

states in their entirety, in response to a given circumstance.

Given that awareness only emerges from higher-order mappings of these states, such a response is fully autonomous and precedes any form of awareness. Nevertheless, these patterns are

registered and associated with situations and objects. Memories of patterns such as these are what give

rise to the autobiographical self: Memory adds awareness of a continuous self through time, but this

self is constituted of the same elements that make up emotions. In his own words:

“The pervasiveness of emotion in our development and subsequently

in our everyday experience connects virtually every object or situation

in our experience, by virtue of conditioning, to the fundamental values

of homeostatic regulation: reward and punishment; pleasure or pain;

approach or withdrawal; personal advantage or disadvantage; and, inevitably,

good (in the sense of survival) or evil (in the sense of death). Whether we like it or not, this is

the natural human condition.” (Damasio 1999, pp. 58)

As always, Damasio accompanies his theoretical abstractions with neurological underpinnings.

According to him, emotional responses are controlled by only a few sites, most of which are subcortical (Damasio 1999, pp. 60-62). The amygdalae are well known to be involved in fear responses, as well as in recognising fear in others. Damasio also names the hypothalamus, which is another likely candidate as it controls hormonally mediated physical responses. This brings us back to the example of the stress response, by means of the hypothalamic-pituitary-adrenal (HPA) axis, and other body loops that

feed back into the proto-self. What Damasio adds to this is that experiencing fear, sadness, happiness or

anger elicits a very specific pattern of activity in these and a few more sites. It is notable that some of

the other sites he names as being incorporated in emotional response patterns, namely the anterior cingulate and the basal forebrain, appear throughout The Feeling of What Happens and Descartes' Error as

structures which are imperative to the generation of a self and of consciousness.


Figure 3: Location of the basal forebrain in the left hemisphere, indicated by the crosshairs in all three views. From left to right: a sagittal, coronal and an axial plane.

Fodor and Damasio

In this description, Damasio's theory of proto-self and core/autobiographical self bears

resemblance to the pattern of input modules and central systems Fodor posits. In the following, the

proto-self is considered as an input system subserving the central system of core self. In order to test the plausibility of this reading, the attributes of modules named in Modularity are temporarily applied as conditions to

be met. By this description, proto-self would have to be domain specific, mandatory, and would allow

for only limited central access to what it processes. It would have to work fast, be informationally

encapsulated and have a 'shallow' output.

Proto-self as a low-level module

A number of these conditions seem easily met. Damasio's proto-self as explained is a collection

of neural structures that continually map a number of physical states of the organism. The most obvious

examples are the brain-stem and hypothalamic nuclei that measure blood pressure, hormone and

nutrient levels et cetera. The specific domain that is monitored by the areas Damasio mentions in

connection with the proto-self is body states. Information about these states is what comprises the

entire proto-self, thereby meeting the condition of domain specificity. Their operation is as mandatory as it is fast: no amount of persistent desire to change their operations can do so, short of self-inflicted physical injury. Their account of body states occurs in real time.

Does it allow only for limited access to its processes? The limitation Fodor had in mind was the depth to which horizontal functions can penetrate the inner workings of the module. This is not an issue for the proto-self: its sole function is to represent the information it acquires to other processes. In a situation where higher-order systems influence the workings of the structures involved in the proto-self,

such as in the hypothalamus, this influence is counted as not part of the proto-self. However, as Damasio indicated in the passage quoted earlier, we are not aware of its function. This arguably

constitutes a situation of limited access, in the sense that only a limited number of processes have direct

access to its workings. In the case of the brain-stem nuclei in the example above, the neural structures

associated with core self necessarily have access. The individual homeodynamic processes that the

nuclei are a part of also have access to the information gathered by their respective sensory nuclei. One could, however, challenge anyone to perceive their LDL blood level, or their current cortisol level, at any given moment. But these brain-stem nuclei are the least complicated example possible. The other structures Damasio mentions (e.g. the basal forebrain and insular cortex) hold a less straightforward relation to the body. Functions classically attributed to these areas (Kolb & Whishaw 2009, pp. 403) include appetitive responses and the management of basic bodily functions such as waking and hunger. The basal forebrain is also associated with the amygdalae in fear responses and fear-related disorders (idem, pp. 71, 403).

It is debatable to what extent the neural representation of the body that Damasio calls the proto-self actually processes information itself. The basal forebrain is heavily interconnected with the basal

ganglia, which could count as information processing systems, but Damasio does not make this

connection explicitly. The proto-self as a whole is presented in The Feeling of What Happens as a

representational system. Regarding a non-computational system as a Fodorean input system is taking a

liberty with the theory; a liberty Fodor himself might object to.

Proto-self as a subsidiary system

An input system according to Fodor must involve some further basic processing of perceptions.

This is something the proto-self does not seem to do. It doesn't recognise, discern or filter perceptions

the way fundamental input systems do; it merely represents. The 'computational' functions of the

structures associated by Damasio with the proto-self lie in maintaining the feedback systems of the body. These functions are not explicitly subsumed under the proto-self. However, we can still apply

Fodor's functional taxonomy. As Damasio stated, the changes to these states are 'part and parcel' to our

mind. Perhaps, then, it would be more productive to view the entire system of the proto-self as a subsidiary system. Within the scope of Damasio's theory of self, the proto-self's sole function appears to be a type of 'transduction' of physical conditions, presenting these conditions as syntactically manageable information to 'higher' systems. Once again, the distinction must be made between the neurological

systems involved, and their emergent property of proto-self, delivering information 'upward' to second-

order maps. Of these, only the latter is part of the functional system under consideration and it is only

this aspect which can be considered as a transducer to the mind. In this interpretation, the transducer

doesn't read from what Fodor calls the “surfaces of the organism” (Fodor 1983, pp. 43), i.e. sensory information, but from the body itself. These state changes are then correlated with the distal world in a continuous stream of subject/object relations.
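On this reading the proto-self occupies the place a transducer holds in Fodor's functional taxonomy, except that what it reads is the body rather than the organism's sensory surfaces. A minimal interface sketch (Python; the function names and the token format are invented stand-ins, not anything specified by Fodor or Damasio) may help to make the intended division of labour explicit.

    # Sketch of the proto-self read as a transducer over body states rather than
    # over sensory surfaces. Names and the token format are invented stand-ins.

    import time

    def read_body_state():
        """Stand-in for the physical conditions the proto-self continually maps."""
        return {"blood_pressure": 118.0, "cortisol": 1.1, "glucose": 5.2}

    def proto_self_transduce():
        """Deliver body conditions upward as syntactically manageable tokens."""
        timestamp = time.time()
        return [("BODY", quantity, value, timestamp)
                for quantity, value in read_body_state().items()]

    def second_order_process(body_tokens, object_label):
        """A 'higher' system relating the transduced body state to a distal object."""
        return {"object": object_label, "body_signals": body_tokens}

    print(second_order_process(proto_self_transduce(), "approaching dog"))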

Core self and autobiographical self as central systems

Central systems were largely defined by means of negating the characteristics attributed to input

systems as modules. By the same means as before, these attributes are, for the sake of this argument,

put to use as conditions to be met. As quoted earlier, central systems are

“slow, deep, global rather than local, largely under voluntary (or, as one says 'executive')

control, typically associated with diffuse neurological structures, neither bottom-to-top nor

top-to-bottom in their modes of processing, but characterized by computations in which

information flows every which way. Above all, they are paradigmatically unencapsulated; the

higher the cognitive process, the more it turns on the integration of information across

superficially dissimilar domains.”

So how much of this pertains to core self and to autobiographical self as Damasio envisions them? The

workings of a central system should be slow, deep and unencapsulated, according to Fodor. 'Slowness' might not be applicable in the sense in which Fodor means it. The processes that constitute core and

autobiographical self are ongoing, and real-time. These processes are different in nature from cognitive

functions such as 'problem solving', or 'thought', in that they do not have a clear starting point and

conclusion. The autobiographical self is, of course, slow in the sense that it gradually forms over time.

Depth and unencapsulation are two attributes that do apply to these higher selves. Both compile a wide variety of inputs into a single integrated whole. It must be remarked, however, that the singularity of these constituted wholes depends on the body itself being limited. Fodor's description makes it possible for many central systems to exist next to each other, and these multiple systems of self create several distinct ways to be self-aware. The sense that these pertain to one singular being derives from the singularity of the body they describe.


Central systems should be global rather than local, associated with diffuse neurological

structures. These two properties seem to hold true for most complex functions. This may not have been

as clear when Modularity was published as it is now, but many functions are constituted by a great

many loci working in close collaboration to produce one functional system (Uttal 2001). Both core and autobiographical self are associated with specific neural structures (Damasio 1999, pp. 192-194). Notwithstanding, the autobiographical self is significantly less specifically associated with neuroanatomical loci: what Damasio calls “convergence zones” and “image space” (idem, pp. 219-222). The interconnected and anatomically dispersed nature of functional systems' neural substrates may detract from these two properties' part in the demarcation of central systems, but they are met by all three of Damasio's types of self.

A problem arises when we consider that central processes should be under 'voluntary control'.

Perhaps some influence can be effected on the contents of core or autobiographical self; the processes

themselves are, in Fodor's terms, mandatory. Characteristic breakdowns occur when any of the systems

involved are damaged: all three types of self independently display characteristic breakdowns in the event of lesions (Damasio 1999, pp. 234-236), but voluntary control over their implementation occurs only at the level of the autobiographical self.
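For reference, the verdicts reached in the preceding subsections can be gathered into a single overview. The sketch below (Python, used here merely as a compact tabulation; the verdicts simply restate the assessments argued for above and carry no independent weight) lists each Fodorean attribute alongside how well Damasio's proto-self and higher selves appear to satisfy it.

    # Summary of the comparison: Fodor's attributes used as conditions, with the
    # verdicts argued for in the preceding subsections.

    verdicts = {
        "proto-self (as input system)": {
            "domain specific": "yes (body states only)",
            "mandatory": "yes",
            "fast": "yes (real time)",
            "limited central access": "arguably (we are unaware of its workings)",
            "processes information at all": "debatable (it represents rather than computes)",
        },
        "core and autobiographical self (as central systems)": {
            "slow": "not in Fodor's sense (ongoing; slow only as gradual formation)",
            "deep and unencapsulated": "yes",
            "global, diffuse neural substrate": "yes, though associated with specific structures",
            "voluntary control": "only at the autobiographical level",
            "free of characteristic breakdowns": "no (all three selves show them after lesions)",
        },
    }

    for system, conditions in verdicts.items():
        print(system)
        for condition, verdict in conditions.items():
            print(f"  {condition:35} {verdict}")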

Conclusion

As we've seen, none of Damasio's proto-, core or autobiographical selves complies exactly with the characteristics we've attributed to central systems, after Fodor's Modularity of Mind. In order to re-

apply Fodor's modularity theory to modern-day cognitive science, some aspects must be omitted or

charitably reinterpreted. As we've seen in the two critiques, charitable interpretation is not a popular

position.

Damasio's theory of self is not a clean fit as a Fodorean central system. However, it is important

to bear in mind that Fodor doesn't use these negative attributes to mark out specific systems: This is an

approach more usually employed by his critics. Fodor leaves central systems negatively defined. In

keeping with a 'charitable interpretation', one could conclude that a combination of systems such as

Damasio describes, both on a functional level and in its implementation in neural correlates, is as good a candidate as we can hope to find. The focus of central systems in Modularity is on tasks such

as decision making and problem solving. Damasio is looking for the seat of consciousness, and places

this at the centre of his research. This shift in focus seems to be largely why their central systems have


some dissimilar attributes. What's more, Modularity isn't so much occupied with speculating on central

systems; it is based on the thesis that what we can discern in this manner are peripheral systems.

Perhaps it is more fruitful to look past details of definition to the gist of Modularity, although admittedly this opens the way to selective reinterpretation. The book, though of modest

length, contains a wealth of ideas and it is certainly easy to adhere to the ones that simply appeal most

in hindsight. This being said, Fodor's functional taxonomy of transducers, (modular) input systems and

(non-modular) central systems can still be applied today. Many sensory systems have actually been

defined in this way and correlated to dedicated neural systems. This has, also in keeping with Fodor,

only been true for peripheral systems, limited in their scope of function. In this view, Fodor's negative

definition of central systems should be interpreted as a warning at the edge of the known territory: Here

is what we know, and beyond are functions which swiftly lead to conjecture. Although what we know

is ever expanding through new methods of inquiry, this general map still applies.

Within this expanding map of cognitive neuroscience, considering the proto-self as a subsidiary system is one way in which the two theories supplement each other. As such, the following citation from Modularity, combined with the notion of the neural representation of the body in the proto-self, also suggests a tangible way for cognitive science to consider the body and embodiment as actively involved in the workings of the mind. In this respect, this quote from Modularity of Mind appears as prescient now

as Fodor thought Gall had been:

“If [...] we are to start with anything like Turing machines as models in cognitive

psychology, we must think of them as embedded in a matrix of subsidiary systems

which affect their computations in ways that are responsive to the flow of environmental

events. The function of these subsidiary systems is to provide the central machine with

information about the world; information expressed by mental symbols in whatever

format cognitive processes demand of the representations that they apply to.” (Fodor 1983, pp. 39)

Both Antonio Damasio and Jesse Prinz warn us against what they call a 'new phrenology'. Prinz's fears may actually be allayed by a careful reading of Damasio. Prinz is concerned that functional imaging of

brain processes and lesion studies lead to rash claims of localisation, but one need only read a few

contemporary papers in neuroscience to see that all possible provisos and conditions are duly specified.

Damasio, as well, takes great care not to imply anything beyond correlations of function and


neural activity. The type of phrenological thinking Damasio fears seems rather to consist in underestimating the complexity and interconnectivity of systems within the brain. As long as care is

taken not to oversimplify what we don't know, or to overvalue what we do know, cognitive science

remains a prolific field of research, and one that has ample use for the insights of Jerry Fodor.


References

Bennett, L.J., 1990, 'Modularity of Mind Revisited', The British Journal for the Philosophy of Science, Vol. 41, No. 3, pp. 429-436.

Bergeron, V. 2007, 'Anatomical and Functional Modularity in Cognitive Science: Shifting the Focus',

Philosophical Psychology, Vol. 20, No. 2, April 2007, pp. 175–195.

Carruthers, P., 2006, 'The Case for Massively Modular Models of the Mind', in Stainton, R.J. (ed.), Contemporary Debates in Cognitive Science, Blackwell Publishing, Malden, MA.

Churchland, P.S., 1986, Neurophilosophy: Toward a Unified Science of the Mind-Brain, MIT Press, Cambridge, MA.

Damasio, A., 2000, The Feeling of What Happens: Body, Emotion and the Making of Consciousness,

Heinemann, London.

Damasio, H., 2005, Human Brain Anatomy in Computerized Images, 2nd ed., Oxford University Press,

New York, NY.

Fodor, J.A., 1983, The Modularity of Mind, MIT Press, Cambridge, MA.

Fodor, J.A., 1985, 'Précis of The Modularity of Mind', The Behavioral and Brain Sciences 8, pp.1-42.

Fodor, J.A., 1998, In Critical Condition: Polemic Essays on Cognitive Science and the Philosophy of

Mind, MIT Press, Cambridge, MA.

Kok, A., 2004, Het Hiërarchisch Brein: Inleiding tot de Cognitieve Neurowetenschap, Koninklijke Van Gorcum, Assen.

Kolb, B. and Whishaw, I.Q., 2009, Fundamentals of Human Neuropsychology, 6th ed., Worth Publishers, New York, NY.

Kosslyn, S.M., Koenig, O., 1992, Wet Mind: the New Cognitive Neuroscience, The Free Press, New

York, NY.

McGurk, H. and MacDonald, J., 1976, 'Hearing lips and seeing voices', Nature, 264, pp. 746-748.

Prinz, J.J., 2006, 'Is the Mind Really Modular?', in Stainton, R.J. (ed.), Contemporary Debates in Cognitive Science, Blackwell Publishing, Malden, MA.

Uttal, W.R., 2001, The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain,

MIT Press, Cambridge, MA.

Widmaier, E.P., et al., 2008, Vander's Human Physiology: The Mechanisms of Body Function, 11th ed., McGraw-Hill, New York, NY.
