Proc. Natl. Acad. Sci. USA
Vol. 89, pp. 5774-5778, July 1992
Psychology

On the definition of the concepts thinking, consciousness, and conscience

(artificial intelligence/mind/cognition/perception)

ANDREI S. MONIN
P. P. Shirshov Institute of Oceanology, Russian Academy of Sciences, 117218 Moscow, Commonwealth of Independent States

Contributed by Andrei S. Monin, February 3, 1992

ABSTRACT A complex system (CS) is defined as a set of elements, with connections between them, singled out of the environment, capable of getting information from the environment, capable of making decisions (i.e., of choosing between alternatives), and having purposefulness (i.e., an urge towards preferable states or other goals). Thinking is a process that takes place (or which can take place) in some of the CS and consists of (i) receiving information from the environment (and from itself), (ii) memorizing the information, (iii) the subconscious, and (iv) consciousness. Life is a process that takes place in some CS and consists of functions i and ii, as well as (v) reproduction with passing of hereditary information to progeny, and (vi) oriented energy and matter exchange with the environment sufficient for the maintenance of all life processes. Memory is a complex of processes of placing information in memory banks, keeping it there, and producing it according to prescriptions available in the system or to inquiries arising in it. Consciousness is a process of realization by the thinking CS of some set of algorithms consisting of the comparison of its knowledge, intentions, decisions, and actions with reality, i.e., with accumulated and continuously received internal and external information. Conscience is a realization of an algorithm of good and evil pattern recognition.

The theory of artificial intelligence (AI) (and that of human intelligence as well) lacks a constructive (that is, functional) definition of the concept "intelligence." This may be a reason why the concept of AI is not widely accepted in society, even among scientists (while, curiously, the concept of human intelligence, even in the absence of a definition, is never considered questionable). This of course restrains practical steps towards the realization and development of AI of higher and higher levels.

The proverbial question "Can a machine think?" is obviously meaningless until a definition of the concept "thinking" is given. The Alan Turing test ("if an expert cannot distinguish the performance of a machine from that of a human who has a certain cognitive ability, then the machine also has that ability") does not give a constructive definition. Substituting a computer (defined as a machine for the manipulation of formal symbols) for a machine in general enabled Searle (1) to conclude that computers, as defined above, do not think (or, in other words, that "strong AI" is impossible) because computer programs contain only syntax, while thinking is not limited to formal symbol manipulation: it needs semantics.

However, there is no necessity to restrict ourselves to the narrow definition of a computer given above. For instance, a computer usually has a data base ordered according to some indications, and this ordering introduces a kind of semantics. In Searle's favorite example of the "Chinese room" computer, it includes a data base of numbered hieroglyphs that actually contains some semantics in the form of interconnections between hieroglyphs designating, say, "color" and "blue" or "green."

Another example is presented by a chess-playing computer: its data base necessarily contains a selection of standard openings, end-games, and multimove checkmates, which constitute the chess semantics itself (if it exists at all).

Generalizing, we go over to data base semantics in the theory of games and in its numerous practically important applications. Or, considering the most rigorous problems of thinking, it must be admitted that all the semantics of mathematical group theory and other abstract ("formal") mathematical constructions are created by using initial definitions and axioms as the data bases and the rules of mathematical logic as the programs to prove theorems.

However, broadening the definition of a computer is not the principal aim of this paper: I am about to elaborate a definition of the concept "thinking" suitable for a vast class of systems (or "subjects") which includes humans and broadly defined computers as well, the so-called "complex systems" [see, for instance, Fleishman (2)].

Complex Systems

DEFINITION 1. A complex system (CS) is a set of elements, with connections between them, singled out of the environment, capable of getting information from the environment, capable of making decisions (i.e., of choosing between alternatives), and having purposefulness (i.e., an urge towards preferable states or other goals).
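As an editorial illustration, the four clauses of Definition 1 map naturally onto an abstract programming interface. The following Python sketch is not part of the original paper; the names (ComplexSystem, sense, purpose, decide) are hypothetical labels for the defined capabilities.

```python
from abc import ABC, abstractmethod
from typing import Any, Sequence

class ComplexSystem(ABC):
    """A minimal rendering of Definition 1: a CS is singled out of its
    environment, receives information, chooses between alternatives,
    and has an urge towards preferable states (purposefulness)."""

    @abstractmethod
    def sense(self, environment: Any) -> Any:
        """Get information from the environment."""

    @abstractmethod
    def purpose(self, state: Any) -> float:
        """Score a state; higher means more preferable."""

    def decide(self, alternatives: Sequence[Any]) -> Any:
        """Make a decision, i.e., choose between alternatives."""
        return max(alternatives, key=self.purpose)
```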

Let me mention some CS qualities which seem to be necessary when considering possibilities of CS thinking. The first of these is reliability or stability (R-quality) of a given CS, or perhaps of a larger CS which includes the given CS as one of its subsystems. The R-quality may be provided for, in particular, by CS structure (preprogrammed, for instance, by means of the genetic code) or by its behavior, including, in the case of a biological CS, instincts or unconditioned reflexes (i.e., aggregates of innate complex reactions or acts of behavior arising as a rule in a constant form in response to internal or external stimuli, such as defensive or self-preservation instincts, feeding, sexual or reproductive instincts, and parental and population-preservation instincts). The second is noise-stability while receiving information from the environment, including the normal functioning of organs of sense (I-quality). The third is controllability (C-quality), and the fourth is self-organization or self-learning (L-quality), including learning by being taught (which plays a major role in many biological CS). All four of these qualities are obviously feasible in several nonbiological CS.

The structure of a CS is usually understood as a graph, that is, a set of elements ("vertexes" or "nodes") and their pairs (nonordered "ribs" and ordered "arcs") with adjacency and incidence relations between them. Complex structure is necessary for the provision of a CS with the R-, I-, C-, and L-qualities defined above. For example, the R-quality may be provided for by duplicating a function in different subsystems, in the same fashion as the reconstruction of the image of a whole object imprinted on a hologram by any part of the hologram, or doing without a damaged electrical line in a parallel electrical network. These analogies may have a literal meaning when modeling the neural networks of the human brain or organs of sense.

Abbreviations: AI, artificial intelligence; CS, complex system(s).

The complexity of a set may be measured by its dimension, i.e., the exponent d in the power law N ~ ε^(-d), which expresses the minimum number N of spheres of diameter ε covering the set when ε is small (but not too small; this means that the power law is an intermediate asymptotic). If d exceeds the usual (topological) dimension of the set, then the set is called a fractal. There exist some grounds to suspect that for neural networks d > 1, i.e., they are fractals.
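For concreteness, the box-counting estimate of d can be written in a few lines. The sketch below is an editorial illustration, assuming a point set in the plane; fitting log N(ε) against log(1/ε) over moderate ε recovers the exponent d of the power law N ~ ε^(-d).

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, epsilons) -> float:
    """Estimate the dimension d in N(eps) ~ eps^(-d) for a set of
    2-D points, by counting occupied grid boxes of side eps."""
    counts = []
    for eps in epsilons:
        # Assign each point to a grid box of side eps; count distinct boxes.
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    # Slope of log N versus log(1/eps) gives the exponent d.
    d, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return d

# Example: points on a straight segment should give d close to 1.
pts = np.column_stack([np.linspace(0, 1, 5000), np.linspace(0, 1, 5000)])
print(box_counting_dimension(pts, epsilons=[0.1, 0.05, 0.02, 0.01]))
```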

It appears, however, that the most complex structures are the random ones [for the theory of random graphs see, for instance, Gilbert (3)]. They are not unusual (for example, polycrystalline structures and ferromagnetic domains are widespread in nature and in engineering). The same is true for the brain cellular microstructure. The formation of the brain structure on higher, supercellular levels, including the cytoarchitectonics of the cerebral cortex, is of course determined by the genetic program. However, the human genome (several million genes) is very far from sufficient for encoding all the neurons (10^10 to 10^11), synapses (10^14 to 10^15), and their combinations. The complexity of the neural network structure necessary for thinking processes exceeds by many orders of magnitude the possibilities of transmitting the genetic information.
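The order-of-magnitude argument can be made explicit. The figures below are editorial back-of-the-envelope assumptions, not the author's: even a bare wiring list for 10^14 synapses, each naming one of 10^10 target neurons (about 33 bits), dwarfs the capacity of a genome of a few billion bases (2 bits per base).

```python
import math

neurons = 1e10           # lower end of the paper's 10^10 to 10^11
synapses = 1e14          # lower end of the paper's 10^14 to 10^15
genome_bases = 3e9       # rough human genome size (editorial assumption)

bits_per_synapse = math.log2(neurons)        # ~33 bits to name a target
wiring_bits = synapses * bits_per_synapse    # ~3.3e15 bits for the wiring
genome_bits = genome_bases * 2               # 2 bits per base, ~6e9 bits

print(f"wiring list: {wiring_bits:.1e} bits")
print(f"genome:      {genome_bits:.1e} bits")
print(f"shortfall:   ~10^{round(math.log10(wiring_bits / genome_bits))}")
```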

It is clear, therefore, that the cellular microstructure of different brain regions is formed during the ontogenesis processes not according to a fixed program but individually, i.e., in a sense, randomly (it appears that in these circumstances an exhaustive genetic program of the system functioning, especially on the cellular level, is hardly possible; therefore teaching and self-learning should play a great role in the program; this reveals prospects for the elaboration of programs for CS with more and more complex structure, i.e., with very large numbers of fast-acting elements and connections in small volumes).

The fortuity of formation of neurons and synapses in the ontogenesis of an individual human brain determines the inimitability of its cellular structure and therefore the observed scatter of parameters and abilities of individuals. That is why human individuals are inimitable and every person is priceless. Just this is the principal difference between human beings and modern computers, the structure of which, down to its smallest elements and the connections between them, is rigidly preprogrammed, so that all computers of the same series have an identical structure. The wide scatter of abilities of human individuals includes geniuses: very rare, largest favorable deviations of some ability from the norm, situated, so to speak, on the tails of the Gaussian curves (the occurrence of geniuses of a certain ability may be estimated by order of magnitude as one per billion). This interpretation, in contrast to the mutation hypothesis, explains why genius is not a hereditary virtue. The cloning of geniuses seems to be in principle beyond the possibilities of genetic engineering.
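The one-per-billion figure can be located on the Gaussian curve directly. As an editorial illustration: the tail probability P(X > μ + kσ) = erfc(k/√2)/2 drops to about 10^-9 near k ≈ 6, so such a deviation sits roughly six standard deviations above the norm.

```python
import math

def gaussian_tail(k: float) -> float:
    """P(X > mu + k*sigma) for a normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

for k in (4, 5, 6, 7):
    print(f"k = {k}: tail probability ~ {gaussian_tail(k):.1e}")
# k = 6 gives ~1.0e-09, i.e., about one per billion.
```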

Finally, let us discuss in this section the question of the purposefulness of a CS. It may be formalized in particular as optimization, that is, an urge towards extreme values of some purpose functionals of the CS states. The multiformity of goals may lead to relative optimization in regard to one target or another and to a corresponding multitude of possible CS decisions.
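A hedged sketch of this formalization: several purpose functionals score the same candidate states, and each choice of target yields its own "optimal" decision, which is the multitude of decisions just mentioned. All names here are illustrative assumptions.

```python
from typing import Callable, Dict, List

State = Dict[str, float]

def best_state(states: List[State], purpose: Callable[[State], float]) -> State:
    """Relative optimization: extremize one purpose functional."""
    return max(states, key=purpose)

states = [{"energy": 0.9, "safety": 0.2}, {"energy": 0.4, "safety": 0.8}]
goals = {
    "feeding": lambda s: s["energy"],
    "defense": lambda s: s["safety"],
}

# Different targets select different decisions from the same alternatives.
for name, functional in goals.items():
    print(name, "->", best_state(states, functional))
```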

Thinking and Life

DEFINITION 2. Thinking is a process which takes place (or which can take place) in some of the CS and consists of (i) receiving information from the environment (and from the CS itself), (ii) memorizing the information, (iii) the subconscious, and (iv) consciousness.

DEFINITION 3. Life is a process which takes place in some of the CS and consists of functions i and ii above, (v) reproduction with passing of hereditary information to progeny, and (vi) oriented energy and matter exchange with the environment sufficient for the maintenance of all life processes.

Here v appears to be the principal function or purport of life (there exist minor exceptions; for instance, some hybrids, such as the mule, are undoubtedly alive but have no reproduction abilities; this is an aimless life).

Function iv is obviously not necessary for life. The same appears to be basically true for function iii if innate hereditary instincts are not included in this concept. At the same time function v is obviously not necessary for thinking, especially for AI, at least if the AI has no reproduction abilities [although sexual reflections may play a significant role in the mental life of biological subjects, ascending from innate instincts, to the subconscious motivations discovered by Sigmund Freud (4), and up to the idealized concept of personal love at the very summit known at present]. The same is true for function vi (the energy necessary for the maintenance of the thinking processes may be borrowed from some internal source, for instance from some quantity of a radioactive isotope).

Thus thinking and life processes are by definition not interconnected: a thinking CS may be either alive or lifeless, and a living CS may be either thinking or thoughtless. In the second of these four cases the CS would constitute AI.

Let us also mention lifeless and thoughtless CS implementing prescribed complex programs, such as the program of exploration of the outer planets and satellites of the solar system carried out by the CS "Voyager 1-Voyager 2." The aim of this CS was to collect several specified kinds of information (including some current self-inspection information on its own orientation and internal state) and to transmit it back to the Earth. The performance was not quite automatic; it was radio-controlled from the Earth. The system might have been organized closer to a thinking CS, were its performance more reliable.

Another example is given by chess-playing "foreseeing" computers, which have at present reached the level of the world championship and can exceed this level if desired. They have shown that automatically going over all the possible chessboard positions for a certain number of moves forward, plus a reasonable cost function of positions, successfully replaces the incomplete memory and intuitive preferences of a human player. This may mean that chess-playing itself in its pure form (if we disregard the partners' behavior and the environment) is not a thinking process.
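The look-ahead-plus-cost-function scheme described here is essentially minimax search. The following is an editorial sketch, not the program of any actual chess computer; Game is a hypothetical interface supplying moves, resulting positions, and the cost function.

```python
from typing import Any, Iterable, Protocol

class Game(Protocol):
    def moves(self, position: Any) -> Iterable[Any]: ...
    def play(self, position: Any, move: Any) -> Any: ...
    def cost(self, position: Any) -> float: ...  # "reasonable cost function"

def minimax(game: Game, position: Any, depth: int, maximizing: bool) -> float:
    """Go over all positions a fixed number of moves forward and back up
    the cost function, as the chess computers in the text do."""
    moves = list(game.moves(position))
    if depth == 0 or not moves:
        return game.cost(position)
    values = (minimax(game, game.play(position, m), depth - 1, not maximizing)
              for m in moves)
    return max(values) if maximizing else min(values)

def best_move(game: Game, position: Any, depth: int) -> Any:
    """Choose the move whose subtree backs up the best cost."""
    return max(game.moves(position),
               key=lambda m: minimax(game, game.play(position, m),
                                     depth - 1, maximizing=False))
```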

We have included function i (receiving information) in the general definition as a necessary condition for thinking processes (and for life processes as well). Information may be received by a CS by means of suitable sensors (in biological CS, organs of sense) or in recorded form (written, photographic, video- and audiotapes, etc.). In the case of humans there are five classical senses: sight (visible light, or electromagnetic waves with a length of 700-400 nm), hearing (acoustic waves with frequencies of 10-20,000 Hz), touch (tactile, pressure, warm-cold, pain), smell, and taste (sour, salty, sweet, bitter, etc.); in addition there exists a sense of acceleration (the vestibular apparatus) and also baro-, mechano-, chemo-, thermo-, and osmotic interoreceptors supplying some information on the internal state of the human body. Organs of sense include primary receptors and encoding devices transforming external influences into signals that are transmitted through suitable channels (nerves in the case of humans) for specific analysis in a central processor (the brain in the case of humans).


The ability to receive and store sensory information on the environment is very important for thinking systems. Just this gives semantics to a thinking CS [but of course not the "specific biochemical properties of brains," which mystically "enable them to cause consciousness and other sorts of mental phenomena"; quotations from Searle (1)]. In some cases even a very narrow channel of sensory information on the environment is sufficient for thinking (and for consciousness). Thus, cases are known of blind, deaf, and dumb individuals who have been taught by using the sense of touch and the Braille alphabet and have reached rather high levels of intellectual activity. It seems strange that Searle denies that a computer receiving video images has semantics. I believe, on the contrary, that experiments with computers of this kind (supplied with programs of pattern recognition and self-learning) would be very fruitful in AI construction.

Memory

DEFINITION 4. Memory is a complex of processes of placing information in memory banks, keeping it there, and producing it according to prescriptions available in the system or to inquiries arising in it.

The contents of the memory store may be divided into data bases ["declarative knowledge," according to Anderson (5) and Kihlstrom (6)] and program libraries ("procedural knowledge" in refs. 5 and 6). Data bases are further divided into chronological ones (with memorized circumstances of their acquisition by the CS) and semantic ones (which represent the "mental lexicon" of abstract knowledge memorized without obligatory connection to the circumstances of its acquisition). The program library contains prescriptions of algorithms for the solution of some problem or for actions in some conditions. Reminder: an algorithm is a sequence of transformations of discrete constructive objects (in the practically important case, words of some alphabet), i.e., an exact prescription defining the calculation process which starts from arbitrary initial data and is directed at getting the result determined by these data; the process is characterized by the sets of possible initial data, intermediate and final results, and the rules of beginning, direct transformation, ending, and extraction of the result.

Memory is also divided into short-term ("working") and long-term memories [in the "parallel distributed processing" theory (7) there exist a number of working memories]. A working memory may apparently have a relatively limited volume and may contain a number of notions on the state of the system and its aims at a given moment of time, and also knowledge from the data bases and the program library, activated by actions of the working memory itself or by external stimuli. A long-term memory may in principle be practically limitless, i.e., may have room for all the information acquired by the system during its lifetime (this is apparently so in the case of humans; in this case long-term memory also includes innate, i.e., inherited, genetically encoded programs; there is not much point in including among them the programs of ontogenesis contained in the genes in all chromosomes of all cells, but there is every reason to include the instincts or unconditioned reflexes defined and listed above; in some lifeless thinking CS these instincts or a part of them may be absent).
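Definition 4 and these divisions translate directly into a data structure. The sketch below is an editorial rendering using simple Python containers: chronological entries carry the circumstances of acquisition, semantic entries do not, and the program library stores callable prescriptions.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class Memory:
    """Definition 4 as a structure: data bases (declarative knowledge)
    plus a program library (procedural knowledge)."""
    chronological: List[Tuple[float, Any]] = field(default_factory=list)
    semantic: Dict[str, Any] = field(default_factory=dict)
    programs: Dict[str, Callable[..., Any]] = field(default_factory=dict)

    def place(self, time: float, fact: Any) -> None:
        """Placing information, with the circumstances of acquisition."""
        self.chronological.append((time, fact))

    def produce(self, inquiry: str) -> Any:
        """Producing information according to an inquiry arising in
        the system (here, a lookup in the semantic data base)."""
        return self.semantic.get(inquiry)

m = Memory(semantic={"blue": "a color"})
m.place(time=0.0, fact="saw something blue")
m.programs["greet"] = lambda name: f"hello, {name}"
print(m.produce("blue"), "|", m.programs["greet"]("world"))
```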

Consciousness

DEFINITION 5. Consciousness is a process of realization by the thinking CS of some set of algorithms consisting of the comparison of its knowledge, intentions, decisions, and actions with reality, i.e., with accumulated and continuously received internal and external information.

In literal translation from Russian, "consciousness" means "co-knowledge," that is, "existing together with knowledge" [see Shreider (8)]. Consciousness algorithms are realized by means of a working memory using current sensory information and the chronological and semantic data bases (while the program libraries are used subconsciously). Thereby consciousness transforms the available knowledge into a communicable form, which may be fixed or registered by output devices and, therefore, transmitted to the outer world.

According to Kihlstrom (6), consciousness is not identical to any functions of perception and cognition such as memory, selective responses to stimuli, problem solving, etc.; all of them may be performed subconsciously but may also be accompanied by consciousness. In other words, quality iv of Definition 2 is not necessary for most thinking processes. But it is necessary for some of them, such as communicating information on one's own states to the outer world, voluntary control, etc.; that is why we have included quality iv in Definition 2 (though in the case of AI the definition of a subconsciously thinking CS may be constructive).

The very first consciousness algorithm consists of singling oneself out of the environment, i.e., in self-identification or self-reference. According to the classical work of William James (9), the key to consciousness is self-awareness or self-reference, i.e., the presence of the concept "I": the universal conscious thought is not "the feelings exist" and "the thoughts exist" but "I think" and "I feel." This algorithm may conditionally be called "Soul" (understood differently than the religious concept of soul, which refers only to living thinking CS or even only to humans, and implies a combination of life and mentality, the latter in a broader sense than mere self-reference).

The algorithm of self-identification may be realized in particular by fixation of some characteristic or other of one's own states at present or in the past, carrying out self-inspection test programs prescribed earlier or formed in the processes of learning or self-learning, continually or at some specific moments of time prescribed or worked out by the system itself. The conscious system may nevertheless have a very incomplete and distorted view of itself (like an illiterate human, often having no clear view of his or her anatomy, physiology, and psychology, for instance, not knowing up to now what consciousness is), but still it identifies itself.

For example, a computer may extract and give out to a printer or to a display data on the missteps which have happened in it during a specified period of time, with an indication of their nature and causes. This may be done by means of a specifically prescribed program or following the prescription of self-learning. The latter means, first, the need to form the concept of a misstep as some definite class of the computer's actions, having adequate general and, in their totality, specific features; second, to form algorithms of misstep pattern recognition consisting of recognition operators and solving rules; third, to realize these algorithms according to some chosen timetable; fourth, to give out the results in some chosen form. Similarly a computer may realize self-inspection test programs or construct them for itself, following the prescription of self-learning.
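A minimal editorial sketch of such a self-inspection algorithm, following the four steps just listed: a hypothetical system keeps a chronological log of its own actions, recognizes missteps with a simple solving rule, and gives out the results on inquiry.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class SelfInspectingSystem:
    # Step 1: the concept of a misstep, as a class of the system's actions.
    is_misstep: Callable[[str], bool]
    log: List[Tuple[float, str]] = field(default_factory=list)

    def act(self, time: float, action: str) -> None:
        self.log.append((time, action))  # fixation of one's own states

    def report_missteps(self, since: float) -> List[Tuple[float, str]]:
        # Steps 2-4: recognize misstep patterns over a specified period
        # and give out the results in a chosen form.
        return [(t, a) for t, a in self.log
                if t >= since and self.is_misstep(a)]

sys_ = SelfInspectingSystem(is_misstep=lambda a: a.startswith("error"))
sys_.act(1.0, "error: division by zero")
sys_.act(2.0, "computed checksum")
print(sys_.report_missteps(since=0.0))
```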

Self-identification (on a rather low level, i.e., highly specialized) is probably performed by the chess-playing computer: it receives external information (the partner's moves); it has working and long-term memories and a program library, including the rules of the game, prescribed relative costs of chess pieces, algorithms of standard openings and end-games, and rules for estimating chessboard positions; and it is able to give out all the information on its state. But of course it "does not know" what things the chess pieces are; there is no need of this for the game. In all of this there is apparently no difference from a human chess player, say, playing blindly.

Self-learning programs appear to be especially important for consciousness algorithms. Let us emphasize, however, that not only self-learning but also direct teaching of a given thinking CS by another CS plays a great role in the development processes of any intelligence: for instance, in biological populations, teaching of offspring by parents, and in a human society also by teachers and/or by computers. Teaching of computers by humans and/or by other computers at first glance appears to be trivial, but in principle it may help in developing new educational methods in general. For example, attempts at teaching computers to translate texts from one language into another have shown that some revisions of the rules of grammar are desirable.

It appears that the concept "I," i.e., the program of self-identification, may be worked out for any modern computer. The elaboration of alternative programs of this kind in the near future may promote the development of new generations of computers and their interactions with humans, i.e., the creation of a human-computer civilization. Compilation of more and more complete lists of consciousness algorithms seems to have great prospects.

Conscience

DEFINITION 6. Conscience is a realization of an algorithm of good and evil pattern recognition.

It is one of the consciousness algorithms. According to Shreider (8), conscience is the realization of the meaning of one's actions and the ensuing moral responsibility for them. In literal translation from Russian it is a "co-notice," that is, "existing together with notification" (from society); the prefix "co-" emphasizes the social conditionality of the criterion. Conscience is difficult to define for a single CS because it is not equivalent to the concepts of expediency, efficiency, usefulness, reliability, etc. In human society it might be different in various populations and in different historical periods; let us note, for example, the social attitude towards cannibalism and concentration camps.

In the presence of the concepts of good and evil in the data base of a CS (introduced there most probably by teaching, and possibly always adapted to some extent to the general structure and the volume of the data base of the given CS, including the available instincts), a conscience may be defined as an algorithm of good and evil pattern recognition in the actions performed by the system or analyzed as possible. The results of the conscience algorithm may be realized by a system in very different ways (and not only consciously; one can, for instance, perspire or blush with shame).

It appears that the definition of conscience for a system without consciousness is impossible. In contrast, consciousness without conscience is possible (for instance, unfortunately, in the case of some human individuals).

The Holy Bible testifies that there was a time when everything was good and the consciousness of humans was serene, and the only evil existing potentially was the eating from the Tree of the Knowledge of Good and Evil, which was forbidden by the Lord God: if there is no evil, no conscience is necessary. But in the presence of evil, CS without conscience seem frightening, even ones like simple robots and the simplest automatic machines. Because of their "soullessness" they are usually perceived by the human consciousness negatively, cautiously, or sneeringly. This is the root of the frightening fantasies about Frankenstein, the Golem, the Terminator, etc., and Chaplin's warning sequences with conveyor lines and automatic machines.

The elaboration of the simplest conditional "conscience" algorithms for any computer does not seem to be difficult. At higher levels it may acquire practical value. In the future, while constructing AI systems, the problem of introducing conscience into them should be taken seriously, as has already been suggested by I. Asimov in the form of the "laws of robotics" (10): the AI should have the concepts of good and evil, the latter including everything harmful for humans (with some priorities in cases of alternatives).
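As an editorial sketch of the "simplest conditional conscience algorithm" the text envisions: taught examples of good and evil populate the data base, a recognition operator scores a proposed action, and a solving rule vetoes actions recognized as evil. Everything here, including the crude scoring by shared words, is an illustrative assumption, not the author's method.

```python
from typing import List

class Conscience:
    """Good and evil pattern recognition over proposed actions
    (Definition 6), with patterns introduced by teaching."""

    def __init__(self) -> None:
        self.good: List[str] = []
        self.evil: List[str] = []

    def teach(self, example: str, evil: bool) -> None:
        (self.evil if evil else self.good).append(example)

    def _score(self, action: str, patterns: List[str]) -> int:
        # Recognition operator: count shared words (a crude assumption).
        words = set(action.split())
        return sum(len(words & set(p.split())) for p in patterns)

    def permits(self, action: str) -> bool:
        # Solving rule: veto an action recognized as more evil than good.
        return self._score(action, self.evil) <= self._score(action, self.good)

c = Conscience()
c.teach("harm a human", evil=True)
c.teach("help a human", evil=False)
print(c.permits("harm a human being"))  # False: recognized as evil
print(c.permits("help a human being"))  # True
```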

The Subconscious

Most of the thinking processes are performed without the use of the working memory and consciousness. Let us try to classify them, basically following Kihlstrom (6) and keeping in mind the possibilities of their algorithmization.

Then, of course, the automatic processes, using "procedural knowledge" (the program library), come first. They include in particular the acquisition of sensory information (through sight, hearing, etc.), which goes on without any intention on the part of the subject and without consuming any of one's attentional resources, i.e., unconsciously or nonconsciously in the strict sense of the word. Some of these automatic processes are innate, i.e., hereditarily "hardwired" into the system, while others are acquired through experience, such as skill learning, repetition, routinization, rehearsal, and training. The tendency to liberate consciousness from repetitive routine actions may be the result of the limited "power" (i.e., the volume and speed of actions) of the working memory.

In the second place, let us mention the "preconscious" processes, such as subliminal perception and implicit memory, producing some latent data bases activated below the threshold of conscious awareness but still able to influence the thinking processes and actions of the CS. An example of subliminal perception is given by cinema advertisements using sequences too short (say, less than 5 msec) to be perceived consciously but still influencing purchasers' preferences. Other examples are the so-called Poetzl phenomenon (the reappearance of subliminal stimuli in a subject's subsequent dreams) and recognition experiments of the type stimulus word (prime)-random mask-target, with timing preventing conscious detection of the masked prime, which nevertheless helped in detecting the target.

Implicit memory is revealed by a change in actions attributable to information acquired during prior experience which itself was not consciously registered. An example is given by the so-called Korsakoff's syndrome, consisting in an anterograde amnesia, i.e., an inability to recall events occurring since the onset of an illness due to the cut-off of the chronological memory, while all the semantic memory and the program library remain intact.

The third class of subconscious processes is revealed by the phenomena of hypnosis: alterations of the thinking processes induced by another person. For example, hypnotic analgesia consists in an induced loss of the conscious awareness of pain, while all the physiological reactions to pain (such as an increase in heart-beat rate) are present. This phenomenon is somewhat similar to the catatonia syndrome, when a person stays conscious but does not react to pain or any other stimuli and does not move at all. Another example is posthypnotic amnesia, which is a selective disruption of conscious memory retrieval.

It appears that all these subconscious processes may in principle be algorithmized; this is quite obvious for the establishment of selective thresholds for getting into the working memory and for putting off consciousness according to prescriptions introduced from without. Therefore, again, thinking and life are not interconnected.
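One of these algorithmizations, the selective threshold into working memory, is easy to make concrete. The sketch below is an editorial illustration: stimuli below the threshold never reach the (conscious) working memory, yet they still accumulate in a latent data base that can bias later processing, as in subliminal perception.

```python
from typing import List

class ThresholdGate:
    """Selective threshold of getting into the working memory:
    a prescription introduced from without (the threshold value)."""

    def __init__(self, threshold: float) -> None:
        self.threshold = threshold
        self.working_memory: List[str] = []  # consciously available
        self.latent: List[str] = []          # subliminal, still influential

    def perceive(self, stimulus: str, strength: float) -> None:
        if strength >= self.threshold:
            self.working_memory.append(stimulus)
        else:
            self.latent.append(stimulus)     # below conscious awareness

gate = ThresholdGate(threshold=0.5)
gate.perceive("loud noise", strength=0.9)
gate.perceive("5-msec advertisement frame", strength=0.1)
print(gate.working_memory)  # ['loud noise']
print(gate.latent)          # ['5-msec advertisement frame']
```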

The author thanks Dr. V. B. Kuznetsov for reading the text and Mrs. R. L. Leonova and Mrs. N. I. Solntseva for preparing the manuscript.

1. Searle, J. R. (1990) Sci. Am. (Intl. Ed.) 262 (1), 20-25.
2. Fleishman, B. S. (1982) in Fundamentals of Systemology (Radio and Communication, Moscow), pp. 17-54 (in Russian).
3. Gilbert, E. N. (1959) Ann. Math. Stat. 30, 1141-1144.
4. Freud, S. (1966-1969) Gesammelte Werke (G. Fischer, Stuttgart, F.R.G.), Vols. 1-18.
5. Anderson, J. R. (1983) The Architecture of Cognition (Harvard Univ. Press, Cambridge, MA).
6. Kihlstrom, J. F. (1987) Science 237, 1445-1452.
7. Hinton, G. E. & Anderson, J. A., eds. (1981) Parallel Models of Associative Memory (Erlbaum, Hillsdale, NJ).
8. Shreider, Yu. (1989) The New World 11, 244 (in Russian).
9. James, W. (1890) Principles of Psychology (Holt, New York).
10. Asimov, I. (1970) I, Robot (Fawcett, Greenwich, CT).
