
FOCUS: In the World of the SENSES

Perceiving is more than Seeing

How does the world get into our head? Is what we perceive really true? What happens in our brain when we see, hear, smell, or touch? How do we find our way in the world? Research groups at the MAX PLANCK INSTITUTE FOR BIOLOGICAL CYBERNETICS in Tübingen are using so-called “psychophysical investigation methods” to answer these questions: the sensory impressions of test subjects are manipulated and their perceptual reactions are studied. Colleagues working with HEINRICH H. BÜLTHOFF in the PSYCHOPHYSICS DEPARTMENT are attempting first and foremost to trace the principles of perception and the co-ordination of movement in space.

Perceptions are the basis for our understanding of the world. For the act of seeing, for instance, it is not enough to simply project the outside world onto the retina and from there onto a sort of screen in the brain. Perception is not just the passive recording of sensory stimuli, but rather an active mental reconstruction of the real world that surrounds us. Our brain dismantles what appears on the retina into highly abstract bits of information that correspond, in the final analysis, to a sort of symbolic representation of the outside world: a self-made model of the world. What we perceive therefore depends essentially on unconscious cognitive decisions and conclusions. The brain usually makes these on its own, without us having to bother; in doing so, it uses previously collected knowledge, experience, expectations, and prejudices.

Once the brain has learned something, it is often no longer especially bothered about the actual realities. We cannot force our way outwards through the nerves in order to reach true reality, to get to Kant’s “thing in itself”. Everything that enters our consciousness from the outside – shapes, faces, movements – is passed through the clearing centres of our sensory organs. Even seemingly absolute things such as matter, space, time, and the I we experience ourselves to be are, as we experience them in everyday life, something artificial, self-made and constructed by our brain.

It is quite clear that our brain uses an unbelievable amount of detailed information which it has at its disposal, including information we are not even aware of. How does the brain manage to select the “correct” information and pull it together? What strategies have unfolded during the course of evolution, in the interplay between the brain and the senses, that allow us to find our way in this world? Three projects at the Max Planck Institute for Biological Cybernetics in Tübingen are shedding light on parts of this sweeping question. They are spotlights on the way to looking over the brain’s shoulder as it works.

Fig.: Our whole nervous system is designed to interact with the environment. As a result, we must collate the various bits of information that come from our senses; in the end, this enables us to carry out motor actions successfully. For a long time there has been speculation as to how this integration of sensory information takes place. René Descartes, for instance, assumed that the pineal body was the “seat of the soul” – it was here that all information was supposed to come together and where motor actions were initiated. (FIG.: MPI FOR BRAIN RESEARCH)

SIGHT AND MOVEMENT – ALLIES IN THE CONTEST

We have already started to become uneasy on the way up. However, we only experience the tingling in our stomachs when the roller coaster reaches its highest point before finally plunging downwards at top speed, from one loop to the next. If it gets too bad, there is only one thing left to do: close your eyes. And if it is not too late already, then perhaps that way you will be able to leave your dinner where you put it with enjoyment when you ate it.


It is the interplay between visual and vestibular information – i.e. information originating from our vestibular organ in the ear – that causes our body trouble. The body can usually cope quite well with us not staying in one place and moving about in space. The eye (or rather the brain) has learned that a movement of the image on the retina need not necessarily mean that the surroundings are moving too. On the contrary, the central nervous system asks the muscles whether a spontaneous movement of our own is the cause of the “film” on the retina, and remains satisfied if the data agree as usual. However, if this is not the case, the brain sounds the alarm.

How is the complex visual-vestibular interplay organised? Which system – eye or vestibular organ – is predominant? How are these two sensory modalities weighed against each other? As yet, only a few investigations in the scientific literature relate to complex situations that are close to reality. The change in perceived intrinsic movement in space, known as “spatial updating”, is the research topic of information scientist Markus von der Heyde and physicist Bernhard Riecke. Three cognitive faculties are significant for this. On the one hand, memory – both short-term and long-term – plays a role alongside immediate perception. On the other, how we behave, i.e. how we move in space, is crucial.

Von der Heyde has therefore set up a motion platform with which he intends to record and quantify the contribution of both sensory modalities to our overall perception (fig. 1). The test subjects sit on this movable apparatus and are moved linearly or rotated in space. The perceived intrinsic position in space is measured by asking the people to estimate their position in the room. In the tests which follow, the real visual stimulus is replaced by a virtual image. By way of a “head-mounted display” (HMD), a sort of data helmet, both eyes are presented with an image that is linked to the person’s movement. This image may correspond to the one a person with a restricted field of vision would see (in the real version of the experiment, this is achieved by using sight-restricting glasses); however, the virtual image may also be decoupled from the actual movement and offer completely different visual stimuli. The test subjects produce the same outputs under both real and virtual conditions as long as the scenes match up. This fulfils a basic condition for obtaining meaningful results even under altered virtual conditions. In these initial experiments, it was possible to confirm the dominance of the visual system already known from the literature.

Fig. 1: Test subject on the motion platform. The perception of intrinsic movement and sight are decoupled and manipulated separately. As a result, the researchers obtain feedback as to how the world is represented in the brain.

The question the two researchers from Tübingen are now asking is whether there are also conditions under which we concede greater significance to the sense of balance than to our optical sensory impressions. Virtual reality methods are particularly well suited to investigations of this type, since researchers can “change” the world at will by manipulating the appropriate parameters. For example, they can decouple the visual and vestibular sensory information: whilst the eyes are offered a static image, like the one we would see if we looked straight ahead without moving, the test subjects are turned through 90 degrees in space. The whole thing is now to be simulated in an optically more varied environment, with photo-realistic versions of Tübingen’s market place, and then compared to the real experience in the market place itself.
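The decoupling just described can be captured in a couple of lines. The sketch below is only a toy model with invented angles and weights – the visual weight is precisely the quantity such experiments set out to measure – and is not the software actually driving the Tübingen platform.

    # Toy model of the decoupling experiment: the platform turns the subject by a
    # physical angle, while the head-mounted display shows a different visual turn.
    # Perceived self-rotation is modelled as a weighted mix of the two signals.

    def perceived_rotation(visual_deg, vestibular_deg, visual_weight):
        """Weighted combination of the visual and vestibular rotation signals."""
        return visual_weight * visual_deg + (1.0 - visual_weight) * vestibular_deg

    # The subject is physically rotated by 90 degrees while the HMD shows a static
    # scene (a visual rotation of 0 degrees).
    physical_turn = 90.0
    shown_turn = 0.0

    # If vision dominates (assumed weight 0.8), the reported turn stays well below 90.
    print(perceived_rotation(shown_turn, physical_turn, visual_weight=0.8))  # 18.0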

It is not only the eyes that contribute to spatial perception – in cases of doubt, the brain also uses other sources of information to reconstruct the three-dimensional environment. A year ago, Marc Ernst from Heinrich H. Bülthoff’s work group, together with collaborators from the University of California (Berkeley, USA), was able to demonstrate that even the hands can serve as “visual aids”. In their experiments, the researchers discovered that feeling with the fingers can significantly and effectively influence visual perception.

SEEING CORRECTLY MEANS FEELING THROUGH YOUR FINGERS

To be able to perceive the world in three dimensions, the visual system uses different optical stimuli from which it is possible to reconstruct the spatial position and structure of the objects observed. These stimuli include shadows, perspective distortions of the object, and the differences between the two retinal images that arise from the different positions of the eyes, known as disparity.

The brain’s task is to combine the available information into an overall picture whilst at the same time weighting the various sensory impressions according to their reliability. Thus, a reliable signal contributes more to the reconstruction of the spatial image than a less reliable one, and the combination of the sensory impressions determines how we see our environment. But how does the brain know which stimulus it can trust most?

To clarify this question, Marc Ernst designed an apparatus that presented the test subjects with the image of an inclined surface. The inclination of this surface was conveyed by means of two optical parameters: one by way of a texture gradient, generated by converging vanishing lines, the other by way of a disparity difference based on the stereo effect of binocular vision. It was possible to alter these two signals independently of each other, so that each indicated a different inclination of the surface – thus bringing the observer’s brain into a conflict situation. The results show that the brain resolved the contradiction between the two stimuli by weighting each signal and thus came to a compromise, an intermediate value: the test subjects actually perceived a single, evenly inclined surface.

In the next stage of this experiment, the participants not only saw the inclination of the surface, they also felt it. To achieve this, a virtual cube that they could push with one finger was projected onto the image of the inclined surface; the resistance offered to the finger when it pushed in one direction or another was simulated by a computer. This finger-felt inclination was then adjusted in the experiment so that it corresponded to one of the two optical stimuli – either the inclination conveyed by the disparity difference or the one generated by the texture gradient. The contact really did influence the weighting of the visual signals: the signal that agreed with what the finger felt was given a higher weighting in each case, i.e. it was obviously considered more reliable by the brain. “One and the same surface is perceived differently after finger-feeling than in the previous, purely optical test,” says Marc Ernst. “For the observer it appears to be inclined more strongly in the direction of the optical signal that agrees with the slope felt. It is still possible to demonstrate this effect up to a day after the touch test.”
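The weighting principle described here can be written down in a few lines. The sketch below is only an illustration of the general idea – reliability-weighted (inverse-variance) cue combination – with made-up slant values and variances; it is not the model or code used in the Tübingen experiments.

    import numpy as np

    def combine_cues(estimates, variances):
        """Reliability-weighted cue combination: each cue is weighted by the
        inverse of its variance, so the more reliable signal dominates."""
        estimates = np.asarray(estimates, dtype=float)
        precisions = 1.0 / np.asarray(variances, dtype=float)
        weights = precisions / precisions.sum()
        return float(weights @ estimates), weights

    # Hypothetical conflict: the texture gradient suggests a slant of 30 degrees,
    # the disparity signal suggests 40 degrees and is assumed to be more reliable.
    slant, weights = combine_cues(estimates=[30.0, 40.0], variances=[16.0, 4.0])
    print(slant, weights)   # 38.0, weights [0.2 0.8] - a compromise nearer the reliable cue

Touching the surface, as in Ernst’s experiment, can then be thought of as shifting these weights in favour of the optical signal that agrees with the felt slope.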

SMALL OR LARGE CIRCLES – EVERYTHING IS SIMPLY AN ILLUSION

The researchers were thus able to confirm what takes place unconsciously and continuously in everyday life when we reach for objects or climb steps. In addition to visual information, the brain also uses tactile feedback from the hands and legs to obtain a spatial impression. In this case, information from the motor system is matched with that from the visual system and can affect the weighting of optical stimuli almost like a teacher.

Fig. 2: The Ebbinghaus illusion. The test subjects look at a table with one circle of large and one circle of small rings; in the centre of each there is an aluminium disk of identical size. In the grasping experiment, the test subjects are asked to pick up the disk, and infrared-emitting diodes on the thumb and first finger make it possible to measure the grasping distance between the fingertips. In the perception experiment, the test subjects are asked to match a reference circle on a monitor to the size of the aluminium disk. Volker Franz’s results show that, quantitatively, the motor action is duped just as much as the visual system – in both experiments the test subjects estimate the disk surrounded by the large circles to be smaller than the one surrounded by the small circles. (FIG.: MPI FOR BIOLOGICAL CYBERNETICS)


These results rock the foundations of theoretical considerations that have emerged primarily from investigations of brain-damaged patients in whom one or the other branch of perception is disturbed in its function. According to this view, there are two independent paths in the brain that play a part in the perception of objects – one path is responsible for the conscious and immediate perception of stimuli (“perception”), whilst the other checks the object’s relevance for action (“action”). In 1995, scientists from the University of Verona (Italy) and the University of Western Ontario (Canada) appeared to have confirmed this theory neatly in healthy test subjects. Their experiments seemed to show that the so-called Ebbinghaus illusion, a well-known perceptual illusion, only leads us astray when we see it and not when we reach for it. Although the two central circles (fig. 2) were judged to be of different sizes, the test subjects selected by Aglioti, DeSouza and Goodale opened their hand equally wide in both situations when they were asked to grasp the circular disc.

However, for Volker Franz from the MPI psychophysics work group doubts remained, not least because he and two professors – Manfred Fahle from the University of Bremen and Karl Gegenfurtner from the University of Magdeburg – believed they had discovered methodological deficiencies in the study referred to above. They repeated the experiment under modified conditions and, in actual fact, Franz obtained exactly the opposite result. On grasping the Ebbinghaus figures, the test subjects from Tübingen opened their hands to different widths, as (incorrectly) instructed by their visual system. The motor action is therefore misled too, and from a quantitative point of view it errs in a similar way to our sight. The “rediscovered” illusion thus led to disillusionment: the apparently wonderful evidence for the hypothesis of separate processing paths had been ruined.

Nevertheless, the psychologist from Tübingen plays the role of the spoilsport with composure. “In science one has to be prepared for surprises like these. It is for precisely this reason that the repetition of fundamental experiments is so important. The result does not necessarily refute the hypothesis, but for the time being it merely strips it of one basic proof that is, admittedly, seductively simple.” In the meantime, Volker Franz is continuing his work in collaboration with Prof. Odmar Neumann’s work group in Bielefeld. In masking experiments, the researchers present their test subjects with two slightly different visual stimuli in quick succession.


The two stimuli each consist of the inferred outlines of a square and differ in size as well as in the orientation presented (see diagram at the bottom of page 48). The stimuli are shown one after another so rapidly – in a technique known as “metacontrast” – that most of the test subjects report noticing only the second stimulus in each case. However, if one asks the same test subjects to press a button in response to the stimulus, either right or left depending on its orientation, something strange happens. The test subjects react more quickly if the first stimulus (which they have not consciously perceived) has the same orientation as the second. Conversely, they need longer if a square with a different orientation is presented before the second, now consciously perceived stimulus. Therefore, although they cannot give any information about the first stimulus, it evidently plays a part in the execution of an action – the “action” path is affected without the “perception” path apparently taking any notice of it. If these initial results of the masking experiments are confirmed in further tests, this would support the Milner-Goodale theory of two independent processing routes, and the scientists’ world would be back on course again.

Bringing images to life is not only the aim of Hollywood’s dream factory, but is also on the mind of many a Max Planck scientist. The scientists are asking how it is possible to reconstruct the complete, spatially characterised profile of a person from a picture or a photo of them. The problem – referred to as an under-determined problem – is that of reconstructing a three-dimensional object from a purely two-dimensional document. To achieve this, one has to fall back on empirical knowledge: the knowledge of what objects of this type (such as faces) look like when viewed from a different perspective.

MORPHS – ARTIFICIALLY MANUFACTURED HEADS

Evidently, our brain has no difficulty doing this, since it solves problems of this type on a daily basis. Our eyes constantly transmit nothing more than projections of the three-dimensional outside world onto a two-dimensional retina – our brain has learnt to establish the third dimension from this data. It fills in the open parameters from experience: we have already seen countless faces in our lives and know what they should look like, even if the detailed information necessary to do this is not immediately available to us.

In 1999, the physicist Volker Blanz, working in the Tübingen group, succeeded in imitating the brain in this respect. He developed an algorithm with which a computer can reconstruct a complete head from the two-dimensional image of a single face. To do this, however, he first had to “supply” the computer with the foreknowledge of what faces in our world look like. Prior to this, Thomas Vetter and Niko Troje of the same work group had already developed a method for modelling and recording faces in three dimensions. Using a special device, the scientists scanned as many faces as possible, viewed from all sides, and fed the data into a computer. In the meantime, the database of faces in Tübingen has grown to over 200 heads, divided strictly into half male and half female faces. Based on specific characteristics such as the position of the eyes, nose, mouth, and ears, the computer is now able – with the help of a mixed image sequence called a morph – to establish an average face from the more than 200 recorded data heads (fig. 3): the so-called average value head (generic face model).
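The idea behind such morphs is easiest to see when faces are stored as vectors of corresponding 3D vertex coordinates: the average face is then simply the mean vector, and a morph a linear blend. The sketch below illustrates only this principle; the array layout, sizes and random data are stand-ins, not the format of the Tübingen face database.

    import numpy as np

    # Hypothetical face database: each scanned head is a flat vector of
    # corresponding 3D vertex coordinates (x1, y1, z1, x2, y2, z2, ...).
    rng = np.random.default_rng(0)
    n_heads, n_vertices = 200, 5000
    heads = rng.normal(size=(n_heads, 3 * n_vertices))

    # The "average value head" is simply the mean of all shape vectors.
    average_head = heads.mean(axis=0)

    def morph(face_a, face_b, alpha):
        """Linear blend of two registered faces; alpha=0 gives face_a, alpha=1 face_b."""
        return (1.0 - alpha) * face_a + alpha * face_b

    # A fifty-percent morph between two heads from the database.
    halfway = morph(heads[0], heads[1], 0.5)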

Fig. 3: A neutral average value face was calculated from 100 male and 100 female faces (top row). On the left of the picture is the female average value face, on the right the male, with the series of images showing the intermediate stages. The second row shows morphs of the faces of two people of different sexes. The faces in the third row originate from a female person and were masculinised along the row of images from left to right (the original is in the “female” box). The faces in the fourth row originate from a male person and were feminised along the row of images from right to left (the original is in the “male” box). In each case there is also one image of a “super” female and one of a “super” male face (to the left and right respectively of the three dots).

Fig. 4: State-of-the-art computer technology was used to investigate the relative significance of shape and movement information in the recognition of faces. Using Famous Technologies’ face animation system, it is possible to record facial expressions and transfer them to any number of computerised models of human heads. Using the “morphable model” designed by the scientists in Tübingen, it is possible to systematically change the shape information in a face. Combining both techniques makes it possible to blend the shape and movement information of faces at will.

This average value head was the starting point for the algorithm that Volker Blanz created to reconstruct a spatial face profile from a model picture. To do this, the average value head must be photographed under the same conditions (such as position, lighting, etc.) as the model picture. This photo is compared pixel by pixel with the model, and a sort of “average value” is created using a special process. The process is complicated because one has to pay attention to the appropriate correspondence – naturally, one can only compare an eye with an eye and not with the mole next to it. The researchers then transfer the picture thus obtained back onto the three-dimensional head. To do this, they simply reverse the process which they used at the beginning to make a photo from the three-dimensional average value head. The result surprises even professional film makers: Audrey Hepburn looks at us, turns her head, we can look at her profile and even set her face in motion – all this although we only had a photo of her as a model.
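Stripped of pose and lighting, the procedure described above is an analysis-by-synthesis fit: adjust the model’s parameters until the synthesised image matches the photo. The toy sketch below shows only that core idea with a purely linear image model and random placeholder data; the real reconstruction additionally has to estimate pose, illumination and dense correspondence, which are ignored here.

    import numpy as np

    # Toy stand-in for the morphable model: a face image is approximated as the
    # average face image plus a linear combination of a few components.
    rng = np.random.default_rng(1)
    n_pixels, n_components = 1024, 20
    average_image = rng.normal(size=n_pixels)
    components = rng.normal(size=(n_pixels, n_components))

    # A "photo" of an unknown face, synthesised here from hidden coefficients.
    true_coeffs = rng.normal(size=n_components)
    photo = average_image + components @ true_coeffs

    # Analysis by synthesis in its simplest form: find the coefficients whose
    # synthesised image best matches the photo, pixel by pixel (least squares).
    fitted_coeffs, *_ = np.linalg.lstsq(components, photo - average_image, rcond=None)
    reconstruction = average_image + components @ fitted_coeffs
    print(np.abs(reconstruction - photo).max())   # close to zero for this toy case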


In the meantime, the database of faces and the “morphable model” based on it have become the starting point for other exciting scientific questions. Until now, only a few scientific studies have been concerned with the movement or animation of faces and the effect this has on the recognition of people. The mathematician and biologist Barbara Knappmeyer would like to discover what role facial expression plays in the recognition and identification of faces. Everyone knows the inimitable smile of the famous actor Jack Nicholson. Would we recognise this smile again if it were not Jack Nicholson who was snarling, but Tom Cruise? In collaboration with the British psychologist Ian Thornton, Barbara Knappmeyer has therefore brought movement into “her” faces. First of all, she highlighted characteristic points in the faces of test subjects (fig. 4) and filmed them while they were speaking, chewing, laughing, frowning, and making other facial expressions. She fed the film sequences into the computer, which carried out the rest. It was then possible to link every face from the database with every characteristic movement of a film sequence – “Tom” and “Jack”, two randomly selected virtual faces from the database, can now exchange their smiles at will.

MOVING FEATURES – RECOGNITION IN THE BRAIN

In the first phase of the experiment, the test subjects were allowed to become familiar with the moving faces of Tom and Jack; here, every face was given its own appropriate expression. Once the test subjects had made friends with their virtual companions, the test phase began. A chewing face appeared on the screen – either Jack or Tom, or a mixture of Jack and Tom, a so-called morph. On a continuous scale, the morph may lie nearer to Jack, nearer to Tom or somewhere in the middle (fig. 5). What is measured is the dividing point on the scale between Tom and Jack at which the test subjects indicate that they recognise Jack instead of Tom in the face. The researcher then asked herself whether this dividing point shifts in favour of Jack if the morph is simultaneously animated with Jack’s characteristic chewing.

The result is clear. The characteristic movement of the faces learned during the training phase demonstrably influenced recognition of the face. On the way from Tom to Jack, Jack clearly came to light earlier if the morph moved like Jack, and conversely appeared later if it moved like Tom. “Thus it appears that the face recognition system in the human brain not only uses statistical information about face shape, but also the specific movement of faces,” comments Barbara Knappmeyer on the initial results of her studies. The movement can be as varied and complex as a person’s facial expressions. The brain has learned to deal with such complex questions and, in daily life, can fall back at any time on an appropriate tool to identify the world around us. All this happens without our (conscious) help – and, like Alice in Wonderland, we watch in fascination as the cat disappears and only its grin is left behind.
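The dividing point described here is what psychophysicists usually estimate by fitting a psychometric function to the proportion of “Jack” answers at each morph level (fig. 5 shows the anticipated curves). The sketch below illustrates such a fit with invented response proportions; it is not Knappmeyer’s data or analysis code.

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented data: proportion of "Jack" answers at each morph level
    # (0 = pure Tom, 1 = pure Jack).
    morph_level = np.array([0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0])
    p_jack = np.array([0.02, 0.10, 0.35, 0.55, 0.75, 0.95, 0.99])

    def psychometric(x, pse, slope):
        """Logistic psychometric function; pse is the fifty-percent (dividing) point."""
        return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

    (pse, slope), _ = curve_fit(psychometric, morph_level, p_jack, p0=[0.5, 0.1])
    print(round(pse, 2))   # the dividing point on the Tom-to-Jack scale

    # If the same morphs move like Jack, the fitted dividing point is expected to
    # shift toward the Tom end, i.e. "Jack" is reported earlier along the scale.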

Fig. 5: Thought experiment – do characteristic movements of faces play a part in recognition? Using a “morphing sequence”, which reflects a successive change of the face from Tom into Jack, test subjects are asked which faces in this sequence look more like Tom or more like Jack. If facial movement plays a part in recognition, one would expect more “Jack” answers if the face moves like Jack’s, and vice versa. The effect of the movement information ought to be particularly marked in the region of the “fifty percent morph”, as the shape information is ambiguous at this point. The diagram shows the anticipated curves; the experimental findings do in fact confirm this expectation.

RAINER ROSENZWEIG

