
Page 1: Essays

Essays

1. Would you consider something to be intelligent if it could hold a conversation with you? Discuss with reference to ELIZA or PARRY, and the Turing and Loebner tests.

2. Discuss the extent to which Artificial Intelligence has increased, or changed, our understanding of what it means to be intelligent.

3. Consider the following: expert systems, PARRY, SAM (Script Applier Mechanism). With reference to the Chinese Room argument, discuss the extent to which each of them can be said to understand the world.

4. Discuss the relationship between the arguments made by Turing about the Turing Test, and by Searle about the Chinese Room.

5. Discuss the plausibility of a Language of Thought as a representation system in the brain and what its relationship to AI could or should be.

6. Describe how you would design a computer conversationalist (perhaps like CONVERSE) and what conversational abilities you would hope to put in it, linking these to possible processes as far as you can.

Page 2: Essays

COM1070: Introduction to Artificial Intelligence: week 4

Yorick Wilks
Computer Science Department
University of Sheffield
www.dcs.shef.ac.uk/~yorick

Page 3: Essays

Arguments about meaning and understanding (and programs):

Searle’s Chinese Room argument

The Symbol Grounding argument

Bar-Hillel’s argument about the impossibility of machine translation

Page 4: Essays

Searle’s Example: The Chinese Room

An operator O. sits in a room; Chinese symbols come in which O. does not understand. He has explicit instructions (a program!) in English on how to get an output stream of Chinese characters from all this, so as to generate “answers” from “questions”. But of course he understands nothing, even though Chinese speakers who see the output find it correct and indistinguishable from the real thing.

Page 6: Essays

The Chinese Room

Read chapter 6 in Copeland (1993): The curious case of the Chinese Room.

Clearer account: pp. 292-297 in Sharples, Hogg, Hutchinson, Torrance and Young (1989) Computers and Thought. MIT Press: Bradford Books.

Original source: Searle, J. (1980) ‘Minds, Brains and Programs’, Behavioral and Brain Sciences 3.

Page 7: Essays

Searle is an important philosophical critic of Artificial Intelligence. See also his more recent book: Searle, J.R. (1997) The Mystery of Consciousness. Granta Books, London.

Weak AI: the computer is a valuable tool for the study of mind, i.e. it can be used to formulate and test hypotheses rigorously.

Strong AI: appropriately programmed computer really is a mind, can be said to understand, and has other cognitive states.

Page 8: Essays

Searle is an opponent of strong AI, and the Chinese room is meant to show what strong AI is, and why it is wrong.

It is an imaginary Gedankenexperiment (thought experiment), like the Turing Test.

Page 9: Essays

Can digital computers think?

One could take this as an empirical question - wait and see whether AI researchers manage to produce a machine that thinks.

Empirical means something which can be settled by experimentation and evidence gathering.

Example of empirical question:

Are all ophthalmologists in New York over 25 years of age?

Page 10: Essays

Example of non-empirical question:

Are all ophthalmologists in New York eye specialists?

Searle: ‘Can a machine think?’ is not an empirical question. Something merely following a program could never think, says Searle.

Contrast this with Turing, who believed:

‘Can machines think?’ was better seen as a practical/empirical question, so as to avoid the philosophy (it didn’t work!).

Page 11: Essays

Back into the Chinese Room

Operator in room with pieces of paper.

Symbols written on paper which operator cannot understand.

Slots in wall of room - paper can come in and be passed out.

Operator has set of rules telling him/her how to build, compare and manipulate symbol-structures using pieces of paper in room, together with those passed in from outside.

Page 12: Essays

Example of rule:

If the Input-slot pattern is XYZ, write it on the next empty line of the exercise book labelled ‘Input store’.

Transform this into a set of bits (say, 1001001111), then perform a specified set of manipulations on those bits (giving another bit string).

Then pair this result with Chinese characters, put it in the ‘Output store’ and push it through the Output slot.
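A minimal sketch of this rule book mechanised, with a hypothetical rule table and encodings invented for illustration (nothing here is from Searle’s paper): every step is pure symbol manipulation, and nothing requires knowing what any symbol means.

```python
# A toy "Chinese Room" operator. The lookup tables and the bit
# manipulation are invented for illustration; the point is that the
# procedure never consults the meaning of any symbol.

INPUT_TO_BITS = {"XYZ": "1001001111"}      # encoding of input patterns
BITS_TO_OUTPUT = {"0110110000": "它很好"}   # pairing of results with Chinese characters

input_store = []   # the exercise book labelled 'Input store'
output_store = []  # the 'Output store'

def manipulate(bits: str) -> str:
    """A specified, purely formal manipulation (here, flip every bit)."""
    return "".join("1" if b == "0" else "0" for b in bits)

def operate(pattern: str) -> str:
    """Follow the rule book for one input slip."""
    input_store.append(pattern)          # rule 1: record it in the Input store
    bits = INPUT_TO_BITS[pattern]        # rule 2: transform into bits...
    result = manipulate(bits)            # ...and manipulate them
    answer = BITS_TO_OUTPUT[result]      # rule 3: pair result with Chinese characters
    output_store.append(answer)
    return answer                        # push it through the Output slot

print(operate("XYZ"))  # emits Chinese the operator does not understand
```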

Page 13: Essays

But symbols mean nothing to operator.

Instructions correspond to program which simulates linguistic ability and understanding of native speaker of Chinese.

Sets of symbols passed in and out correspond to sentences of meaningful dialogue.

More than this: Chinese Room program is (perhaps!) able to pass the Turing Test with flying colours!

Page 14: Essays

According to Searle, behaviour of operator is like that of computer running a program. What point do you think Searle is trying to make with this example?

Page 15: Essays

Searle: Operator does not understand Chinese - only understands instructions for manipulating symbols.

Behaviour of operator is like behaviour of computer running same program.

Computer running program does not understand any more than the operator does.

Page 16: Essays

Searle: operator only needs syntax, not semantics.

Semantics - relating symbols to real world.

Syntax - knowledge of formal properties of symbols (how they can be combined).

Mastery of syntax: mastery of set of rules for performing symbol manipulations.

Mastery of semantics: to have understanding of what those symbols mean (this is the hard bit!!)

Page 17: Essays

Example (from Copeland):

Arabic sentence

Jamal hamati indaha waja midah

Two syntax rules for Arabic:

a) To form the I-sentence corresponding to a given sentence, prefix the whole sentence with the symbols ‘Hal’

b) To form the N-sentence corresponding to any reduplicative sentence, insert the particle ‘laysa’ in front of the predicate of the sentence.

Page 18: Essays

What would the I-sentence and N-sentence corresponding to the Arabic sentence be? (The sentence is reduplicative and its predicate consists of everything following ‘hamati’.)

Jamal hamati indaha waja midah

Page 19: Essays

But the syntax rules tell us nothing about the semantics. ‘Hal’ forms an interrogative, and ‘laysa’ forms a negation. The question asks whether your mother-in-law’s camel has belly ache:

Hal jamal hamati indaha waja midah

and second sentence answers in the negative:

Laysa indaha waja midah

According to Searle, computers are just engaging in syntactical manoeuvres like this, as the sketch below illustrates.
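A small rendering of the two rules as string operations (my own sketch, not Copeland’s): the program applies them perfectly while having no idea that it is asking about, or denying, a camel’s belly ache. Applying rule (b) to the whole sentence yields the full N-sentence; the slide’s short negative reply simply drops the subject.

```python
# Purely syntactic rule-following, after Copeland's Arabic example.
# The rules rearrange symbols; they say nothing about meaning.

SENTENCE = "jamal hamati indaha waja midah"

def i_sentence(sentence: str) -> str:
    """Rule (a): prefix the whole sentence with 'Hal' to form the I-sentence."""
    return "Hal " + sentence

def n_sentence(sentence: str) -> str:
    """Rule (b): insert 'laysa' in front of the predicate (everything
    following 'hamati') to form the N-sentence."""
    subject, predicate = sentence.split(" hamati ", 1)
    return f"{subject} hamati laysa {predicate}"

print(i_sentence(SENTENCE))  # Hal jamal hamati indaha waja midah
print(n_sentence(SENTENCE))  # jamal hamati laysa indaha waja midah
```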

Page 20: Essays

Remember back to PARRY

PARRY was not designed to show understanding, but was often thought to do so. We know it worked with a very simple but large mechanism:

Why are you in the hospital?
I SHOULDN’T BE HERE.
Who brought you here?
THE POLICE.
What trouble did you have with the police?
COPS DON’T DO THEIR JOB.
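A crude illustration of that kind of mechanism (the patterns below are invented, and PARRY’s real rule base was far larger, though equally devoid of understanding): keyword spotting mapped to canned replies.

```python
# A toy PARRY-style responder: scan for a keyword, emit a canned reply.
# Hypothetical rules, for illustration only.

RULES = [
    ("hospital", "I SHOULDN'T BE HERE."),
    ("brought",  "THE POLICE."),
    ("police",   "COPS DON'T DO THEIR JOB."),
]

def respond(utterance: str) -> str:
    """Return the canned reply for the first keyword found, if any."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return "WHY DO YOU ASK?"  # default deflection when nothing matches

for question in ["Why are you in the hospital?",
                 "Who brought you here?",
                 "What trouble did you have with the police?"]:
    print(question, "->", respond(question))
```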

Page 21: Essays

Strong AI: Machine can literally be said to understand the responses it makes.

Searle’s argument is that, like the operator in the Chinese Room, PARRY’s computer does not understand anything in its responses - which is certainly true of PARRY, but is it true in principle, as Searle wants?

Page 22: Essays

Searle: Program carries out certain operations in response to its input, and produces certain outputs, which are correct responses to questions.

But it hasn’t understood a question any more than the operator in the Chinese Room would have understood Chinese.

Page 23: Essays

Questions: is Searle’s argument convincing?

Does it capture some of your doubts about computer programs?

Page 24: Essays

Suppose for a moment Turing had believed in Strong AI. He might have argued:

a computer succeeding in the imitation game will have the same mental states that would have been attributed to a human.

E.g. understanding the words of the language being used to communicate.

But, says Searle, the operator cannot understand Chinese.

Page 25: Essays

Treat the Chinese Room system as a black box and ask it (in Chinese) if it understands Chinese -

“Of course I do”

Ask operator (if you can reach them!) if he/she understands Chinese -

“Search me, it’s just a bunch of meaningless squiggles”.

Page 26: Essays

Responses to Searle:

1. Insist that the operator can in fact understand Chinese -

Like the case of a person who plays chess without knowing the rules of chess, operating under post-hypnotic suggestion.

Compare blind-sight subjects, who can see but do not agree that they can - consciousness of knowledge may be irrelevant here!

Page 27: Essays

2. Systems Response (so called by Searle)

Concede that the operator does not understand Chinese, but hold that the system as a whole, of which the operator is a part, DOES understand Chinese.

Copeland: Searle makes an invalid argument (operator = Joe)

Premiss - No amount of symbol manipulation on Joe’s part will enable Joe to understand the Chinese input.

Page 28: Essays

Therefore no amount of symbol manipulation on Joe’s part will enable the wider system of which Joe is a component to understand the Chinese input.

A burlesque of the same form makes plain that the conclusion doesn’t follow:

Page 29: Essays

Premiss: Bill the cleaner has never sold pyjamas to Korea.

Therefore the company for which Bill works has never sold pyjamas to Korea.
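To display the form of the fallacy (a schematic rendering of my own, not Copeland’s notation):

```latex
% The inference the systems reply attacks: a predicate failing of a
% part need not fail of the whole that contains it.
\[
\neg\,\mathrm{Understands}(\mathrm{Joe})
\;\;\not\Rightarrow\;\;
\neg\,\mathrm{Understands}\bigl(\mathrm{System}(\mathrm{Joe})\bigr)
\]
% Exactly as: Bill has never sold pyjamas to Korea, yet the company
% Bill works for may well have.
```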

Page 30: Essays

Searle’s rebuttal of the systems reply: if the symbol operator doesn’t understand Chinese, why should you be able to say that the symbol operator (Joe) plus bits of paper plus the room understands Chinese?

System as a whole behaves as though it understands Chinese. But that doesn’t mean that it does.

Page 31: Essays

Recent restatement of Chinese Room Argument

From Searle (1997) The Mystery of Consciousness

1. Programs are entirely syntactical

2. Minds have a semantics

3. Syntax is not the same as, nor by itself sufficient for, semantics

Therefore programs are not minds. QED

Page 32: Essays

Step 1 just states that a program written down consists entirely of rules concerning syntactical entities, that is, rules for manipulating symbols. The physics of the implementing medium (i.e. the computer) is irrelevant to the computation.

Step 2 just says what we know about human thinking. When we think in words or other symbols we have to know what those words mean - a mind has more than uninterpreted formal symbols running through it, it has mental contents or semantic contents.

Page 33: Essays

Step 3 states the general principle that the Chinese Room thought experiment illustrates. Merely manipulating formal symbols does not guarantee the presence of semantic contents.

‘...It does not matter how well the system can imitate the behaviour of someone who really does understand, nor how complex the symbol manipulations are; you can not milk semantics out of syntactical processes alone...’

(Searle, 1997)

Page 34: Essays

The Internalised Case

Suppose the operator learns up all these rules and tables and can do the trick in Chinese. On this version, the Chinese Room has nothing in it but the operator.

Can one still say the operator understands nothing of Chinese?

Consider: a man appears to speak French fluently, but says no, he doesn’t really, he’s just learned up a phrase book. He’s joking, isn’t he?

Page 35: Essays

You cannot really contrast a person with rules-known-to-the-person.

We shall return at intervals to the Chomsky view that language behaviour in humans IS rule following (and he can determine what the rules are!)

Page 36: Essays

Searle says this shows the need for semantics, but ‘semantics’ means two things at different times:

Access to objects via FORMAL objects (more symbols) as in logic and the formal semantics of programs.

Access to objects via physical contact and manipulation--robot arms or prostheses (or what children do from a very early age).

Page 37: Essays

Semantics fun and games

Programs have access only to syntax (says S.). If he is offered a formal semantics (which is of one interpretation rather than another) - that’s just more symbols (S’s silly reply).

Soon you’ll encounter the ‘formal semantics of programs’ so don’t worry about this bit.

If offered access to objects via a robot prosthesis from inside the box: Searle replies that’s just more program, or it won’t have reliable ostension/reference like us.

Page 38: Essays

Remember Strong AI is the straw man of all time.

“computers, given the right programs can be literally said to understand and have other cognitive states”. (p.417)

Searle has never been able to show that any AI person has actually claimed this!

[Weak AI – mere heuristic tool for study of the mind]

Page 39: Essays

Consider the internalised Chinese “speaker”: is he mentally ill? Would we even consider that he didn’t understand? What semantics might he lack? For answering questions about S’s paper? For labels, chairs, hamburgers?

The residuum in S’s case is intentional states.

Page 40: Essays

Later moves:

S makes having the right stuff necessary for having I-states (becoming a sort of biological materialist about people: thinking/intentionality requires our biological make-up, i.e. carbon not silicon. Hard to argue with this, but it has no obvious plausibility).

He makes no program necessary - this is just circular - and would commit him to withdrawing intentionality from cats if .... etc. (Putnam’s cats).

Page 41: Essays

The US philosopher Putnam made it hard to argue that things must have certain properties. He said: suppose it turned out that all cats were robots from Mars. What would we do? Stop calling cats ‘cats’, since they didn’t have the ‘necessary property’ ANIMATE? Just carry on and agree that cats weren’t animate after all?

Page 42: Essays

Dennett: I-state is a term in S’s vocabulary for which he will allow no consistent set of criteria – but he wants people/dogs in and machines out at all costs.

Suppose an English speaker learned up Chinese by tables and could give a good performance in it. (He would be like the operator OUT OF THE ROOM.)

Would Searle have to say he had no I-state about things he discussed in Chinese?

Page 43: Essays

Is there any solution to the issues raised by Searle’s Chinese Room? Are there any ways of giving the symbols real meaning?

Page 44: Essays

Symbol grounding

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

A copy of the paper can be obtained from: http://www.cogsci.soton.ac.uk/harnad/genpub.html

Computation consists of the manipulation of meaningless symbols.

For them to have meaning, they must be grounded in a non-symbolic base.

Like the idea of trying to learn Chinese from a Chinese dictionary.
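A toy illustration of the dictionary-circularity point (the symbols below are invented): every definition is just more undefined symbols, so the lookup never bottoms out in anything meaningful.

```python
# "Learning Chinese from a Chinese dictionary": chasing definitions
# only ever reaches more symbols. (Entries invented for illustration.)

DICTIONARY = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D", "A"],
    "D": ["A", "B"],
}

def lookup(symbol, seen=None):
    """Chase definitions recursively; all we ever collect is more symbols."""
    seen = set() if seen is None else seen
    for s in DICTIONARY.get(symbol, []):
        if s not in seen:
            seen.add(s)
            lookup(s, seen)
    return seen

print(lookup("A"))  # every symbol in the dictionary, and nothing else

# Harnad's alternative, crudely: a grounded symbol is hooked to something
# non-symbolic, e.g. an iconic (sensory) representation - hypothetical:
GROUNDING = {"A": lambda sensors: sensors["camera"]}
```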

Page 45: Essays

Not enough for symbols to be ‘hooked up’ to operations in the real world. (See Searle’s objection to robot answer.)

Symbols need to have some intrinsic semantics or real meaning.

For Harnad, symbols are grounded in iconic representations of the world.

Alternatively, imagine that symbols emerge as a way of referring to representations of the world - representations that are built up as a result of interactions with the world.

Page 46: Essays

For instance, a robot that learns from scratch how to manipulate and interact with objects in the world.

(Remember Dreyfus argument that intelligent things MUST HAVE GROWN UP AS WE DO)

In both accounts, symbols are no longer empty and meaningless because they are grounded in a non-symbolic base - i.e. grounded in meaningful representations.

(Cf. formal semantics on this view!)

Page 47: Essays

Does Harnad’s account of symbol grounding really provide an answer to the issues raised by Searle’s Chinese Room?

What symbol grounding do humans have?

Symbols are not inserted into our heads ready-made.

For example, before a baby learns to apply the label ‘ball’ to a ball, it will have had many physical interactions with it, picking it up, dropping it, rolling it etc.

Page 48: Essays

The child eventually forms a concept of what ‘roundness’ is, but this is based on a long history of physical interactions with the object.

Perhaps robotic work in which symbols emerge from interactions with the real world might provide a solution.

See work on Adaptive Behaviour e.g. Rodney Brooks.

Page 49: Essays

Another famous example linking meaning/knowledge to understanding:

This is the argument that we need stored knowledge to show understanding.

Remember McCarthy’s dismissal of PARRY - not AI, because it did not know who the US President was.

Is knowledge of meaning different from knowledge of things? ‘The Edelweiss is a flower that grows in the Alps’.

Page 50: Essays

A famous example from the history of machine translation (MT): Bar-Hillel’s proof that MT was IMPOSSIBLE (not just difficult).

Little Johnny had lost his box.
He was very sad.
Then he found it.
The box was in the PEN.
Johnny was happy again.

Page 51: Essays

Bar-Hillel’s argument: the words are not difficult, nor is the structure.

To get the translation right in a language where pen is NOT both playpen and writing pen, you need to know about the relative sizes of playpens, boxes and writing pens.

I.e. you need a lot of world knowledge, as the sketch below illustrates.
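A toy disambiguator for the ‘pen’ example (the sizes and the rule are mine, not Bar-Hillel’s): the dictionary offers two senses, and only a scrap of world knowledge about relative sizes picks the right one.

```python
# Bar-Hillel's 'pen' sentence as a toy word-sense disambiguator.
# Only crude world knowledge (typical sizes, invented here) selects
# 'playpen' over 'writing pen' as the thing a box can be in.

TYPICAL_SIZE_CM = {
    "box": 30,
    "writing pen": 1,
    "playpen": 100,
}

def disambiguate_pen(contained_object: str) -> str:
    """Pick the sense of 'pen' big enough to contain the given object."""
    for sense in ("writing pen", "playpen"):
        if TYPICAL_SIZE_CM[sense] > TYPICAL_SIZE_CM[contained_object]:
            return sense
    return "writing pen"  # nothing fits; fall back to the commonest sense

# "The box was in the pen": the pen must be able to contain the box.
print(disambiguate_pen("box"))  # playpen
```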

Page 52: Essays

One definition of AI is: knowledge-based processing.

Bar-Hillel, and those who believe that in AI, look at the ‘box’ example and AGREE about the problem (it needs knowledge for its solution) but DISAGREE about what to do (for AI it’s a task, for B-H it’s impossible).

Does the web change this question at all?