Artificial Stupidity: The Myth of the Intelligent Agent
Richard Walker, Koeln, November 29, 2005
Artificial Stupidity (Examples)
An old paper (good version)
An old paper (bad version)
A nice drawing
A spreadsheet
Artificial Stupidity
In each of these cases, the software performs a task which could easily have been performed by a human being.
This introduces Artificial Stupidity.
Definition: 'Artificial stupidity' is the stupidity produced by attempts to replace complex human decision-making with so-called 'intelligent' software.
The Argument /1
Software designers want to build "intelligent" systems in which the computer takes the initiative on behalf of the user ("intelligent agents").
Intelligent agents systematically fail: Artificial Stupidity.
There exists a (very large) set of decision-making problems where computers cannot, in principle, replace human beings.
The limitations have nothing to do with technology.
Even if they were based on a perfect simulation of the brain, "intelligent agents" would not be able to take decisions in the same way as a human being.
This depends on the computer's "position in the world" (ecology).
If we do build intelligent agents, they will have an "alien intelligence".
The Argument /2
BUT designers continue in their attempts to build "intelligent" software.
Many of these attempts are ergonomically disastrous, particularly when they mimic human intelligence.
Intelligent agents are socially and culturally dangerous.
An alternative design strategy: computers as a tool.
Consequences for design.
A note of caution…
Herbert Simon (1965)
“Machines will be capable, within twenty years, of doing any work that a man can do”
The Shape of Automation for Men and Management
Martha Pollack (1991)
"We want to build intelligent actors, not just intelligent thinkers. Indeed, it is not even clear how one could assess intelligence in a system that never acted -- or, put otherwise, how a system could exhibit intelligence in the absence of action."
‘Computers and Thought Lecture’, IJCAI-91
Intelligent Agents
"Our product contains an intelligent agent": a semantic debate.
Definition: an intelligent agent is a piece of software that:
Acts / takes decisions (sends an email, makes a recommendation, concludes a purchase)
Acts autonomously (does not merely respond to user input)
Acts in ways which benefit users
Is "intelligent"
Examples
Information management: searching for, filtering, categorizing and distributing information (personal news pages, anti-spam)
E-commerce: buying and selling (e.g. on the stock market), choosing hotels/restaurants, chat-bots
Help systems / technical support: Microsoft paperclip, chat bots
Security: detecting terrorists
Education: intelligent tutors
Testing an Intelligent Agent
Turing criteria (can it match a human being?): error rate at least equal to the minimum acceptable human performance; error mode comparable to the human error mode.
Usefulness criteria (easier): the net benefit must be greater than the net benefit with a human agent; if the cost of a human agent is infinite (task not possible) or very high, the artificial agent must give tangible benefits to the user.
If the cost of a human agent is very high or infinite: Benefits_agent - Costs_agent > 0
Otherwise: Benefits_agent - Costs_agent > Benefits_human - Costs_human
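A minimal sketch of this usefulness test in Python (the function name and the example figures are hypothetical, not from the slides):

```python
def is_useful(benefits_agent, costs_agent, benefits_human=None, costs_human=None):
    """Usefulness criterion: the agent's net benefit must beat the human
    alternative, or simply be positive when no human alternative exists."""
    net_agent = benefits_agent - costs_agent
    if costs_human is None or costs_human == float("inf"):
        return net_agent > 0
    return net_agent > benefits_human - costs_human

# Hypothetical figures: the agent is cheap but causes more damage than it saves
print(is_useful(benefits_agent=10, costs_agent=12, benefits_human=10, costs_human=5))  # False
```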
How Well do they Work?
Many systems not testable because not available:
No systems for choosing flights/hotels/restaurants
No intelligent tutors
Detecting Terrorists
CAPPS II: US system for vetting aircraft passenger lists for potential terrorists; secret algorithm; data mining.
Benefits: cost savings (fewer trained staff), a simple technological fix, reduced staff responsibility.
Costs: fewer trained staff, false security, passengers not flown, pilots who lost their jobs, vulnerability.
Balance: zero benefits, many costs.
Chat Bots
Used in call centers, technical support etc. Example:
Human: I wanted to use you in a demonstration of what a chatbot can do
ALICE: Interesting. I want to find out more about you.
Human: I'm a professor at Cologne University
ALICE: You are a professor at Cologne University? Do you get to meet a lot of people?
Human: A fair number
ALICE: Are you telling the truth?
Human: I don't see why I should lie
ALICE: Look harder.
Costs and Benefits
Benefits: cost savings (fewer trained staff).
Costs: inadequate information to customers; loss of customers.
Simpler versions have an interface to a call center agent, but the Artificial Stupidity remains.
Conclusions
In the real world we use very few intelligent agents.
Those we do use are not very good.
The reasons have nothing to do with technology.
Software can be Autonomous
Washing machine
ABS
Autopilot system
Collision avoidance
Automatic defibrillators
Buying or selling on the stock market
Not perfect, but meets the usefulness criteria.
Agents which Work
Limited number of input parameters.
Context-independent: given the input parameters, the procedure can always be executed in the same way.
Path-independent: previous executions of the procedure are irrelevant.
Simple algorithm (easy to verify).
The algorithm uses little or no background information.
The Washing Machine (Decision to Wash)
Context-independent: the procedure doesn't need to take account of anything outside the washing machine.
Path-independent: it doesn't learn from previous attempts that went wrong.
Limited number of parameters: is there enough water? the water temperature; the desired temperature.
Simple algorithm: IF (enough_water AND temp >= desired_temp) THEN wash (see the sketch below).
No background information: it does not know anything about what kind of clothes it is washing or how they should be washed.
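A minimal Python sketch of this context-free decision procedure (the function name and sensor values are illustrative assumptions, not from the slides):

```python
def decide_to_wash(enough_water: bool, temp: float, desired_temp: float) -> bool:
    """Context-free, path-free decision: depends only on the current sensor
    readings, not on past runs or on anything outside the machine."""
    return enough_water and temp >= desired_temp

# Hypothetical sensor readings
print(decide_to_wash(enough_water=True, temp=40.0, desired_temp=40.0))   # True
print(decide_to_wash(enough_water=False, temp=60.0, desired_temp=40.0))  # False
```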
More Complex Problems
Context dependency: the current state of the user (mood, goals, desires, comfort, health etc.); the current state of the world (including other humans).
Path dependency: user memory (declarative, procedural, autobiographical); reflects past states of the user and the world.
Potentially infinite number of parameters: potentially any aspect of the user or the world, present or past, may be relevant to the problem; different parameters are relevant in different contexts; the problem of how to select the relevant parameters.
Complex algorithm: the algorithm requires complex background information.
Choice of Restaurant (Business Dinner) /1
Context dependency:
My goals: show him how big and powerful we are; show him we don't waste money.
Requirements of the guest: what sort of dinner would please him/her; cultural knowledge.
Constraints: what does my boss want? how much can I put on the expense account? what are the "social rules" for the situation?
Body state: I am very hungry; this is the third business dinner this week.
Emotions: am I in an exploratory or a conservative mood?
Other parties: opinions of colleagues/family …
Choice of Restaurant (Business Dinner) /2
Path dependency:
Previous experience with the customer; previous experience in unknown countries; previous experience with business dinners.
Background knowledge: experience with restaurant advertisements; language knowledge; knowledge of the local cuisine; …
Chaitin/Kolmogorov Complexity
A problem can be characterized by the length of the shortest program capable of coding its solution.
Agents which work have low C/K complexity.
Agents which do not work have high C/K complexity.
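A rough illustration of the underlying idea (my own example, not from the slides): a highly regular object can be generated by a program far shorter than the object itself, while an irregular object has no such shortcut.

```python
import random

# Low descriptive complexity: a 1000-character string produced by a tiny program.
regular = "ab" * 500

# High descriptive complexity: a random string generally has no generating
# program much shorter than the string itself.
irregular = "".join(random.choice("ab") for _ in range(1000))

print(len(regular), len(irregular))  # both 1000 characters long, very different to describe
```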
Artificial Intelligence
Goal: to imitate human cognition (more recently: the brain).
Technologies:
Good Old Fashioned Artificial Intelligence (GOFAI): CYC
Artificial Neural Networks (ANNs): supervised learning (back-prop), unsupervised learning (Kohonen), reinforcement learning, "conscious machines"
Evolutionary computing
The Feasibility of an Artificial Brain
The brain is a physical-chemical system; nothing in principle prevents us from simulating it.
Even if the simulation would be very large, our computing power is catching up.
The human genome contains no more information than Microsoft Office.
Whole-brain simulations: 2015-20.
Long-term possibility of artificial cells.
BUT
Even if we could run a full-scale brain simulation in real time, or build a system which grows,
even if it could learn,
even if machines had "self-awareness", "emotions", "feelings",
we would still not be able to build useful intelligent agents.
Evolution
Many human capabilities are "built in" by evolution:
Emotions (representation of body state): mood affects cognition.
Feelings (the ability to represent and think about emotions).
Low-level perception: e.g. movement → saliency.
High-level perception.
Automatic fear of snakes, spiders…
Automatic behavior: a baby cries for attention.
All this is not very complex (the genome is c. 3 Gb).
Development
The genome codes for a system that develops (incorporates information) during its interaction with the environment.
As cells duplicate, different cells express different genes.
Cell duplication and gene expression are environmentally controlled.
There are 6 × 10^13 cells in the human body (sixty thousand billion), of which 10^11 are neurons.
Each cell has 25,000-30,000 genes which can be on or off.
The expression (activation) of a gene depends on the internal and external environment.
Development /2
During development (interaction with the environment) a huge amount of information is incorporated in the body:
The brain (not just the cognitive parts but also procedural memory: motor procedures, social procedures)
The immune system
Morphology (skeleton, muscles, CV system)
Development itself is deeply context-dependent: people who develop in different societies, physical environments and families develop in different ways.
Human Decisions
All aspects of context are present.
Saliency: determined by "inbuilt" and developed knowledge.
Human responses:
Automatic response: the brain-body system automatically produces the action (balancing while riding a bike); "moral", "ethical" decisions.
Mediated response: mental simulation of actions (includes motor and emotional areas).
Responses are often wrong, but much better than a random response.
Humans respond to any situation, even if they have had no previous experience of it.
In their responses humans use both "built in" and developed capabilities.
Non-Transparency of Decision-Making
Humans are not fully aware of the reasons for their decision-making.
Emotional memory can work in the absence of awareness (Damasio experiments).
The main way of finding out is through self-observation and mental simulation.
Understanding of the reasons for decisions/behavior is extremely poor.
Impossible to articulate in language.
The Critical Obstacle
An agent with genuinely human intelligence would need the full range of information incorporated in the genotype and the phenotype.
Self-report does not work: humans do not understand the reasons for their own decisions.
And even if they did, the volume of information would be too large for practical self-reporting.
Observing actual behavior is not enough: actual behavior is only a small sample of potential behavior; there is no way of knowing reactions to rare (but critical) situations.
The only way to incorporate the information normally used by a human agent would be to:
Build a (growing) system which functions in the same way as a human baby
Bring it up as a human being (in the same culture as human beings)
The Impossibility of Intelligent Agents
There is no way of transferring/incorporating the information required for human-like decision-making.
Therefore impossible…
Failures have led to a loss of enthusiasm in the academic community.
The Rise and Fall of the Intelligent Agent
[Chart: papers/citations per year (Google Scholar), 1995-2005, scale 0-2000. Note: survey on November 20, 2005; data for 2005 extrapolated from partial annual figures.]
John Maynard Keynes
"Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."
The General Theory of Employment, Interest and Money (1936), Ch. 24, "Concluding Notes"
‘Alien intelligence’
Loss of academic enthusiasm does not imply loss of industrial interest.
Small-scale agents are still implemented on a large scale (examples at the beginning of the lecture).
To the extent that a system has goals, values, emotions etc., they will not be human goals (example of video games).
For the foreseeable future such systems will be much simpler than human cognition.
In most cases the problem is simply ignored: systems with no knowledge of the user's goals, emotions, bodily state or context.
Ergonomics
Designers and marketing managers still believe that computers should be intelligent.
Building intelligence into a computer is seen as a 'good thing' and a positive marketing point (the "intelligent washing machine").
Most attempts to make computers/machines intelligent systematically make them harder to use (examples at the beginning).
Broader Effects
There is a powerful economic logic in favor of intelligent agents: they are much cheaper and less demanding than humans.
Intelligent agents are sold (and bought) as replacements for humans: autonomous help agents, 'knowledge management' for call centers, chat bots for e-commerce (fortunately rare).
This destroys employment in jobs requiring human skills.
Broader Effects /2
Forces us to interact with an alien, low-grade intelligence.
Extreme plasticity of the human brain; the "intentional stance".
Tamagotchi; kids and videogames; adults and computer applications (at work, in services).
Hubert Dreyfus
"What we need to be afraid of is not computers with superhuman intelligence, but humans with subhuman intelligence"
What Computers Can't Do, 1972
Design and Artificial Stupidity
An apparent contradiction:
Systems should be simple.
The goal of "intelligence" is to eliminate the need for unnecessary user actions/knowledge, e.g. to make it possible to configure a network without understanding protocols etc.
Eliminating "intelligent agents" seems to make this impossible.
Albert Einstein
"The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience."
(actual quotation)
"Everything should be made as simple as possible, but not simpler"
(usual paraphrase)
Design Guidelines
Examine each possibility for automation: does there exist a context-free procedure which will give guaranteed benefits? If so, use it.
Implement a context-dependent system only when it can be shown to give benefits (user tests) and there is no human alternative (e.g. Google).
Where a decision requires human intelligence, provide information and options, not a decision (e.g. Amazon); see the sketch below.
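A minimal sketch of the "options, not decisions" guideline (the data and function names are hypothetical): the context-free part filters and ranks by simple, verifiable criteria, and the final choice is left to the human.

```python
from typing import Dict, List

def rank_restaurants(restaurants: List[Dict], max_price: float) -> List[Dict]:
    """Context-free step: filter by price and sort by rating.
    The system presents the ranked options; it does not book a table."""
    affordable = [r for r in restaurants if r["avg_price"] <= max_price]
    return sorted(affordable, key=lambda r: r["rating"], reverse=True)

# Hypothetical data; the human makes the final decision.
options = rank_restaurants(
    [{"name": "Da Mario", "avg_price": 35, "rating": 4.4},
     {"name": "Le Coq", "avg_price": 80, "rating": 4.8}],
    max_price=50,
)
print([r["name"] for r in options])  # ['Da Mario']
```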
A Doubt
It is not always clear whether there exists a context-free solution to a problem.
Many problems thought to require "intelligence" have useful context-free solutions: anti-spam, anti-virus, Amazon recommendations, Google.
The final test can only be experimental, BUT experiments will often fail.
Shakespeare
Glendower: I can call spirits from the vasty deep.
Hotspur: Why, so can I, or so can any man; but will they come when you do call for them?
W. Shakespeare, Henry IV, Part 1
Bibliography
Damasio, A. R. (1994). Descartes' Error. New York: G.P. Putnam's Sons.
Damasio, A. R. (1999). The Feeling of What Happens. New York: Harcourt Brace.
Dreyfus, H. L. (1972). What Computers Can't Do: A Critique of Artificial Reason. New York.
Dreyfus, H. L. (1986). Mind over Machine. New York: The Free Press.
Jain, L. C., Z. Chen, et al., Eds. (2002). Intelligent Agents and Their Applications (Studies in Fuzziness and Soft Computing, Vol. 98). Heidelberg: Physica-Verlag.
Pfeifer, R. and C. Scheier (1999). Understanding Intelligence. Cambridge, MA: MIT Press.
Pollack, M. (1991). Computers and Thought Lecture. International Joint Conference on Artificial Intelligence (IJCAI-91), Sydney, Australia: Morgan Kaufmann.
Simon, H. A. (1965). The Shape of Automation for Men and Management. New York: Harper and Row.
West-Eberhard, M. J. (2003). Developmental Plasticity and Evolution. New York: Oxford University Press.
Winograd, T. and F. Flores (1987). Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex Publishing Corporation.