Introduction – Agents & Mobile Agents © Shonali Krishnaswamy
1st Activity
• Form a group
  – Give it a short name, no more than 15 characters.
• Size: minimum 4, maximum 5 students.
• Write down the members' names and the group leader; the list must be signed and passed to me ASAP.
• Don't force anyone to join a group.
Quiz 2.1
• Choose the right answer(s): An A.I. program can be called _______
  A. WetWare
  B. Formula
  C. Intelligent Agent
  D. Intelligent System
  E. Intelligent Apps
  F. Smart System
Quiz 2.2
• Choose the right answer(s): An Intelligent Agent can _____ and _____ .
  A. Perceive and Act
  B. Perceive and React
  C. Sense and Action
  D. Observe and Take Action
Intelligent Agents
Chapter 2
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Analogy
Human agent
• Sensors
  – Eyes, ears, and other organs
• Actuators
  – Hands, legs, mouth, and other body parts
Robotic agent
• Sensors
  – Cameras, infrared range finders, heat sensors, thermometers, etc.
• Actuators
  – Various motors
Smart Car
Group Quiz 2.1
• What is the main difference between a smart car and a normal car in the context of this discussion (AI and agents)?
Agents and environments
• The agent function maps from percept histories to actions:
  f: P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
Vacuum-cleaner world
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
A Simple Agent Function

Percept sequence                        Action
[A, Clean]                              Right
[A, Dirty]                              Suck
[B, Clean]                              Left
[B, Dirty]                              Suck
[A, Clean], [A, Clean]                  Right
[A, Clean], [A, Clean], [A, Dirty]      Suck
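The agent function table above can be encoded directly as a lookup structure. A minimal Python sketch; the names `AGENT_FUNCTION` and `act` are illustrative, not from any library:

```python
# The vacuum-world agent function as a lookup table: each key is a
# percept sequence (a tuple of (location, status) percepts), each
# value the action the table prescribes for that history.
AGENT_FUNCTION = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Clean"), ("A", "Dirty")): "Suck",
}

def act(percept_sequence):
    """Map a percept history to an action, per the table."""
    return AGENT_FUNCTION[tuple(percept_sequence)]

print(act([("A", "Dirty")]))  # -> Suck
```

Note that the table already grows with the length of the percept history; this is the drawback discussed later under table-lookup agents.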
The Concept of Rationality
• A problem in developing an intelligent app or robot is that a human is the creator.
• Human beings have weaknesses.
  – Humans are notorious for 'sour grapes' – believing they did not really want something, like a prize, after not getting it!
• The quality of a creation reflects the quality of its creator.
• Imagine your mom designs a robot as her kitchen helper. Who will produce the better robot: your mom or YOU?
Rational agents
• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
• Performance measure: an objective criterion for success of an agent's behavior.
• E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, time taken, electricity consumed, noise generated, etc.
Rational agents
• Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
  – Today, almost every organization has its own KPI (Key Performance Indicator).
  – What is a Muslim's KPI? What is a Christian's KPI?
Rational agents
• Rationality is distinct from omniscience (all-knowing with infinite knowledge)
• Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)
• An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)
PEAS
• PEAS: Performance measure, Environment, Actuators, Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
  – Performance measure
  – Environment
  – Actuators
  – Sensors
PEAS
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
  – Performance measure:
    • Safe, fast, legal, comfortable trip, maximize profits
  – Environment:
    • Roads, other traffic, pedestrians, customers
  – Actuators:
    • Steering wheel, accelerator, brake, signal, horn
  – Sensors:
    • Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
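A PEAS description lends itself to a simple record type. A minimal sketch, assuming an illustrative `PEAS` dataclass (not a standard API), populated with the taxi-driver specification above:

```python
from dataclasses import dataclass

# Hypothetical record type capturing a PEAS specification.
@dataclass
class PEAS:
    performance: list  # criteria the agent is judged on
    environment: list  # what the agent operates in
    actuators: list    # how the agent acts
    sensors: list      # how the agent perceives

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```

The same structure would hold the medical-diagnosis, part-picking, and tutoring examples on the following slides.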
PEAS
• Agent: Medical diagnosis system
• Performance measure:
  – Healthy patient, minimize costs, lawsuits
• Environment:
  – Patient, hospital, staff
• Actuators:
  – Screen display (questions, tests, diagnoses, treatments, referrals)
• Sensors:
  – Keyboard (entry of symptoms, findings, patient's answers)
PEAS
• Agent: Part-picking robot
• Performance measure:
  – Percentage of parts in correct bins
• Environment:
  – Conveyor belt with parts, bins
• Actuators:
  – Jointed arm and hand
• Sensors:
  – Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure:
  – Maximize student's score on test
• Environment:
  – Set of students
• Actuators:
  – Screen display (exercises, suggestions, corrections)
• Sensors:
  – Keyboard
Agent Environment
• Read this as the environment in which the app or software agent perceives and acts.
Fully observable vs. partially observable
• Fully observable: An agent's sensors give it access to the complete state of the environment at each point in time; otherwise it is partially observable.
• If the agent has no sensors at all, the environment is unobservable.
  – E.g., a robot driver does not know what other drivers are thinking.
Deterministic vs. stochastic
• Deterministic: The next state of the environment is completely determined by the current state and the action executed by the agent.
  – If the environment is deterministic except for the actions of other agents, then the environment is strategic.
• Nondeterministic: Actions are characterized by their possible outcomes without probabilities.
• Stochastic: Uncertainty about outcomes is quantified in terms of probability.
Episodic vs. sequential
• Episodic: The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.
  – A classification task is an example.
• Sequential: The current decision could affect all future decisions.
  – Chess and taxi driving are sequential.
Single agent vs. multiagent
• Single agent: An agent operating by itself in an environment.
  – Crossword puzzle app: single agent
  – Chess app: competitive multiagent environment
• Robot driver: partially cooperative multiagent environment, because each agent wants to avoid collisions.
Static vs. dynamic
• Static: The environment is unchanged while an agent is deliberating. Otherwise, it is dynamic.
• The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.
  – A driving robot's environment is dynamic.
  – Chess without a clock is static.
  – Chess with a clock is semidynamic.
  – Crossword puzzles are static.
Discrete vs. continuous
• Discrete: A limited number of distinct, clearly defined percepts and actions.
  – Chess has a finite number of distinct states.
• Continuous: An infinite number of distinct states.
  – Taxi driving is a continuous-state and continuous-time problem.
  – Camera input is discrete.
Developing a Chess Game
1. Fully observable?  Yes
2. Deterministic?     Yes, but strategic
3. Episodic?          No
4. Static?            Semi
5. Discrete?          Yes
6. Single agent?      No
Environment types

                   Chess with   Chess without   Taxi
                   a clock      a clock         driving
Fully observable   Yes          Yes             No
Deterministic      Strategic    Strategic       No
Episodic           No           No              No
Static             Semi         Yes             No
Discrete           Yes          Yes             No
Single agent       No           No              No
• The environment type largely determines the agent design• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
Quiz 2.3
Determine the environment for a robot waiter at McDonald's:
1. Fully observable?
2. Deterministic?
3. Episodic?
4. Static?
5. Discrete?
6. Single agent?
Agent functions and programs
• An agent is completely specified by the agent function mapping percept sequences to actions
• One agent function (or a small equivalence class) is rational
• Aim: find a way to implement the rational agent function concisely
Table-lookup agent
• \input{algorithms/table-agent-algorithm}
• Drawbacks:
  – Huge table
  – Takes a long time to build the table
  – No autonomy
  – Even with learning, it needs a long time to learn the table entries
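The table-lookup scheme can be sketched in a few lines. `make_table_driven_agent` is an illustrative name, and the two-entry table stands in for the full (in practice huge) table:

```python
# Sketch of a table-driven agent: it appends each new percept to its
# history and looks the whole sequence up in a fixed table.

def make_table_driven_agent(table):
    percepts = []  # the ever-growing percept history

    def agent(percept):
        percepts.append(percept)
        # Returns None if the history is not in the table - the table
        # must cover every possible percept sequence, hence its size.
        return table.get(tuple(percepts))

    return agent

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
```

The closure over `percepts` makes the drawback concrete: the table's size grows exponentially with the length of the percept history.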
Agent program for a vacuum-cleaner agent
• \input{algorithms/reflex-vacuum-agent-algorithm}
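The reflex vacuum agent referenced above is commonly given as a three-branch rule on the current percept only. A minimal sketch, assuming percepts of the form (location, status):

```python
# Reflex vacuum agent: decides using only the current percept,
# ignoring percept history entirely.

def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"
```

Unlike the table-driven agent, this program is constant-size, but it reproduces the same behavior on the vacuum world's agent function table.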
Agent types
• Four basic types, in order of increasing generality:
  – Simple reflex agents
  – Model-based reflex agents
  – Goal-based agents
  – Utility-based agents
Simple reflex agents
Simple reflex agents
• \input{algorithms/d-agent-algorithm}
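A simple reflex agent matches condition-action rules against the current percept only. An illustrative sketch (the rule format and names are assumptions, not a standard API):

```python
# Simple reflex agent: a list of (condition, action) rules, checked in
# order against the current percept; the first matching rule fires.

RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
```

Because the agent never remembers anything, it works only when the correct action can always be read off the current percept, i.e. in (sufficiently) observable environments.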
Model-based reflex agents
Model-based reflex agents
• \input{algorithms/d+-agent-algorithm}
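A model-based reflex agent additionally keeps internal state, updated from each percept, so it can act sensibly under partial observability. An illustrative sketch for the vacuum world (the model and names are assumptions, not the slide's algorithm):

```python
# Model-based reflex agent: maintains a belief about the status of each
# square, and stops (NoOp) once its model says everything is clean.

def make_model_based_vacuum_agent():
    model = {"A": None, "B": None}  # believed status; None = unknown

    def agent(percept):
        location, status = percept
        model[location] = status  # update the world model from the percept
        if all(s == "Clean" for s in model.values()):
            return "NoOp"  # every square known clean: nothing left to do
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    return agent
```

The internal `model` is what a simple reflex agent lacks: it lets the agent know when to stop, even though no single percept says "both squares are clean".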
Goal-based agents
Utility-based agents
Learning agents
GROUP QUIZ 2.3: PEAS
• Write your group's answers on paper.
• 10 minutes.
• Agent: Interactive McDonald's waiter to take orders
• Performance measure:
• Environment:
• Actuators:
• Sensors:
Watch How Autonomous Cars Would Affect Intersections – Video