Transcript of CS 430 Lecture 6 - University of Evansville
CS 430 Artificial Intelligence, Monday, January 23
uenics.evansville.edu/~hwang/s17-courses/cs430/lecture06-agents2…

Lecture 6

Reminder: Homework 1 due today. Questions?


Outline

Chapter 2 - Intelligent Agents
    Structure of Agents

Chapter 3 - Solving Problems by Searching
    Problem-solving Agents
    Problem Definition


Structure of Agents

Agent function describes behavior. Job of AI is to design agent program that implements agent function on a particular architecture (computing device with physical sensors and actuators).


Structure of Agents

Architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.


Table-Driven-Agent Program

Receives: percept
Returns: action
Static data:
    percepts, a sequence of percepts, initially empty
    table, a table of actions, indexed by percept sequences, initially fully specified

1. Append percept to percepts
2. action = Lookup(percepts, table)
3. Return action
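For concreteness, here is a minimal Python sketch of the same program; the vacuum-world percepts and the table contents are hypothetical examples, not from the slides:

# A minimal sketch of Table-Driven-Agent (illustrative only). The table must
# enumerate every possible percept sequence, which is what makes this design
# infeasible in practice.
def make_table_driven_agent(table):
    percepts = []                      # the percept sequence seen so far

    def agent(percept):
        percepts.append(percept)
        # Look up the action indexed by the entire percept sequence.
        return table.get(tuple(percepts))

    return agent

# Hypothetical vacuum-world table covering a few short percept sequences.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck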


Structure of Agents

Although Table-Driven-Agent is not feasible (the required table would be impossibly large), it provides a basis for understanding how well other possible programs implement the agent function.

Goal is to produce rational behavior from a smallish program rather than vast tables, analogous to replacing math tables with calculator algorithms.


Structure of Agents

Basic kinds of agents:
    Simple reflex agents
    Model-based reflex agents
    Goal-based agents
    Utility-based agents

The differences are in how much internal state is stored and how much processing is done to decide what the resulting action is.


Simple-Reflex-Agent Program

Receives: percept
Returns: action
Static data:
    rules, a set of condition-action rules

1. state = InterpretInput(percept)
2. rule = RuleMatch(state, rules)
3. action = rule.Action
4. Return action
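A minimal Python sketch of the same program, again using the vacuum world as an assumed example:

# Simple reflex agent: condition-action rules keyed on the interpreted state.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def interpret_input(percept):
    # Here the percept is already a (location, status) pair, so the
    # interpreted state is just the percept itself.
    location, status = percept
    return (location, status)

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # InterpretInput
    return RULES[state]                # RuleMatch followed by rule.Action

print(simple_reflex_agent(("A", "Dirty")))   # Suck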


Model-Based Reflex Agent

Handles partial observability by keeping track of what has been seen. Maintains an internal state that depends on the percept history.

Requires two kinds of knowledge to be encoded in the agent program:
    How the world evolves independently of the agent
    How the agent's actions affect the world

Together this knowledge forms a model of the world.


Model-Based-Reflex-Agent Program

Receives: percept
Returns: action
Static data:
    state, current world state
    model, how next state is related to current state and action
    rules, a set of condition-action rules
    action, most recent action, initially none

1. state = UpdateState(state, action, percept, model)
2. rule = RuleMatch(state, rules)
3. action = rule.Action
4. Return action
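A minimal Python sketch of the same program; here the update_state function stands in for the model, and the vacuum-world details are assumed for illustration:

def make_model_based_agent(rules, update_state):
    state = {}           # the agent's best guess at the world state
    action = None        # most recent action, initially none

    def agent(percept):
        nonlocal state, action
        state = update_state(state, action, percept)   # apply the model
        action = rules(state)                          # RuleMatch + rule.Action
        return action

    return agent

# Hypothetical model: remember the status of each vacuum-world square seen so far.
def update_state(state, action, percept):
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    new_state["at"] = location
    return new_state

def rules(state):
    if state[state["at"]] == "Dirty":
        return "Suck"
    return "Right" if state["at"] == "A" else "Left"

agent = make_model_based_agent(rules, update_state)
print(agent(("A", "Dirty")))   # Suck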


Model-Based Reflex Agent

Issues:
    Even with a model, the agent cannot determine the exact state of a partially observable environment; its internal state is a "best guess".
    The model does not have to be literal.


Goal-Based Agents

Knowing the current state of the environment is not always enough to decide what to do next. The correct decision often depends on what state the agent is trying to get to.

Add one or more goals that describe situations that are desirable. May require searching and planning to determine correct action.


Model-Based, Goal-Based Agent

Though more complicated, making information about the future explicit makes the agent more flexible.


Utility-Based Agents

Goals are not enough to produce high-quality behavior. Often many action sequences will result in achieving a goal, but some are better than others.

Goals only distinguish between "happy" and "unhappy" states. In making decisions, the agent wants to know exactly how "happy" a state would make it. The term utility is used to sound more scientific.

Utility function is an internalization of the performance measure used to compare states.
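As a rough sketch (names assumed, not from the slides), such an agent can pick the action whose predicted resulting state it values most:

def choose_action(state, actions, result, utility):
    # utility maps a state to a number; pick the action whose predicted
    # successor state has the highest utility.
    return max(actions(state), key=lambda a: utility(result(state, a)))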


Utility-Based Agents

This is not the only way to be rational, but it provides more flexibility. E.g., it allows tradeoffs when there are conflicting goals or uncertainty in achieving goals.


Learning Agents

Learning is the preferred method for creating systems in many areas of AI. It allows the agent to operate in an initially unknown environment.

Four conceptual components (a sketch of how they fit together follows below):
    Learning element – responsible for making improvements
    Performance element – responsible for selecting actions (previous kinds of agents)
    Critic – provides feedback by comparing against an external performance standard
    Problem generator – suggests actions for exploration
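A structural sketch only, with every interface assumed; it shows one way the four components could interact on each step:

import random

def learning_agent_step(percept, performance_element, learning_element,
                        critic, problem_generator, explore_prob=0.1):
    # Critic scores the current situation against the external performance standard.
    feedback = critic(percept)
    # Learning element uses the feedback to improve the performance element.
    learning_element(performance_element, feedback)
    # Occasionally take an exploratory action suggested by the problem generator.
    if random.random() < explore_prob:
        return problem_generator(percept)
    # Otherwise let the performance element (an ordinary agent program) choose.
    return performance_element(percept)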


Learning Agents

The learning element can change any of the "knowledge" components. The goal is to bring the components into closer agreement with the available feedback, thus improving overall performance.


Problem-Solving Agents

A problem-solving agent is a goal-based agent that considers states of the world as atomic wholes with no internal structure visible to the problem-solving algorithm.

A problem-solving agent decides what to do by finding one or more sequences of actions that lead to goal states, then following the sequence to the solution.


Goal Formulation

Goal formulation is complex and depends on the current situation and the agent's performance measure. Textbook example postulates a tourist in Arad, Romania. There are many things it could want to accomplish.

Agent may determine it has a non-refundable ticket leaving from Bucharest tomorrow. If minimizing waste (money) is a performance measure, then agent may adopt goal of getting to Bucharest tomorrow.


Problem Formulation

A problem goal is the set of world states in which the agent's goal is satisfied. The agent's task is to determine actions that will reach a goal state.

Problem formulation is the process of determining the level of actions and states being considered. Abstraction is used to make problems tractable. For the Romanian tourist problem, consider actions at the level of driving from one major town to another.


Problem Formulation

The agent may be presented with multiple actions that do not lead directly to a goal state. In the case of an unknown environment, it can do no better than choosing an action at random.

For the Romanian tourist, perhaps it has a map of Romania. Now it can examine different possible sequences of future actions that lead to states of known value, then choose the best sequence to follow.


Problem Formulation

The environment for which this works is:
    observable - the agent knows which city it is in
    discrete - each city is connected to only a few other cities
    known - the map is accurate
    deterministic - if the agent chooses to drive to a city, it will end up in the chosen city

Under ideal conditions, a solution is a fixed sequence of actions.


Solution Search

The process of looking for a sequence of actions that reaches the goal is called search.

Once a solution is found, the agent carries out the recommended actions in the execution phase. When execution completes, agent formulates a new goal.

Note that during execution, agent ignores its percepts, since it knows in advance what the next action will be.


Simple-Problem-Solving-Agent Program

Receives: percept
Returns: action
Static data:
    seq, an action sequence, initially empty
    state, a description of the current world state
    goal, a goal, initially null
    problem, a problem formulation

1. state = UpdateState(state, percept)
2. if seq is empty then
   2.1 goal = FormulateGoal(state)
   2.2 problem = FormulateProblem(state, goal)
   2.3 seq = Search(problem)
   2.4 if seq = failure then return null action
3. action = First(seq); seq = Rest(seq)
4. Return action
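A minimal Python sketch of the same program; UpdateState, FormulateGoal, FormulateProblem, and Search are placeholders supplied by the caller:

def make_problem_solving_agent(update_state, formulate_goal,
                               formulate_problem, search):
    seq = []          # the remaining action sequence
    state = None      # current world state description

    def agent(percept):
        nonlocal seq, state
        state = update_state(state, percept)
        if not seq:
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem)
            if seq is None:                # search failed
                return None                # the "null action"
        action, seq = seq[0], seq[1:]      # First(seq) and Rest(seq)
        return action

    return agent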


Romanian-Map-Problem State Space

Assume transitions in both directions between each pair of connected cities.


Problem Definition

The initial state the agent starts in, e.g. In(Arad).

A description of the possible actions available to the agent: Actions(s) returns the set of actions that can be executed in state s, e.g. Actions(In(Arad)) = { Go(Sibiu), Go(Timisoara), Go(Zerind) }.

A description of what each action does, i.e. a transition model: Result(s, a) defines the successor state reachable by a single action, e.g. Result(In(Arad), Go(Zerind)) = In(Zerind).

Together these define the state space. (A sketch of this formulation in code follows below.)
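A sketch of this formulation in Python on a small subset of the textbook's Romania map (road distances are the textbook's; the encoding of states and actions is assumed):

ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Timisoara": {"Arad": 118, "Lugoj": 111},
}

INITIAL_STATE = "Arad"                 # In(Arad)

def actions(state):
    # Actions(In(city)) = { Go(neighbor) for each road out of city }
    return [("Go", city) for city in ROADS.get(state, {})]

def result(state, action):
    # Result(In(city), Go(next_city)) = In(next_city)
    _, next_city = action
    return next_city

print(actions("Arad"))                  # [('Go', 'Zerind'), ('Go', 'Sibiu'), ('Go', 'Timisoara')]
print(result("Arad", ("Go", "Zerind")))  # Zerind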


Problem Definition

A goal test that determines whether a given state is a goal state. It can be an explicit set of possible goal states, e.g. { In(Bucharest) }, or specified by an abstract property, e.g. "checkmate" in chess.

A path cost function that assigns a numerical cost to each path, reflecting the performance measure, e.g. the number of kilometers driven to Bucharest. Assume that the cost of a path is the sum of the costs of its individual actions.
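A sketch of these last two components (names assumed): a goal test as membership in an explicit set of goal states, and a path cost that sums the step costs along a path:

GOAL_STATES = {"Bucharest"}            # { In(Bucharest) }

def goal_test(state):
    return state in GOAL_STATES

def path_cost(path, step_cost):
    # path is a sequence of (state, action, next_state) steps
    return sum(step_cost(s, a, s2) for (s, a, s2) in path)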


Problem Definition

Step cost of taking action a in state s to state s' is denoted c(s, a, s'). E.g., for Romanian tourist, step costs are route distances on the map. Assume step costs are non-negative.

Optimal solution is the one with the lowest path cost.
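For example, comparing two routes from Arad to Bucharest, using the textbook's road distances as the step costs c(s, a, s'):

DIST = {
    ("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99, ("Fagaras", "Bucharest"): 211,
    ("Sibiu", "Rimnicu Vilcea"): 80, ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Pitesti", "Bucharest"): 101,
}

def route_cost(route):
    # Sum the step costs between consecutive cities along the route.
    return sum(DIST[(a, b)] for a, b in zip(route, route[1:]))

via_fagaras = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
via_pitesti = ["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]
print(route_cost(via_fagaras))   # 450
print(route_cost(via_pitesti))   # 418, the lower-cost path of the two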