9/12/2016
ARTIFICIAL INTELLIGENCE (CSC9YE)
LECTURES 2 AND 3: PROBLEM SOLVING
BY SEARCH
Gabriela Ochoa
http://www.cs.stir.ac.uk/~goc/
OUTLINE
Problem solving by searching
Problem formulation
Example problems
Search algorithms (2 Lectures)
Uninformed search
Informed (heuristic) search
Artificial Intelligence: A Modern Approach
(Third edition) by Stuart Russell and Peter Norvig http://aima.cs.berkeley.edu/
PRELIMINARIES
Definition of search (Cambridge Dictionary):
• to look somewhere carefully in order to find something
• to try to find the answer to a problem
Meanings of search in computing science (next slide)
More Robot Demos
Asimo, developed by Honda Research and Development:
http://world.honda.com/ASIMO/technology/2011/intelligence/index.html
https://www.youtube.com/watch?v=JlRPICfnmhw
Boston Dynamics:
http://www.bostondynamics.com/index.html
https://www.youtube.com/user/BostonDynamics
SEARCH IN COMPUTING SCIENCE
1. Search for stored data
• Finding information stored on disc or in memory
• Examples: sequential search, binary search
2. Search for web documents
• Finding information on the world wide web
• Results are presented as a list of results
3. Search for paths or routes
• Finding a set of actions that will bring us from an initial state to a goal state
• Relevant to AI
• Examples: depth-first search, breadth-first search, branch and bound, A*, Monte Carlo tree search
4. Search for solutions
• Finding a solution in a large space of candidate solutions
• Relevant to AI, optimisation, and OR
• Examples: evolutionary algorithms, tabu search, simulated annealing, ant colony optimisation, etc.
There are at least 4 meanings of the word search in CS
EXAMPLES OF SEARCH PROBLEMS
A robot vehicle searches for a route to a given destination.
An automated air traffic controller searches for a safe landing sequence for a set of incoming planes.
In games of strategy, such as chess or checkers, we search for a sequence of moves to beat the opponent.
More generally: trying to find a particular object among a large number of such objects.
Search problems are common in AI, notably in planning and learning.
SOLVING PROBLEMS BY SEARCHING
Problem-solving agents decide what to do by finding
sequences of actions that lead to desirable states
What is a problem and what is a solution?
Problem: a goal and a set of means to achieve it
Solution: a sequence of actions to achieve that goal
Given a precise definition of a problem, it is possible to construct a search process for finding solutions
EXAMPLE: PAC-MAN
An arcade game developed by Namco
First released in Japan in May 1980
Google Doodle PAC-MAN
e-chalk version in Macromedia (PAC-MAN)
Artificial Intelligence Problem:
Develop an automatic player for PAC-MAN
Formulate it as a Search Problem
SEARCH PROBLEMS
A search problem consists of:
A state space
A successor function (with actions and costs)
A start state and a goal test
A solution is a sequence of actions (a plan) that transforms the start state into a goal state
(Figure: successors of a Pac-Man state, e.g. action “N” with cost 1.0 and action “E” with cost 1.0)
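The three components above can be sketched in code. This is a minimal illustration, not an implementation from the lecture: the class name, the grid layout, and the wall positions are all hypothetical.

```python
class GridPathProblem:
    """Pathing in a Pac-Man-style grid: states are (x, y) cells,
    actions are compass moves N/S/E/W, each with cost 1.0."""
    MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    def __init__(self, start, goal, walls):
        self.start, self.goal, self.walls = start, goal, set(walls)

    def is_goal(self, state):
        """Goal test: have we reached the target cell?"""
        return state == self.goal

    def successors(self, state):
        """Successor function: yield (action, next_state, cost) triples,
        skipping moves that run into a wall."""
        x, y = state
        for action, (dx, dy) in self.MOVES.items():
            nxt = (x + dx, y + dy)
            if nxt not in self.walls:
                yield action, nxt, 1.0

# A wall at (1, 0) blocks the direct "E" move from the start.
problem = GridPathProblem(start=(0, 0), goal=(2, 0), walls=[(1, 0)])
moves = {a for a, s, c in problem.successors((0, 0))}
```

A solution would then be any action sequence whose successive states lead from `start` to a state passing `is_goal`.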
WHAT’S IN A STATE SPACE?
Problem: Pathing
States: (x, y) location
Actions: N, S, E, W
Successor: update location only
Goal test: is (x, y) = END?
Problem: Eat-All-Dots
States: {(x, y), dot booleans}
Actions: N, S, E, W
Successor: update location and possibly a dot boolean
Goal test: all dot booleans false
The world state includes every last detail of the environment
A search state keeps only the details needed for planning (abstraction)
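The two formulations differ enormously in state-space size. A quick back-of-the-envelope calculation, assuming a hypothetical 10 × 12 maze with 30 dots (illustrative numbers, not from the slides):

```python
# Rough state-space sizes for the two Pac-Man formulations.
cells = 10 * 12                        # hypothetical maze: 120 cells
pathing_states = cells                 # Pathing: just the agent's (x, y)
eat_all_dots_states = cells * 2 ** 30  # Eat-All-Dots: (x, y) plus one boolean per dot
```

This is why a search state should keep only the details needed for planning: every extra boolean doubles the space.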
EXAMPLE: ROMANIA
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest
Formulate goal:
be in Bucharest
Formulate problem:
states: various cities
actions: drive between cities
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
EXAMPLE: ROMANIA
(Figure: Google Map of Romania)
ABSTRACTION: SELECTING A STATE SPACE
The real world is absurdly complex, so the state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions, e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem
https://www.google.co.uk/maps/@46.5687604,24.8308311,7.42z
PROBLEM FORMULATION
More formally, a problem is defined by 5 components:
1. Initial state where the agent starts: e.g., "at Arad"
2. Actions available to the agent: e.g., Arad → Zerind, Arad → Sibiu, etc.
3. Transition model: a description of what each action does, i.e., the state that results from doing action a in state s.
4. Goal test: determines whether a given state is a goal state; explicit, e.g., x = "at Bucharest", or implicit, e.g., Checkmate(x)
5. Path cost: an (additive) function that assigns a numeric cost to each path, reflecting the agent's performance measure; e.g., sum of distances, number of actions executed, etc.
c(x, a, y) is the step cost, assumed to be ≥ 0
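The five components can be written out for the Romania route-finding problem. The sketch below covers only a fragment of the map; the road distances are the standard figures from Russell & Norvig.

```python
# Road distances (km) for a fragment of the Romania map.
ROADS = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
    ("Fagaras", "Bucharest"): 211,
}

initial_state = "Arad"                                   # 1. initial state

def actions(state):                                      # 2. actions
    """Drivable neighbours of a city (roads are two-way)."""
    return [b for a, b in ROADS if a == state] + [a for a, b in ROADS if b == state]

def result(state, action):                               # 3. transition model
    """Driving towards a city puts the agent in that city."""
    return action

def goal_test(state):                                    # 4. goal test
    return state == "Bucharest"

def step_cost(state, action, next_state):                # 5. path cost, per step
    return ROADS.get((state, next_state), ROADS.get((next_state, state)))

# Path cost of the example solution Arad, Sibiu, Fagaras, Bucharest:
path = ["Arad", "Sibiu", "Fagaras", "Bucharest"]
path_cost = sum(step_cost(a, b, b) for a, b in zip(path, path[1:]))
```

The path cost here is 140 + 99 + 211 = 450 km; as the A* example later shows, this is not the cheapest route.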
OTHER EXAMPLES
• Toy problems
• Real-world problems
THE VACUUM WORLD
States: determined by both the agent location and the dirt locations. Question: how many states?
Initial state: any state can be designated as the initial state.
Actions: each state has 3 actions: Left, Right, and Suck
Transition model: the effects of the actions, i.e., transitions among states. Some actions have no effect, e.g., sucking in a clean square.
Goal test: checks whether all the squares are clean.
Path cost: each step costs 1, so the path cost is the number of steps in the path.
THE VACUUM WORLD
Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable cleaning, and it never gets any dirtier
EXAMPLE: THE 8-PUZZLE
• States: location of each tile and the blank space
• Actions: move the blank left, right, up, or down
• Transition model: given a state and an action, returns the resulting state
• Goal test: checks whether the state matches the goal
• Path cost: 1 per move, so the path cost is the number of steps
Solution: sequence of actions (moves) from the start state to the goal state
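The actions and transition model above can be sketched directly. In this illustration a state is a tuple of 9 tiles read row by row, with 0 standing for the blank; the goal ordering is one common convention, not fixed by the slides.

```python
def actions(state):
    """Legal blank moves from this state."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    return moves

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    delta = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}[action]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
```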
SEARCHING FOR SOLUTIONS
• Having formulated some problems, we now need to solve them.
• A solution is an action sequence, so search algorithms work by considering various possible action sequences.
TREE SEARCH ALGORITHMS
The possible action sequences starting at the initial state form a tree with the initial state at the root;
Basic idea: simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
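The basic idea can be sketched as a generic loop: keep a frontier of candidate nodes, repeatedly take one out and expand it, and stop when a goal is found. This minimal version stores nodes as (state, path) pairs; the tiny test graph is hypothetical.

```python
from collections import deque

def tree_search(start, goal_test, successors):
    """Generic tree search. `successors(state)` yields (action, next_state)
    pairs. A FIFO frontier is used here, giving breadth-first order."""
    frontier = deque([(start, [])])
    while frontier:
        state, path = frontier.popleft()       # choose a node to expand
        if goal_test(state):
            return path                        # sequence of actions to the goal
        for action, nxt in successors(state):  # expand: generate successors
            frontier.append((nxt, path + [action]))
    return None                                # frontier exhausted: failure

# Tiny hypothetical example: a 4-state tree rooted at "A".
graph = {"A": [("go-B", "B"), ("go-C", "C")], "B": [("go-D", "D")], "C": [], "D": []}
plan = tree_search("A", lambda s: s == "D", lambda s: graph.get(s, []))
```

Note that, as a pure tree search, this can loop forever on state spaces with cycles; later slides address that.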
(Figure: a search tree. Shaded nodes have been expanded; outlined nodes have been generated but not yet expanded)
IMPLEMENTATION: STATES VS. NODES
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree; it includes a state, parent node, action, path cost g(x), and depth
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Data structure for nodes:
• n.State: the state to which the node corresponds
• n.Parent: the parent node
• n.Action: the action applied to the parent to generate the node
• n.Path-Cost: the cost of the path from the initial state to the node
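The node fields listed above translate directly into a small class; this is one possible sketch of the node structure and the Expand function, with field names adapted to Python conventions.

```python
class Node:
    """A search-tree node: state, parent link, generating action,
    path cost from the root, and depth."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost
        self.depth = 0 if parent is None else parent.depth + 1

def expand(node, successors):
    """Expand: create child nodes. `successors(state)` yields
    (action, next_state, step_cost) triples."""
    return [Node(s, node, a, node.path_cost + c)
            for a, s, c in successors(node.state)]

def solution(node):
    """Recover the action sequence by following parent links back to the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```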
SEARCH STRATEGIES
A search strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
UNINFORMED SEARCH STRATEGIES
Uninformed (also called blind) search strategies use only the information available in the problem definition:
Breadth-first search
Depth-first search
Depth-limited search
Iterative deepening search
BREADTH-FIRST SEARCH
Expand the shallowest unexpanded node
Implementation: the frontier is a FIFO queue, i.e., new successors go at the end
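The FIFO-frontier idea sketched above, written out in Python. This version also keeps an explored set so it stays safe on state spaces with cycles; the test graph is hypothetical.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS: the frontier is a FIFO queue, so the shallowest node is
    always expanded next. `successors(state)` yields (action, next_state)."""
    if goal_test(start):
        return []
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()            # FIFO: shallowest first
        for action, nxt in successors(state):
            if nxt not in explored:
                if goal_test(nxt):                  # goal check on generation
                    return path + [action]
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None
```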
PROPERTIES OF BREADTH-FIRST SEARCH
Complete? Yes (if b is finite)
Time? 1 + b + b² + b³ + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
b: maximum branching factor of the search tree; d: depth of the least-cost solution; m: maximum depth of the state space (may be ∞)
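The geometric sum above grows quickly. Evaluating it for a hypothetical b = 10, d = 5 (illustrative values, not from the slides):

```python
# Nodes generated by BFS: 1 + b + b^2 + ... + b^d, plus the b(b^d - 1)
# extra nodes generated while expanding the goal level.
b, d = 10, 5
generated = sum(b ** i for i in range(d + 1)) + b * (b ** d - 1)
# About 1.1 million nodes, dominated by the b^(d+1) term -- and BFS
# must keep essentially all of them in memory at once.
```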
DEPTH-FIRST SEARCH
Expand the deepest unexpanded node
Implementation: the frontier is a LIFO queue (a stack), i.e., successors go at the front
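A LIFO frontier means the most recently generated node is expanded first, which a recursive formulation expresses naturally (the call stack plays the role of the frontier). A sketch, with a visited set to avoid looping; the test graph is hypothetical.

```python
def depth_first_search(start, goal_test, successors, visited=None):
    """Recursive DFS: always descend into the deepest unexpanded node.
    `visited` guards against revisiting states along the way."""
    if visited is None:
        visited = {start}
    if goal_test(start):
        return []
    for action, nxt in successors(start):
        if nxt not in visited:
            visited.add(nxt)
            sub = depth_first_search(nxt, goal_test, successors, visited)
            if sub is not None:
                return [action] + sub        # prepend the action taken
    return None
```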
PROPERTIES OF DEPTH-FIRST SEARCH
Complete? No: fails in infinite-depth spaces and spaces with loops
Can be modified to avoid repeated states along a path
Complete in finite spaces
Time? O(b^m)
Space? O(bm) (linear space!)
Optimal? No
b: maximum branching factor of the search tree; d: depth of the least-cost solution; m: maximum depth of the state space (may be ∞)
DEPTH-LIMITED SEARCH
Avoids the problems of DFS by imposing a maximum depth l on a path
Can be implemented with special algorithms, or with the general search algorithm using operators that keep track of the path depth
Example, Romania: l = 20 (# cities)
Complete (if l ≥ d) but not optimal
DEPTH-LIMITED SEARCH
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
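The pseudocode figure is missing from this transcript; the following is one possible sketch of the recursive implementation. Following the AIMA convention, it distinguishes outright failure (None) from hitting the depth limit (the sentinel "cutoff"), so the caller can tell whether a deeper search might still succeed.

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Recursive depth-limited search. Returns an action list on success,
    None on failure, or "cutoff" if the depth limit was reached."""
    if goal_test(state):
        return []
    if limit == 0:
        return "cutoff"                 # depth limit reached: stop descending
    cutoff_occurred = False
    for action, nxt in successors(state):
        sub = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if sub == "cutoff":
            cutoff_occurred = True
        elif sub is not None:
            return [action] + sub
    return "cutoff" if cutoff_occurred else None
```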
ITERATIVE DEEPENING SEARCH
• Sidesteps the issue of choosing the depth limit
• Tries all possible depth limits: 0, 1, 2, etc.
• Combines the benefits of DFS and BFS
• Optimal and complete, like BFS
• Modest memory requirements, like DFS
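The steps above amount to a simple wrapper: run depth-limited search with limits 0, 1, 2, … until the result is something other than a cutoff. A self-contained sketch (the inner `dls` mirrors the recursive depth-limited search; the test graph is hypothetical):

```python
from itertools import count

def dls(state, goal_test, successors, limit):
    """Depth-limited DFS; returns an action list, None, or "cutoff"."""
    if goal_test(state):
        return []
    if limit == 0:
        return "cutoff"
    cutoff = False
    for action, nxt in successors(state):
        sub = dls(nxt, goal_test, successors, limit - 1)
        if sub == "cutoff":
            cutoff = True
        elif sub is not None:
            return [action] + sub
    return "cutoff" if cutoff else None

def iterative_deepening_search(start, goal_test, successors):
    """Try depth limits 0, 1, 2, ... until the search no longer cuts off."""
    for limit in count():
        result = dls(start, goal_test, successors, limit)
        if result != "cutoff":
            return result
```

Re-expanding the shallow levels on every iteration looks wasteful, but most nodes of a tree live at the deepest level, so the total work stays within a constant factor of a single depth-d search.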
ITERATIVE DEEPENING SEARCH
(Figures: the search tree explored with depth limits l = 0, 1, 2 and 3)
SUMMARY
Problem formulation requires abstracting away real-world details to define a state space that can be explored
There are several uninformed search strategies
Iterative deepening search uses only linear space, and not much more time than the other strategies
COMPARING SEARCH STRATEGIES
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree; d: depth of the least-cost solution; m: maximum depth of the state space (may be ∞)
INFORMED SEARCH
Uninformed search strategies can find solutions to problems by systematically generating new states and testing them against the goal
These strategies are inefficient in most cases
Informed search strategies use problem-specific knowledge
Knowledge is given by an evaluation function that returns a number describing the desirability (or lack thereof) of expanding a node
BEST-FIRST SEARCH
Idea: use an evaluation function for each node, an estimate of "desirability"
⇒ Expand the most desirable unexpanded node
Implementation: the frontier is a queue sorted in decreasing order of desirability
Special cases:
greedy search: expand the node that appears closest to the goal
A∗ search: expand the node on the least-cost solution path
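A sorted frontier is usually implemented as a priority queue. In this sketch the evaluation function `f` is a parameter: passing an estimated distance to the goal gives greedy search, passing g + h gives A*. The test graph and heuristic values are hypothetical.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Best-first search: the frontier is a priority queue ordered by f,
    so the most desirable (lowest-f) node is always expanded next."""
    frontier = [(f(start), start, [])]
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)    # most desirable first
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action, nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (f(nxt), nxt, path + [action]))
    return None

# Greedy search on a tiny hypothetical graph: f is just a heuristic h.
graph = {"A": [("AB", "B"), ("AC", "C")], "B": [("BD", "D")], "C": [("CD", "D")], "D": []}
h = {"A": 2, "B": 1, "C": 3, "D": 0}
plan = best_first_search("A", lambda s: s == "D", lambda s: graph.get(s, []),
                         lambda s: h[s])
```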
ROMANIA WITH STEP COSTS IN KM
(Figure: the Romania road map, with road distances in km)
GREEDY SEARCH EXAMPLE
(Figures: greedy best-first expansion on the Romania map)
A* SEARCH: MINIMISING THE TOTAL ESTIMATED SOLUTION COST
The most widely known form of best-first search
Node evaluation combines:
g(n): the path cost from the start node to node n
h(n): the estimated cost to get from node n to the goal
f(n) = g(n) + h(n)
f(n) is the estimated cost of the cheapest solution through n
It makes sense to try first the node with the lowest f(n)
This strategy is more than just reasonable: provided that the heuristic function h(n) satisfies certain conditions, A∗ search is both complete and optimal.
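A* with f(n) = g(n) + h(n) can be sketched on a fragment of the Romania map. The road lengths and the straight-line-distance heuristic values are the standard figures from Russell & Norvig; the implementation itself is an illustrative sketch.

```python
import heapq

# Road distances (km) for a fragment of the Romania map.
ROADS = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
    ("Fagaras", "Bucharest"): 211,
}
# h(n): straight-line distance to Bucharest.
H = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def neighbours(city):
    for (a, b), cost in ROADS.items():
        if a == city:
            yield b, cost
        elif b == city:
            yield a, cost

def a_star(start, goal):
    """Expand nodes in order of f = g + h; keep the best g found per city."""
    frontier = [(H[start], 0, start, [start])]    # (f, g, city, path)
    best_g = {start: 0}
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        for nxt, cost in neighbours(city):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

path, cost = a_star("Arad", "Bucharest")
```

On this fragment A* finds Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest at cost 418 km, cheaper than the 450 km route through Fagaras that greedy search prefers.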
A* SEARCH EXAMPLE
(Figures: step-by-step A* expansion on the Romania map)
SUMMARY: HEURISTIC SEARCH
Best-first search: GENERAL-SEARCH where minimum-cost nodes (according to some measure) are expanded first.
Greedy search:
Expands the node that appears closest to the goal
More efficient (in time) than uninformed algorithms
Not optimal and not complete
A* search:
Expands the node on the least-cost solution path
Complete, optimal, and the most efficient among all optimal algorithms, though the state space can still be prohibitively large