Chap3_Heuristic Search Technique
8/8/2019 Chap3_Heuristic Search Technique
Mahesh Maurya, NMIMS
` A framework for describing search methods is provided and several general-purpose search techniques are discussed.
` All are varieties of Heuristic Search:
Generate and test
Hill Climbing
Best First Search
Problem Reduction
Constraint Satisfaction
Means-ends analysis
` Algorithm:
1. Generate a possible solution. For some problems, this means generating a particular point in the problem space. For others it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise return to step 1.
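The three numbered steps can be sketched as a generic loop. This is a minimal illustration rather than the text's own code; the integer candidate range and the squaring goal test below are hypothetical stand-ins for a real problem's generator and tester.

```python
def generate_and_test(candidates, is_goal):
    """Generate-and-test: propose candidates one by one (step 1),
    test each against the goal (step 2), quit on success (step 3)."""
    for candidate in candidates:
        if is_goal(candidate):
            return candidate
    return None  # generator exhausted without finding a solution

# Toy problem: find a non-negative integer whose square is 144.
solution = generate_and_test(range(100), lambda x: x * x == 144)
print(solution)  # 12
```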
` Is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space.
` The test function is augmented with a heuristic function that provides an estimate of how close a given state is to the goal state.
` Computation of the heuristic function can be done with a negligible amount of computation.
` Hill climbing is often used when a good heuristic function is available for evaluating states but when no other useful knowledge is available.
` Algorithm:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the current state:
a. Select an operator that has not yet been applied to the current state and apply it to produce a new state
b. Evaluate the new state
i. If it is the goal state, then return it and quit.
ii. If it is not a goal state but it is better than the current state, then make it the current state.
iii. If it is not better than the current state, then continue in the loop.
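The algorithm above can be sketched as follows. This is a hedged illustration for a maximization problem; the toy landscape value(x) = -(x-7)^2 and the +/-1 successor operators are assumptions, not from the text.

```python
def simple_hill_climbing(initial, successors, value, is_goal):
    """Take the FIRST successor that is better than the current state
    (step 2b.ii); stop when no operator yields an improvement."""
    current = initial
    if is_goal(current):
        return current
    while True:
        moved = False
        for new in successors(current):
            if is_goal(new):
                return new
            if value(new) > value(current):  # first better state becomes current
                current = new
                moved = True
                break
        if not moved:  # no operator improves the current state: quit
            return current

# Toy landscape: maximize value(x) = -(x - 7)**2 over the integers.
peak = simple_hill_climbing(0, lambda x: [x + 1, x - 1],
                            lambda x: -(x - 7)**2, lambda x: False)
print(peak)  # 7
```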
` The key difference between Simple Hill Climbing and Generate-and-Test is the use of an evaluation function as a way to inject task-specific knowledge into the control process.
` Is one state better than another? For this algorithm to work, a precise definition of better must be provided.
` It is a depth-first search procedure, since complete solutions must be generated before they can be tested.
` In its most systematic form, it is simply an exhaustive search of the problem space.
` Operates by generating solutions randomly.
` Also called the British Museum algorithm: if a sufficient number of monkeys were placed in front of a set of typewriters, and left alone long enough, then they would eventually produce all the works of Shakespeare.
` Dendral: which infers the structure of organic compounds using NMR spectrograms. It uses plan-generate-test.
` This is a variation of simple hill climbing which
considers all the moves from the current state and
selects the best one as the next state.
` Also known as Gradient search
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
a. Let SUCC be a state such that any possible successor of the current state will be better than SUCC
b. For each operator that applies to the current state do:
i. Apply the operator and generate a new state
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
c. If SUCC is better than the current state, then set the current state to SUCC.
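A sketch of the steepest-ascent variant: all moves are examined and only the single best one (SUCC) is taken. The toy landscape below is an assumption for demonstration, not from the text.

```python
def steepest_ascent(initial, successors, value, is_goal):
    """Examine ALL moves from the current state; move to the best one only
    if it improves on the current state."""
    current = initial
    if is_goal(current):
        return current
    while True:
        succ = None  # SUCC: best successor seen in this iteration
        for new in successors(current):
            if is_goal(new):
                return new
            if succ is None or value(new) > value(succ):
                succ = new
        if succ is not None and value(succ) > value(current):
            current = succ  # step 2c: take the single best move
        else:
            return current  # a complete iteration produced no change

# Toy landscape: maximize value(x) = -(x - 7)**2 over the integers.
peak = steepest_ascent(0, lambda x: [x + 1, x - 1],
                       lambda x: -(x - 7)**2, lambda x: False)
print(peak)  # 7
```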
This simple policy has three well-known drawbacks:
1. Local Maxima: a local maximum as opposed to a global maximum.
2. Plateaus: an area of the search space where the evaluation function is flat, thus requiring a random walk.
3. Ridge: where there are steep slopes and the search direction is not towards the top but towards the side.
Figure 5.9 Local maxima, Plateaus and
ridge situation for Hill Climbing
` In each of the previous cases (local maxima, plateaus & ridge),
the algorithm reaches a point at which no progress is being
made.
` A solution is to do a random-restart hill-climbing - where
random initial states are generated, running each until it halts
or makes no discernible progress. The best result is then
chosen.
Figure 5.10 Random-restart hill-climbing (6 initial values) for 5.9(a)
` An alternative to a random-restart hill-climbing when stuck on a
local maximum is to do a reverse walk to escape the local
maximum.
` This is the idea of simulated annealing.
` The term simulated annealing derives from the roughly analogous
physical process of heating and then slowly cooling a
substance to obtain a strong crystalline structure.
` The simulated annealing process lowers the temperature by slow stages until the system "freezes" and no further changes occur.
Figure 5.11 Simulated Annealing Demo (http://www.taygeta.com/annealing/demo1.html)
` Probability of a transition to a higher energy state is given by the function: p = e^(-ΔE/kT)
where ΔE is the positive change in the energy level
T is the temperature
k is Boltzmann's constant.
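The formula can be evaluated directly. The sketch below assumes k = 1 (a common simplification) and shows how the same uphill move becomes far less likely as the temperature drops.

```python
import math
import random

def acceptance_probability(delta_e, temperature, k=1.0):
    """p = e^(-ΔE/kT): probability of moving to a higher-energy (worse) state."""
    return math.exp(-delta_e / (k * temperature))

def accept_worse_move(delta_e, temperature, rng=random):
    # Usual implementation: draw a number in [0, 1) and accept if it is below p.
    return rng.random() < acceptance_probability(delta_e, temperature)

# The same bad move (ΔE = 1) is likely at high T and very unlikely at low T.
print(acceptance_probability(1.0, 10.0))  # ≈ 0.905
print(acceptance_probability(1.0, 0.1))   # ≈ 0.0000454
```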
` The algorithm for simulated annealing is slightly different from the simple hill-climbing procedure. The three differences are:
The annealing schedule must be maintained
Moves to worse states may be accepted
It is a good idea to maintain, in addition to the current state, the best state found so far.
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the current state.
a. Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b. Evaluate the new state. Compute:
ΔE = (value of current) - (value of new state)
x If the new state is a goal state, then return it and quit.
x If it is not a goal state but is better than the current state, then make it the current state. Also set BEST-SO-FAR to this new state.
x If it is not better than the current state, then make it the current state with probability p as defined above. This step is usually implemented by invoking a random number generator to produce a number in the range [0, 1]. If the number is less than p, then the move is accepted. Otherwise, do nothing.
c. Revise T as necessary according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.
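The procedure above can be sketched as a loop for a maximization problem. The geometric cooling schedule and the toy landscape are assumptions for illustration; the operator selection is simplified to a random choice among successors.

```python
import math
import random

def simulated_annealing(initial, successors, value, schedule, seed=0):
    """Sketch of the procedure above (maximization); tracks BEST-SO-FAR."""
    rng = random.Random(seed)
    current = initial
    best_so_far = current                      # step 2
    for t in schedule:                         # steps 3 and 4c: T follows the schedule
        new = rng.choice(successors(current))  # step 4a (operator choice simplified)
        delta_e = value(current) - value(new)  # step 4b: ΔE = value(current) - value(new)
        if delta_e < 0:                        # the new state is better
            current = new
            if value(current) > value(best_so_far):
                best_so_far = current
        elif rng.random() < math.exp(-delta_e / t):
            current = new                      # worse move accepted with probability p
    return best_so_far                         # step 5

# Toy landscape: maximize -(x - 7)**2; geometric cooling schedule (assumed).
schedule = [10.0 * 0.95**i for i in range(500)]
best = simulated_annealing(0, lambda x: [x - 1, x + 1],
                           lambda x: -(x - 7)**2, schedule)
print(best)
```

Because worse moves can be accepted while T is high, the final state may wander, but BEST-SO-FAR is never worse than the initial state.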
` It is necessary to select an annealing schedule, which has three components:
Initial value to be used for temperature
Criteria that will be used to decide when the temperature will be reduced
Amount by which the temperature will be reduced.
` Combines the advantages of both DFS and BFS into a single method.
` DFS is good because it allows a solution to be found without all competing branches having to be expanded.
` BFS is good because it does not get trapped on dead-end paths.
` One way of combining the two is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one does.
` At each step of the best-first search process, we select the most promising of the nodes we have generated so far.
` This is done by applying an appropriate heuristic function to each of them.
` We then expand the chosen node by using the rules to generate its successors.
` Similar to steepest-ascent hill climbing, with two exceptions:
In hill climbing, one move is selected and all the others are rejected, never to be reconsidered. This produces the straight-line behaviour that is characteristic of hill climbing.
In best-first search, one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising. Further, the best available state is selected, even if that state has a value that is lower than the value of the state that was just explored. This contrasts with hill climbing, which will stop if there are no successor states with better values than the current state.
` It is sometimes important to search graphs so that duplicate paths will not be pursued.
` An algorithm to do this will operate by searching a directed graph in which each node represents a point in problem space.
` Each node will contain:
Description of the problem state it represents
Indication of how promising it is
Parent link that points back to the best node from which it came
List of nodes that were generated from it
` The parent link will make it possible to recover the path to the goal once the goal is found.
` The list of successors will make it possible, if a better path is found to an already existing node, to propagate the improvement down to its successors.
` This is called an OR-graph, since each of its branches represents an alternative problem-solving path.
` We need two lists of nodes:
OPEN - nodes that have been generated and have had the heuristic function applied to them but which have not yet been examined. OPEN is actually a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.
CLOSED - nodes that have already been examined. We need to keep these nodes in memory if we want to search a graph rather than a tree, since whenever a new node is generated, we need to check whether it has been generated before.
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a. Pick the best node on OPEN
b. Generate its successors
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
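The steps above can be sketched with a priority queue for OPEN. This is a simplified illustration: step c.ii (re-parenting when a better path to an existing node is found) is omitted, and the small graph and heuristic values are hypothetical.

```python
import heapq

def best_first_search(start, successors, h, is_goal):
    """OPEN is a priority queue keyed on the heuristic value;
    parent links recover the path once the goal is found."""
    open_list = [(h(start), start)]
    parent = {start: None}
    closed = set()
    while open_list:                               # step 2
        _, node = heapq.heappop(open_list)         # 2a: pick the best node on OPEN
        if is_goal(node):
            path = []
            while node is not None:                # follow parent links back
                path.append(node)
                node = parent[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for s in successors(node):                 # 2b: generate successors
            if s not in parent:                    # 2c.i: not generated before
                parent[s] = node
                heapq.heappush(open_list, (h(s), s))
    return None

# Hypothetical graph and heuristic values for illustration.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': ['G'], 'G': []}
hval = {'A': 3, 'B': 2, 'C': 1, 'D': 1, 'G': 0}
path = best_first_search('A', graph.get, hval.get, lambda n: n == 'G')
print(path)  # ['A', 'C', 'G']
```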
` It proceeds in steps, expanding one node at each step, until it generates a node that corresponds to a goal state.
` At each step, it picks the most promising of the nodes that have so far been generated but not expanded.
` It generates the successors of the chosen node, applies the heuristic function to them, and adds them to the list of open nodes, after checking to see if any of them have been generated before.
` By doing this check, we can guarantee that each node only appears once in the graph, although many nodes may point to it as a successor.
[Figure: best-first search worked example - Steps 1 to 5 of expanding a tree rooted at A, with a heuristic value beside each generated node; at every step the open node with the most promising value is expanded next.]
` Best-first search is a simplification of the A* algorithm
` Presented by Hart et al.
` The algorithm uses:
f: heuristic function that estimates the merits of each node we generate. This is the sum of two components, g and h, and f represents an estimate of the cost of getting from the initial state to a goal state along the path that generated the current node.
g: the function g is a measure of the cost of getting from the initial state to the current node.
h: the function h is an estimate of the additional cost of getting from the current node to a goal state.
1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h value to whatever it is, and its f value to h+0, or h. Set CLOSED to the empty list.
2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report failure. Otherwise pick the node on OPEN with the lowest f value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. See if BESTNODE is a goal state. If so, exit and report a solution. Otherwise, generate the successors of BESTNODE but do not set BESTNODE to point to them yet.
` For each SUCCESSOR, do the following:
a. Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to recover the path once a solution is found.
b. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR
c. See if SUCCESSOR is the same as any node on OPEN. If so, call that node OLD.
d. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors.
e. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors. Compute f(SUCCESSOR) = g(SUCCESSOR) + h(SUCCESSOR)
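A compact sketch of the A* bookkeeping described above. Instead of literal OPEN/CLOSED lists with re-parenting, this common simplification keeps the best-known g per node and skips stale heap entries; the small weighted graph and heuristic are hypothetical.

```python
import heapq

def a_star(start, neighbors, h, is_goal):
    """f = g + h; OPEN is a heap of (f, g, node); parent links recover the path."""
    open_heap = [(h(start), 0, start)]
    g_best = {start: 0}
    parent = {start: None}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)      # BESTNODE: lowest f on OPEN
        if g > g_best.get(node, float('inf')):
            continue                               # stale entry: a better path exists
        if is_goal(node):
            path = []
            while node is not None:                # recover path via back-pointers
                path.append(node)
                node = parent[node]
            return path[::-1], g
        for succ, cost in neighbors(node):
            g2 = g + cost                          # g(SUCCESSOR) = g(BESTNODE) + cost
            if g2 < g_best.get(succ, float('inf')):
                g_best[succ] = g2                  # better path: update g and parent
                parent[succ] = node
                heapq.heappush(open_heap, (g2 + h(succ), g2, succ))
    return None, float('inf')

# Hypothetical graph: the cheap route S-B-G beats the greedy-looking S-A-G.
edges = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
hval = {'S': 4, 'A': 5, 'B': 1, 'G': 0}
path, cost = a_star('S', edges.get, hval.get, lambda n: n == 'G')
print(path, cost)  # ['S', 'B', 'G'] 5
```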
` Role of the g function: this lets us choose which node to expand next on the basis not only of how good the node itself looks, but also of how good the path to the node was.
` h′ is an estimate of h, the distance of a node to the goal. If h′ is a perfect estimator of h, then A* will converge immediately to the goal with no search.
` If h′ rarely overestimates h by more than δ, then the A* algorithm will rarely find a solution whose cost is more than δ greater than the cost of the optimal solution.
` Under certain conditions, the A* algorithm can be shown to be optimal in that it generates the fewest nodes in the process of finding a solution to a problem.
` An agenda is a list of tasks a system could perform.
` Associated with each task there are usually two things:
A list of reasons why the task is being proposed (justification)
A rating representing the overall weight of evidence suggesting that the task would be useful.
1. Do until a goal state is reached or the agenda is empty:
a. Choose the most promising task from the agenda.
b. Execute the task by devoting to it the number of resources determined by its importance. The important resources to consider are time and space. Executing the task will probably generate additional tasks (successor nodes). For each of them do the following:
i. See if it is already on the agenda. If so, then see if this same reason for doing it is already on its list of justifications. If so, ignore this current evidence. If this justification was not already present, add it to the list. If the task was not on the agenda, insert it.
ii. Compute the new task's rating, combining the evidence from all its justifications. Not all justifications need have equal weight. It is often useful to associate with each justification a measure of how strong a reason it is. These measures are then combined at this step to produce an overall rating for the task.
Person: I don't want to read any more about China. Give me something else.
Computer: OK. What else are you interested in?
Person: How about Italy? I think I'd find Italy interesting.
Computer: What things about Italy are you interested in reading about?
Person: I think I'd like to start with its history.
Computer: Why don't you want to read any more about China?
` Mathematics discovery program developed by Lenat (1977, 1982).
` AM was given a small set of starting facts about number theory and a set of operators it could use to develop new ideas.
` These operators included such things as "Find examples of a concept you already know."
` AM's goal was to generate new interesting mathematical concepts.
` It succeeded in discovering such things as prime numbers and Goldbach's conjecture.
` AM used a task agenda.
` An AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved.
` One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution.
Goal: Acquire TV Set
Goal: Steal a TV Set Goal: Earn some money Goal: Buy TV Set
[Figure: two AND-OR graph examples rooted at A with successors B, C, D and estimated costs at the nodes, illustrating how the cost of an AND arc is computed from the combined costs of all its successors.]
function TREE-SEARCH(problem, fringe) returns a solution or failure
fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
loop do
if EMPTY?(fringe) then return failure
node ← REMOVE-FIRST(fringe)
if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

A strategy is defined by picking the order of node expansion
` General approach of informed search:
Best-first search: a node is selected for expansion based on an evaluation function f(n)
` Idea: the evaluation function measures distance to the goal. Choose the node which appears best
` Implementation:
fringe is a queue sorted in decreasing order of desirability.
Special cases: greedy search, A* search
` [dictionary] A rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood.
h(n) = estimated cost of the cheapest path from node n to a goal node.
If n is a goal then h(n) = 0
More information later.
` hSLD = straight-line distance heuristic.
` hSLD can NOT be computed from the problem description itself
` In this example f(n) = h(n)
Expand the node that is closest to the goal
= Greedy best-first search
Romania with step costs in km
` Assume that we want to use greedy search to solve the problem of travelling from Arad to Bucharest.
` The initial state = Arad (366)
` The first expansion step produces: Sibiu (253), Timisoara (329) and Zerind (374)
` Greedy best-first will select Sibiu.
` If Sibiu is expanded we get: Arad (366), Fagaras (176), Oradea (380) and Rimnicu Vilcea (193)
` Greedy best-first search will select: Fagaras
` If Fagaras is expanded we get: Sibiu (253) and Bucharest (0)
` Goal reached!! Yet not optimal (see Arad, Sibiu, Rimnicu Vilcea, Pitesti)
` Completeness: NO (cf. DF-search)
Check on repeated states
Minimizing h(n) can result in false starts, e.g. Iasi to Fagaras.
` Completeness: NO (cf. DF-search)
` Time complexity? O(b^m), cf. worst-case DF-search (with m the maximum depth of the search space)
A good heuristic can give dramatic improvement.
` Completeness: NO (cf. DF-search)
` Time complexity: O(b^m)
` Space complexity: O(b^m)
Keeps all nodes in memory
` Best-known form of best-first search.
` Idea: avoid expanding paths that are already expensive.
` Evaluation function f(n) = g(n) + h(n)
g(n): the cost (so far) to reach the node.
h(n): estimated cost to get from the node to the goal.
f(n): estimated total cost of the path through n to the goal.
` A* search uses an admissible heuristic
A heuristic is admissible if it never overestimates the cost to reach the goal
Admissible heuristics are optimistic
Formally: h(n) <= h*(n) where h*(n) is the true cost from n; also h(n) >= 0, so h(G) = 0 for any goal G.
e.g. hSLD(n) never overestimates the actual road distance
Algorithm A* (with any h on a search graph)
Input: a search graph problem with costs on the arcs
Output: the minimal cost path from the start node to a goal node.
1. Put the start node s on OPEN.
2. If OPEN is empty, exit with failure.
3. Remove from OPEN and place on CLOSED a node n having minimum f.
4. If n is a goal node, exit successfully with a solution path obtained by tracing back the pointers from n to s.
5. Otherwise, expand n, generating its children and directing pointers from each child node to n.
For every child node n' do:
evaluate h(n') and compute f(n') = g(n') + h(n') = g(n) + c(n,n') + h(n')
If n' is already on OPEN or CLOSED, compare its new f with the old f and attach the lowest f to n'.
Put n' with its f value in the right order in OPEN.
6. Go to step 2.
` Find Bucharest starting at Arad
f(Arad) = c(??,Arad) + h(Arad) = 0 + 366 = 366
` Expand Arad and determine f(n) for each node:
f(Sibiu) = c(Arad,Sibiu) + h(Sibiu) = 140 + 253 = 393
f(Timisoara) = c(Arad,Timisoara) + h(Timisoara) = 118 + 329 = 447
f(Zerind) = c(Arad,Zerind) + h(Zerind) = 75 + 374 = 449
` Best choice is Sibiu
` Expand Sibiu and determine f(n) for each node:
f(Arad) = c(Sibiu,Arad) + h(Arad) = 280 + 366 = 646
f(Fagaras) = c(Sibiu,Fagaras) + h(Fagaras) = 239 + 176 = 415
f(Oradea) = c(Sibiu,Oradea) + h(Oradea) = 291 + 380 = 671
f(Rimnicu Vilcea) = c(Sibiu,Rimnicu Vilcea) + h(Rimnicu Vilcea) = 220 + 193 = 413
` Best choice is Rimnicu Vilcea
` Expand Rimnicu Vilcea and determine f(n) for each node:
f(Craiova) = c(Rimnicu Vilcea,Craiova) + h(Craiova) = 366 + 160 = 526
f(Pitesti) = c(Rimnicu Vilcea,Pitesti) + h(Pitesti) = 317 + 100 = 417
f(Sibiu) = c(Rimnicu Vilcea,Sibiu) + h(Sibiu) = 300 + 253 = 553
` Best choice is Fagaras
` Expand Fagaras and determine f(n) for each node:
f(Sibiu) = c(Fagaras,Sibiu) + h(Sibiu) = 338 + 253 = 591
f(Bucharest) = c(Fagaras,Bucharest) + h(Bucharest) = 450 + 0 = 450
` Best choice is Pitesti !!!
` Expand Pitesti and determine f(n) for each node:
f(Bucharest) = c(Pitesti,Bucharest) + h(Bucharest) = 418 + 0 = 418
` Best choice is Bucharest !!!
Optimal solution (only if h(n) is admissible)
` Note the values along the optimal path !!
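The expansion sequence worked through on these slides can be checked mechanically. The sketch below runs A* on the relevant fragment of the Romania map with the hSLD values used here; the heap-based OPEN list with stale-entry skipping is an implementation choice, not part of the slides.

```python
import heapq

# Fragment of the Romania map used in the slides (distances in km).
edges = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151), ('Rimnicu Vilcea', 80)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101), ('Craiova', 138)],
    'Craiova': [('Rimnicu Vilcea', 146), ('Pitesti', 138)],
    'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
    'Oradea': [('Sibiu', 151)], 'Bucharest': [],
}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Oradea': 380, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

def a_star_route(start, goal):
    """A* with f = g + hSLD; each heap entry carries its path for simplicity."""
    open_heap = [(h_sld[start], 0, start, [start])]
    best_g = {}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if g >= best_g.get(node, float('inf')):
            continue                         # stale entry: a cheaper path was found
        best_g[node] = g
        for succ, cost in edges[node]:
            heapq.heappush(open_heap,
                           (g + cost + h_sld[succ], g + cost, succ, path + [succ]))
    return None, None

path, cost = a_star_route('Arad', 'Bucharest')
print(path, cost)  # the 418 km route via Sibiu, Rimnicu Vilcea and Pitesti
```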
` Suppose a suboptimal goal G2 is in the queue.
` Let n be an unexpanded node on a shortest path to an optimal goal G.
f(G2) = g(G2) since h(G2) = 0
> g(G) since G2 is suboptimal
>= f(n) since h is admissible
Since f(G2) > f(n), A* will never select G2 for expansion.
` Graph search discards new paths to a repeated state.
The previous proof breaks down.
` Solution:
Add extra bookkeeping, i.e. remove the more expensive of the two paths.
Ensure that the optimal path to any repeated state is always followed first.
x Extra requirement on h(n): consistency (monotonicity)
` A heuristic is consistent if h(n) <= c(n,a,n') + h(n')
` If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      >= g(n) + h(n)
      = f(n)
i.e. f(n) is nondecreasing along any path.
` A* expands nodes in order of increasing f value
` Contours can be drawn in state space
Uniform-cost search adds circles.
F-contours are gradually added: bands of nodes of increasing f value.
` Completeness: YES
Since bands of increasing f are added
Unless there are infinitely many nodes with f <= f(G)
` Completeness: YES
` Time complexity: Number of nodes expanded is still exponential in the
length of the solution.
` Completeness: YES
` Time complexity: (exponential with path length)
` Space complexity:
It keeps all generated nodes in memory
Hence space is the major problem, not time
` Completeness: YES
` Time complexity: (exponential with path length)
` Space complexity:(all nodes are stored)
` Optimality: YES
Cannot expand f(i+1) until f(i) is finished.
A* expands all nodes with f(n) < C*
A* expands some nodes with f(n) = C*
A* expands no nodes with f(n) > C*
Also optimally efficient (not including ties)
` Some solutions to A* space problems (maintain completeness and optimality):
Iterative-deepening A* (IDA*)
x Here the cutoff information is the f-cost (g+h) instead of the depth
Recursive best-first search (RBFS)
x Recursive algorithm that attempts to mimic standard best-first search with linear space.
(Simple) Memory-bounded A* ((S)MA*)
x Drop the worst leaf node when memory is full
function RECURSIVE-BEST-FIRST-SEARCH(problem) returns a solution or failure
return RBFS(problem, MAKE-NODE(INITIAL-STATE[problem]), ∞)

function RBFS(problem, node, f_limit) returns a solution or failure, and a new f-cost limit
if GOAL-TEST[problem](STATE[node]) then return node
successors ← EXPAND(node, problem)
if successors is empty then return failure, ∞
for each s in successors do
f[s] ← max(g(s) + h(s), f[node])
repeat
best ← the lowest f-value node in successors
if f[best] > f_limit then return failure, f[best]
alternative ← the second-lowest f-value among successors
result, f[best] ← RBFS(problem, best, min(f_limit, alternative))
if result ≠ failure then return result
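The pseudocode can be sketched in runnable form. This is a hedged illustration: there is no repeated-state check, so it assumes a tree or DAG, and the small weighted graph and heuristic values are hypothetical.

```python
import math

def rbfs(node, g, f_node, f_limit, neighbors, h, is_goal):
    """Returns (solution path or None, new backed-up f-cost limit)."""
    if is_goal(node):
        return [node], f_node
    # f[s] = max(g(s) + h(s), f[node]): children inherit the backed-up value
    succs = [[max(g + c + h(s), f_node), g + c, s] for s, c in neighbors(node)]
    if not succs:
        return None, math.inf
    while True:
        succs.sort(key=lambda e: e[0])
        best_f, best_g, best = succs[0]
        if best_f > f_limit:
            return None, best_f              # fail: report best f back to the parent
        alternative = succs[1][0] if len(succs) > 1 else math.inf
        result, succs[0][0] = rbfs(best, best_g, best_f,
                                   min(f_limit, alternative), neighbors, h, is_goal)
        if result is not None:
            return [node] + result, best_f

def recursive_best_first_search(start, neighbors, h, is_goal):
    result, _ = rbfs(start, 0, h(start), math.inf, neighbors, h, is_goal)
    return result

# Hypothetical weighted graph (same shape as a small route-finding problem).
edges = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
hval = {'S': 4, 'A': 5, 'B': 1, 'G': 0}
found = recursive_best_first_search('S', edges.get, hval.get, lambda n: n == 'G')
print(found)  # ['S', 'B', 'G']
```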
` Keeps track of the f-value of the best alternative path available.
If the current f-value exceeds this alternative f-value, then backtrack to the alternative path.
Upon backtracking, change the f-value to the best f-value of its children.
Re-expansion of this result is thus still possible.
` The path until Rimnicu Vilcea is already expanded.
` Above each node, the f-limit for every recursive call is shown on top.
` Below each node: f(n)
` The path is followed until Pitesti, which has an f-value worse than the f-limit.
` Unwind the recursion and store the best f-value for the current best leaf Pitesti
result, f[best] ← RBFS(problem, best, min(f_limit, alternative))
` best is now Fagaras. Call RBFS for the new best
best value is now 450
` Unwind the recursion and store the best f-value for the current best leaf Fagaras
result, f[best] ← RBFS(problem, best, min(f_limit, alternative))
` best is now Rimnicu Vilcea (again). Call RBFS for the new best
The subtree is again expanded.
The best alternative subtree is now through Timisoara.
` The solution is found, since 447 > 417.
` RBFS is a bit more efficient than IDA*
Still excessive node generation (mind changes)
` Like A*, optimal if h(n) is admissible
` Space complexity is O(bd).
IDA* retains only one single number (the current f-cost limit)
` Time complexity is difficult to characterize
Depends on the accuracy of h(n) and how often the best path changes.
` IDA* and RBFS suffer from too little memory.
` Use all available memory, i.e. expand best leaves until available memory is full
When full, SMA* drops the worst leaf node (highest f-value)
Like RBFS, back up the forgotten node's value to its parent
` What if all leaves have the same f-value?
The same node could be selected for expansion and deletion.
SMA* solves this by expanding the newest best leaf and deleting the oldest worst leaf.
` SMA* is complete if a solution is reachable, optimal if an optimal solution is reachable.
` E.g. for the 8-puzzle:
Avg. solution cost is about 22 steps (branching factor +/- 3)
Exhaustive search to depth 22: 3.1 x 10^10 states.
A good heuristic function can reduce the search process.
` E.g. the 8-puzzle has two commonly used heuristics:
` h1 = the number of misplaced tiles; h1(s) = 8
` h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance); h2(s) = 3+1+2+2+2+3+3+2 = 18
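Both heuristics can be written directly. The start state below is the standard textbook instance that yields h1 = 8 and h2 = 18 (the tile layout is an assumption, since the slide's figure is not reproduced here); 0 denotes the blank.

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of tiles 1-8 from their goal positions."""
    total = 0
    for tile in range(1, 9):
        row, col = divmod(state.index(tile), 3)    # position in the current state
        grow, gcol = divmod(goal.index(tile), 3)   # position in the goal state
        total += abs(row - grow) + abs(col - gcol)
    return total

# Assumed textbook instance: rows (7 2 4), (5 _ 6), (8 3 1); goal reads 0..8.
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # 8 18
```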
` Effective branching factor b*
Is the branching factor that a uniform tree of depth d would have in order to contain N+1 nodes:
N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
The measure is fairly constant for sufficiently hard problems.
x Can thus provide a good guide to the heuristic's overall usefulness.
x A good value of b* is 1.
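The defining equation N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d has no closed form for b*, but it can be solved numerically. A minimal sketch using bisection (assuming b* >= 1):

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    target = n_nodes + 1
    lo, hi = 1.0, float(target)                 # b* lies between 1 and N+1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid**i for i in range(depth + 1)) < target:
            lo = mid                            # tree too small: b* is larger
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity check: a uniform tree of depth 2 with 1 + 2 + 4 = 7 nodes (N = 6)
# has effective branching factor exactly 2.
print(round(effective_branching_factor(6, 2), 3))  # 2.0
```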
` 1200 random problems with solution lengths from 2 to 24.
` If h2(n) >= h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.
` Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem:
Relaxed 8-puzzle for h1: a tile can move anywhere. As a result, h1(n) gives the shortest solution.
Relaxed 8-puzzle for h2: a tile can move to any adjacent square. As a result, h2(n) gives the shortest solution.
` The optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.
` ABSolver found a useful heuristic for the Rubik's Cube.
` Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem.
` This cost is a lower bound on the cost of the real problem.
` Pattern databases store the exact solution cost for every possible subproblem instance. The complete heuristic is constructed using the patterns in the DB.
` Another way to find an admissible heuristic is through learning from experience: Experience = solving lots of 8-puzzles.
An inductive learning algorithm can be used to predict costs for other states that arise during search.
` Previously: systematic exploration of the search space, where the path to the goal is the solution to the problem.
` YET, for some problems the path is irrelevant, e.g. 8-queens.
` Different algorithms can then be used: local search.
` Local search = use a single current state and move to neighboring states.
` Advantages: Uses very little memory.
Often finds reasonable solutions in large or infinite state spaces.
` Also useful for pure optimization problems: find the best state according to some objective function.
e.g. survival of the fittest as a metaphor for optimization.
` Hill climbing is a loop that continuously moves in the direction of increasing value. It terminates when a peak is reached.
` Hill climbing does not look ahead beyond the immediate
neighbors of the current state.
` Hill climbing chooses randomly among the set of best successors if there is more than one.
` Hill climbing is a.k.a. greedy local search.
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
` 8-queens problem (complete-state formulation).
` Successor function: move a single queen to another square in the same column.
` Heuristic function h(n): the number of pairs of queens that are attacking each other (directly or indirectly).
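This formulation can be sketched directly in Python; the board encoding (one queen per column, board[col] = that queen's row) and all names are assumptions for illustration, not from the slides.

```python
def attacking_pairs(board):
    """h(n): number of pairs of queens attacking each other.
    board[col] gives the row of the queen in that column."""
    h = 0
    for c1 in range(len(board)):
        for c2 in range(c1 + 1, len(board)):
            if board[c1] == board[c2]:                    # same row
                h += 1
            elif abs(board[c1] - board[c2]) == c2 - c1:   # same diagonal
                h += 1
    return h

def hill_climb(board):
    """Steepest descent on h: each step makes the within-column queen
    move that lowers h the most; stops at a local minimum."""
    board = list(board)
    while True:
        current_h = attacking_pairs(board)
        best_h, best_move = current_h, None
        for col in range(len(board)):
            original = board[col]
            for row in range(len(board)):
                if row == original:
                    continue
                board[col] = row          # try moving this queen
                h = attacking_pairs(board)
                if h < best_h:
                    best_h, best_move = h, (col, row)
            board[col] = original         # undo the trial move
        if best_move is None:   # no successor improves h: local minimum
            return board, current_h
        board[best_move[0]] = best_move[1]
```

Note the returned h may still be positive: the algorithm can halt on a local minimum such as the h = 1 state in the figure.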
a) An 8-queens state with h = 17, showing the h-value of each possible successor.
b) A local minimum in the 8-queens state space (h = 1).
` Ridge = a sequence of local maxima, difficult for greedy algorithms to navigate.
` Plateau = an area of the state space where the evaluation function is flat.
` On random 8-queens instances, hill climbing gets stuck 86% of the time.
` Stochastic hill climbing: random selection among the uphill moves.
The selection probability can vary with the steepness of the uphill move.
` First-choice hill climbing: implements stochastic hill climbing by generating successors
randomly until one better than the current state is found.
` Random-restart hill climbing: tries to avoid getting stuck in local maxima.
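A minimal random-restart sketch on a hypothetical one-dimensional toy objective (everything here is illustrative; a real use would wrap something like an 8-queens hill climber):

```python
import random

def descend(x, f):
    """Greedy descent on the integers: step to the better neighbour."""
    while True:
        best = min(x - 1, x + 1, key=f)
        if f(best) >= f(x):
            return x, f(x)          # local minimum
        x = best

def random_restart(f, lo, hi, restarts, seed=None):
    """Rerun descent from random starting points; keep the best result."""
    rng = random.Random(seed)
    best_x, best_v = None, float("inf")
    for _ in range(restarts):
        x, v = descend(rng.randint(lo, hi), f)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# A bumpy toy objective: global minimum f(0) = 0, but every nonzero
# multiple of 5 is also a local minimum that traps plain descent.
def f(x):
    return abs(x) + 3 * (x % 5 != 0)
```

Plain descent from x = 7 gets trapped at the local minimum x = 5; restarting from several random points escapes it.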
` Escape local maxima by allowing some bad moves, but gradually decrease their size and frequency.
` Origin: metallurgical annealing.
` Bouncing-ball analogy: shaking hard (= high temperature), then shaking less (= lowering the temperature).
` If T decreases slowly enough, the best state is reached.
` Applied to VLSI layout, airline scheduling, etc.
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to temperature
  local variables: current, a node
                   next, a node
                   T, a temperature controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] − VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
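The loop translates almost line for line into Python. A minimal sketch; the toy objective, successor function, and linear cooling schedule at the bottom are illustrative assumptions, not from the slides.

```python
import math
import random

def simulated_annealing(value, successors, start, schedule, seed=None):
    """Always accept uphill moves; accept downhill moves with
    probability e^(dE/T); stop when the temperature reaches 0."""
    rng = random.Random(seed)
    current = start
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = rng.choice(successors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Toy demo: maximise -(x - 7)**2 on the integers 0..20.
value = lambda x: -(x - 7) ** 2
successors = lambda x: [max(0, x - 1), min(20, x + 1)]
schedule = lambda t: max(0.0, 1.0 - t / 500)   # linear cooling, T = 0 at t = 500
```

Since dE ≤ 0 in the probabilistic branch, e^(dE/T) lies in (0, 1]; as T falls, downhill moves become increasingly unlikely, so the walk settles into a good state.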