CS250: Intro to AI/Lisp
Lecture 5-1: “If I Only Had a Brain” (Search)
October 26th, 1999
Blind Search
• No information except:
  – Initial state
  – Operators
  – Goal test
• If we want worst-case optimality, we need exponential time
“How long ‘til we get there?”
• Add a notion of progress to search:
  – Not just the cost to date
  – But how far we still have to go
Best-First Search
• Next node in General-Search:
  – Chosen by the queuing function
  – Replace it with an evaluation function
• Go with the most desirable path
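As a rough illustration (the course code is in Lisp; this minimal Python sketch, with its hypothetical names and toy number-line problem, is purely illustrative), best-first search is General-Search with the queue ordered by an evaluation function f:

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: repeatedly expand the frontier
    node with the lowest evaluation-function value f(n)."""
    frontier = [(f(start), start)]           # priority queue ordered by f
    parents = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if goal_test(node):
            path = []                        # walk parent links back to start
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for child in successors(node):
            if child not in parents:
                parents[child] = node
                heapq.heappush(frontier, (f(child), child))
    return None

# Toy problem: walk the number line from 0 to 7 by +1/+2 steps,
# evaluating nodes by estimated distance to the goal (so f = h: greedy).
path = best_first_search(0, lambda n: n == 7,
                         lambda n: [n + 1, n + 2],
                         lambda n: abs(7 - n))
```

With f = h this behaves greedily; with f = g + h it becomes A*.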
Heuristic Functions
• Estimate with a heuristic function, h(n)
  – Problem specific (Why?)
• Information about getting to the goal
  – Not just where we’ve been
• Examples
  – Route-finding?
  – 8-Puzzle?
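For the 8-puzzle, two standard heuristics are the number of misplaced tiles and the total Manhattan distance. A small Python sketch (the goal ordering and function names here are illustrative assumptions, not the course code):

```python
# Hypothetical goal ordering: blank (0) in the top-left corner.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state):
    """h1: number of tiles (ignoring the blank) not on their goal square."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def manhattan(state):
    """h2: sum of each tile's Manhattan distance to its goal square.
    Neither heuristic overestimates: both are admissible."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

# Example: tiles 7 and 8 swapped in the bottom row.
h1 = misplaced_tiles((0, 1, 2, 3, 4, 5, 6, 8, 7))
h2 = manhattan((0, 1, 2, 3, 4, 5, 6, 8, 7))
```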
Greedy Searching
• Take the path that looks the best right now
  – Lowest estimated cost
• Not optimal
• Not complete
• Complexity?
  – Time: O(b^m)
  – Space: O(b^m)
  (b = branching factor, m = maximum depth of the search space)
Best of Both Worlds?
• Greedy
  – Minimizes total estimated cost to goal, h(n)
  – Not optimal
  – Not complete
• Uniform cost
  – Minimizes cost so far, g(n)
  – Optimal & complete
  – Inefficient
Greedy + Uniform Cost
• Evaluate with both criteria:
    f(n) = g(n) + h(n)
  – What does this mean?
• Sounds good, but is it:
  – Complete?
  – Optimal?
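A minimal Python sketch of searching with f(n) = g(n) + h(n), on a made-up four-node graph (the graph, costs, and h values are hypothetical, chosen so h never overestimates):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: order the frontier by f(n) = g(n) + h(n), where g(n) is the
    cost so far and h(n) the (admissible) estimate of cost remaining."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical mini route-finding graph with straight-line-style
# estimates in H (chosen so h never overestimates the true cost).
GRAPH = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)],
         "D": []}
H = {"A": 4, "B": 3, "C": 1, "D": 0}
cost, path = a_star("A", "D", lambda n: GRAPH[n], lambda n: H[n])
```

Here the greedy-looking direct edge A→C (cost 4) is passed over for the cheaper route through B, which g(n) makes visible.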
Admissible Heuristics
• Optimistic: Never overestimate the cost of reaching the goal
• A* Search = Best-first + Admissible h(n)
A* Search
• Complete
• Optimal, if:
  – The heuristic is admissible
Monotonicity
• Monotonic heuristic functions are nondecreasing along any path
  – Why might this be an important feature?
• Non-monotonic? Use pathmax:
  – Given a node n and its child n’:
      f(n’) = max(f(n), g(n’) + h(n’))
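The pathmax rule is a one-liner; a Python sketch (function name hypothetical):

```python
def pathmax_f(f_parent, g_child, h_child):
    """Pathmax: if h is non-monotonic, a child's raw f = g + h can dip
    below its parent's f; taking the max keeps f nondecreasing."""
    return max(f_parent, g_child + h_child)

# Parent has f = 7; the child's raw f would be 5 + 1 = 6 (a dip),
# so pathmax props it back up to 7.
```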
A* in Action
• Contoured state space
• A* starts at the initial node
• Expands the leaf node of lowest f(n) = g(n) + h(n)
• Fans out to increasing contours
A* in Perspective
• What if h(n) = 0 everywhere?
  – A* is uniform-cost search
• What if h(n) is an exact estimate of the remaining cost?
  – A* runs in linear time!
• Different errors lead to different performance factors
• A* is the best (in terms of expanded nodes) of the optimal best-first searches
A*’s Complexity
• Depends on the error of h(n)
  – Always 0: Breadth-first search
  – Exactly right: Time O(n)
  – Constant absolute error: Time O(n), but more expanded nodes than “exactly right”
  – Constant relative error: Time O(n^k), Space O(n^k)
• See Figure 4.8
Branching Factors
• Effective branching factor: compare how the number of nodes grows level by level
  – Blind search: average of #Nodes@Depth(k) / #Nodes@Depth(k−1)
  – Heuristic search: average of #Nodes@Cost(f) / #Nodes@Cost(f′)
• Where f′ is the next smaller cost, after f
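A sketch of the averaging idea in Python, with made-up node counts per level (depths for blind search, f-cost contours for heuristic search):

```python
def average_branching(counts):
    """Average ratio of successive level sizes. For blind search a
    'level' is a depth; for heuristic search it is an f-cost contour."""
    ratios = [counts[i] / counts[i - 1] for i in range(1, len(counts))]
    return sum(ratios) / len(ratios)

blind = average_branching([1, 3, 9, 27])      # nodes per depth: b = 3
heuristic = average_branching([1, 2, 4, 8])   # nodes per f-contour: b = 2
```

A good heuristic shows up as a smaller effective branching factor, here 2 versus 3.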
Inventing Heuristics
• Dominant heuristics: bigger is better, if you don’t overestimate
• How do you create heuristics?
  – Relaxed problem
  – Statistical approach
• Constraint satisfaction
  – Most-constrained variable
  – Most-constraining variable
  – Least-constraining value
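As one concrete instance, the most-constrained-variable rule (minimum remaining values) can be sketched in Python; the variable and domain encoding here is a hypothetical example:

```python
def most_constrained_variable(domains, assigned):
    """MRV: pick the unassigned variable with the fewest remaining
    legal values, so likely failures are discovered early."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

# Hypothetical CSP state: B has only one value left, so choose it first.
domains = {"A": [1, 2, 3], "B": [1], "C": [2, 3]}
pick = most_constrained_variable(domains, assigned=set())
```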
Improving on A*
• Best of both worlds with DFID
• Can we repeat the trick with A*?
  – Successive iterations with:
    • Increasing search depth (as with DFID)
    • Increasing total path cost
Iterative Deepening A*
• Keeps the good stuff of A*
• Needs only limited memory
Iterative Improvements
• Loop through, trying to “zero in” on the solution
• Hill climbing
  – Climb higher at each step
  – Problems?
• Solution? Add a touch of randomness
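A minimal Python sketch of hill climbing, with random restarts as one “touch of randomness” (the problem and all names are illustrative, not the course’s Lisp code):

```python
import random

def hill_climb(start, neighbors, score):
    """Greedy ascent: move to the best-scoring neighbor until no
    neighbor improves (a local, not necessarily global, maximum)."""
    current = start
    while True:
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current
        current = best

def random_restart(tries, neighbors, score, rng):
    """The 'touch of randomness': climb from several random starting
    points and keep the best peak found."""
    peaks = [hill_climb(rng.randint(-10, 10), neighbors, score)
             for _ in range(tries)]
    return max(peaks, key=score)

score = lambda x: -(x - 3) ** 2              # single peak at x = 3
peak = hill_climb(0, lambda x: [x - 1, x + 1], score)
best_peak = random_restart(3, lambda x: [x - 1, x + 1], score,
                           random.Random(0))
```

On a multimodal landscape plain hill climbing can stall on any local peak; restarting from random points gives each peak a chance to be found.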
Annealing
an·neal vb [ME anelen to set on fire, fr. OE onaelan, fr. on + aelan to set on fire, burn, fr. al fire; akin to OE aeled fire, ON eldr] vt (1664) 1 a: to heat and then cool (as steel or glass) usu. for softening and making less brittle; also: to cool slowly usu. in a furnace b: to heat and then cool (nucleic acid) in order to separate strands and induce combination at lower temperature esp. with complementary strands of a different species 2: strengthen, toughen ~ vi: to be capable of combining with complementary nucleic acid by a process of heating and cooling
Simulated Annealing
(defun simulated-annealing-search (problem &optional
                                           (schedule (make-exp-schedule)))
  (let* ((current (create-start-node problem))
         (successors (expand current problem))
         (best current)
         next temp delta)
    (for time = 1 to infinity do
         (setf temp (funcall schedule time))
         (when (or (= temp 0) (null successors))
           (RETURN (values (goal-test problem best) best)))
         (when (< (node-h-cost current) (node-h-cost best))
           (setf best current))
         (setf next (random-element successors))
         (setf delta (- (node-h-cost next) (node-h-cost current)))
         (when (or (< delta 0.0) ; < because we are minimizing
                   (< (random 1.0) (exp (/ (- delta) temp))))
           (setf current next
                 successors (expand next problem))))))
Let*
(let* ((current (create-start-node problem))
       (successors (expand current problem))
       (best current)
       next temp delta)
  BODY)
The Body
(for time = 1 to infinity do
     (setf temp (funcall schedule time))
     (when (or (= temp 0) (null successors))
       (RETURN (values (goal-test problem best) best)))
     (when (< (node-h-cost current) (node-h-cost best))
       (setf best current))
     (setf next (random-element successors))
     (setf delta (- (node-h-cost next) (node-h-cost current)))
     (when (or (< delta 0.0) ; < because we are minimizing
               (< (random 1.0) (exp (/ (- delta) temp))))
       (setf current next
             successors (expand next problem))))
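For readers who don’t speak Lisp, here is a rough Python transliteration of the loop above; the state encoding, the h function, and the exponential schedule’s parameters are assumptions modeled loosely on the Lisp version:

```python
import math
import random

def simulated_annealing(start, successors, h, schedule, rng):
    """Rough analogue of the Lisp loop: track the best state seen,
    pick a random successor, always accept improvements, and accept a
    worsening of size delta with probability exp(-delta / temp)."""
    current, best = start, start
    time = 0
    while True:
        time += 1
        temp = schedule(time)
        succ = successors(current)
        if temp == 0 or not succ:
            return best
        if h(current) < h(best):
            best = current
        nxt = rng.choice(succ)
        delta = h(nxt) - h(current)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = nxt

def make_exp_schedule(k=20.0, lam=0.05, limit=200):
    """Exponentially decaying temperature, cut off after `limit` steps
    (parameters are assumptions, not the course's exact schedule)."""
    return lambda t: k * math.exp(-lam * t) if t < limit else 0

# Minimize |x| on the integers, starting from 10.
result = simulated_annealing(10, lambda x: [x - 1, x + 1], abs,
                             make_exp_schedule(), random.Random(0))
```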
Proof of A*’s Optimality
Suppose that G is an optimal goal state with path cost f*, and G2 is a suboptimal goal state, where g(G2) > f*.
Could A* ever select G2 from the queue before G?
Consider a node n that is a leaf node on an optimal path to G.
Since h is admissible, f* >= f(n), and since G2 was chosen over n, f(n) >= f(G2).
Together, these imply f* >= f(G2).
But G2 is a goal, so h(G2) = 0 and f(G2) = g(G2).
Therefore f* >= g(G2), contradicting the assumption that g(G2) > f*. So A* never selects a suboptimal goal.