Artificial Intelligence


INTRODUCTION AND PROBLEM SOLVING I

UNIT-1

DEFINITION:

Artificial Intelligence is the study of how to make computers do things at which, at the moment, people are better.

SOME DEFINITIONS OF AI

Building systems that think like humans

“The exciting new effort to make computers think … machines with minds, in the full and literal sense” -- Haugeland, 1985

“The automation of activities that we associate with human thinking, … such as decision-making, problem solving, learning, …” -- Bellman, 1978

Building systems that act like humans

“The art of creating machines that perform functions that require intelligence when performed by people” -- Kurzweil, 1990

“The study of how to make computers do things at which, at the moment, people are better” -- Rich and Knight, 1991

Building systems that think rationally

“The study of mental faculties through the use of computational models” -- Charniak and McDermott, 1985

“The study of the computations that make it possible to perceive, reason, and act” -- Winston, 1992

Building systems that act rationally

“A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes” -- Schalkoff, 1990

“The branch of computer science that is concerned with the automation of intelligent behavior” -- Luger and Stubblefield, 1993

TURING TEST

It was proposed by Alan Turing in 1950. According to this test, a computer can be considered to be thinking only when a human interviewer, conversing with both an unseen human being and an unseen computer, cannot determine which is which.

Description:

Two human beings and one computer: a human interrogator converses with an unseen human and an unseen computer.


The computer would need to possess the following capabilities:

Natural language processing: to enable it to communicate successfully in English

Knowledge representation: to store what it knows or hears

Automated reasoning: to use the stored information to answer questions and to draw new conclusions

Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.

To pass the total Turing test, the computer will need,

Computer vision: to perceive objects

Robotics: to manipulate objects and move about

Thinking and Acting Humanly

Acting humanly

"If it looks, walks, and quacks like a duck, then it is a duck”

The Turing Test

Interrogator communicates by typing at a terminal with TWO other agents. The human can say and ask whatever s/he likes, in natural language. If the human cannot decide which of the two agents is a human and which is a computer, then the computer has achieved AI

This is an OPERATIONAL definition of intelligence, i.e., one that gives an algorithm for testing objectively whether the definition is satisfied.

Thinking humanly: cognitive modeling

Develop a precise theory of mind, through experimentation and introspection, then write a computer program that implements it

Example: GPS - General Problem Solver (Newell and Simon, 1961)

trying to model the human process of problem solving in general

Thinking Rationally- The laws of thought approach

Capture "correct" reasoning processes

A loose definition of rational thinking: Irrefutable reasoning process

How do we do this?

Develop a formal model of reasoning (formal logic) that “always” leads to the “right” answer

Implement this model

How do we know when we've got it right?

when we can prove that the results of the programmed reasoning are correct


soundness and completeness of first-order logic

Example:

Ram is a student of III year CSE. All students of III year CSE are good.

Therefore, Ram is a good student.

Acting Rationally

Act so that desired goals are achieved

The rational agent approach (this is what we’ll focus on in this course)

Figure out how to make correct decisions, which sometimes means thinking rationally and other times means having rational reflexes

correct inference versus rationality

reasoning versus acting; limited rationality

RELATION WITH OTHER DISCIPLINES:

- Expert Systems

- Natural Language Processing

- Speech Recognition

- Robotics

- Computer Vision

- Intelligent Computer-Aided Instruction

- Data Mining

- Genetic Algorithms

• Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality

• Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability, probability

• Economics: utility, decision theory

• Neuroscience: physical substrate for mental activity

• Psychology: phenomena of perception and motor control, experimental techniques


• Computer engineering: building fast computers

• Control theory: design of systems that maximize an objective function over time

• Linguistics: knowledge representation, grammar

HISTORY OF AI:

• 1943 McCulloch & Pitts: Boolean circuit model of brain

• 1950 Turing's "Computing Machinery and Intelligence"

• 1956 Dartmouth meeting: "Artificial Intelligence" adopted

• 1952—69 Look, Ma, no hands!

• 1950s Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine

• 1965 Robinson's complete algorithm for logical reasoning

• 1966—73 AI discovers computational complexity Neural network research almost disappears

• 1969—79 Early development of knowledge-based systems

• 1980-- AI becomes an industry

• 1986-- Neural networks return to popularity

• 1987-- AI becomes a science

• 1995-- The emergence of intelligent agents

INTELLIGENT AGENT:

Agent = perceive + act

Thinking

Reasoning

Planning


Agent: entity in a program or environment capable of generating action.

An agent uses perception of the environment to make decisions about actions to take.

The perception capability is usually called a sensor.

The actions can depend on the most recent perception or on the entire history (percept sequence).

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon the environment through actuators.

Ex: Robotic agent

Human agent

Fig: An agent and its ENVIRONMENT. SENSORS deliver PERCEPTS from the environment to the agent program (shown as "?"), which selects an ACTION that the ACTUATORS carry out on the environment.


Agents interact with environment through sensors and actuators.

Percept sequence                Action
[A, Clean]                      Right
[A, Dirty]                      Suck
[B, Clean]                      Left
[B, Dirty]                      Suck
[A, Clean], [A, Clean]          Right
[A, Clean], [A, Dirty]          Suck

Fig: Partial tabulation of a simple agent function for the vacuum-cleaner world

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action.

The function is implemented as the agent program.

The part of the agent taking an action is called an actuator.

environment → sensors → agent function → actuators → environment

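The tabulated agent function can be written directly as a short program. A minimal Python sketch (the function name and the encoding of a percept as a (location, status) pair are illustrative assumptions, not part of the original notes):

    def vacuum_agent(percept):
        # percept is a (location, status) pair, e.g. ("A", "Dirty")
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    # Example: vacuum_agent(("A", "Clean")) returns "Right",
    # matching the first row of the table above.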


RATIONAL AGENT:

A rational agent is one that can take the right decision in every situation.

Performance measure: a set of criteria/test bed for the success of the agent's behavior.

The performance measures should be based on the desired effect of the agent on the environment.

Rationality:

The agent's rational behavior depends on:

the performance measure that defines success

the agent's knowledge of the environment

the actions that it is capable of performing

the current sequence of perceptions.

Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy:

An agent is omniscient if it knows the actual outcome of its actions. Not possible in practice.

An environment can sometimes be completely known in advance.


Exploration: sometimes an agent must perform an action to gather information (to increase perception).

Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

NATURE OF ENVIRONMENTS:

Task environment – the problem that the agent is a solution to.

Includes

Performance measure

Environment

Actuators

Sensors

Agent Type: Taxi driver

Performance measures: safe, fast, legal, comfort, maximize profits

Environment: roads, other traffic, pedestrians, customers

Actuators: steering, accelerator, brake, signal, horn

Sensors: cameras, sonar, GPS, speedometer, keyboard, etc.

Agent Type: Medical diagnosis system

Performance measures: healthy patient, minimize costs, lawsuits

Environment: patient, hospital, staff

Actuators: screen display (questions, tests, diagnoses, treatments, referrals)

Sensors: keyboard (entry of symptoms, findings, patient's answers)

Properties of Task Environment:

• Fully Observable (vs. Partly Observable)

– Agent sensors give complete state of the environment at each point in time

– Sensors detect all aspects that are relevant to the choice of action.

– An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.


• Deterministic (vs. Stochastic)

– Next state of the environment is completely determined by the current state and the action executed by the agent

– Strategic environment (the environment is deterministic except for the actions of other agents)

• Episodic (vs. Sequential)

– Agent’s experience can be divided into episodes; in each episode the agent perceives and then performs a single action

• The next episode does not depend on the previous episode

– In a sequential environment, the current decision can affect all future states

• Static (vs. Dynamic)

– Environment doesn’t change as the agent is deliberating

– Semi dynamic

• Discrete (vs. Continuous)

– Depends on the way time is handled in describing states, percepts and actions

• Chess game : discrete

• Taxi driving : continuous

• Single Agent (vs. Multi Agent)

– Competitive, cooperative multi-agent environments

– Communication is a key issue in multi agent environments.

Partially Observable:

Ex: An automated taxi cannot see what other drivers are thinking.

Stochastic:

Ex: taxi driving is clearly stochastic in this sense, because one can never predict the behavior of the traffic exactly.

Semi-dynamic:

If the environment itself does not change with the passage of time, but the agent's performance score does, then the environment is semi-dynamic.

Single agent vs. multi-agent:

An agent solving a crossword puzzle by itself is clearly in a single-agent environment.

An agent playing chess is in a two-agent environment.


Example of Task Environments and Their Classes

STRUCTURE OF AGENT:


Simple Agents:

Table-driven agents: the function consists in a lookup table of actions to be taken for every possible state of the environment.

If the environment has n variables, each with t possible states, then the table size is t^n.

Only works for a small number of possible states for the environment.

Simple reflex agents: deciding on the action to take based only on the current perception and not on the history of perceptions.

Based on the condition-action rule:

(if (condition) action)

Works if the environment is fully observable

Four types of agents:

1. Simple reflex agent

2. Model based reflex agent

3. Goal-based agent

4. Utility-based agent

Simple reflex agent

Definition:

A simple reflex agent works only if the correct decision can be made on the basis of the current percept alone; that is, only if the environment is fully observable.


Characteristics

– no plan, no goal

– do not know what they want to achieve

– do not know what they are doing

Condition-action rule

– If condition then action

Ex: medical diagnosis system.


Algorithm Explanation:

INTERPRET-INPUT:

This function generates an abstracted description of the current state from the percept.

RULE-MATCH:

This function returns the first rule in the set of rules that matches the given state description.

RULE-ACTION:

The action of the selected rule is executed for the given percept.
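Putting the three functions together gives the agent program itself. A hedged Python sketch of the simple reflex agent loop, with the rule representation (a list of condition-action pairs) assumed for illustration:

    def interpret_input(percept):
        # Generate an abstracted description of the current state
        # from the percept (identity here; real agents would abstract).
        return percept

    def rule_match(state, rules):
        # Return the action of the first rule whose condition
        # matches the given state description.
        for condition, action in rules:
            if condition(state):
                return action
        return None

    def simple_reflex_agent(percept, rules):
        state = interpret_input(percept)
        return rule_match(state, rules)

    # Example rule set for the vacuum world:
    rules = [(lambda s: s[1] == "Dirty", "Suck"),
             (lambda s: s[0] == "A", "Right"),
             (lambda s: s[0] == "B", "Left")]
    # simple_reflex_agent(("B", "Dirty"), rules) returns "Suck".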

Model-Based Reflex Agents:

Definition:

An agent that combines the current percept with the old internal state to generate an updated description of the current state.

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.

This usually requires an internal representation of the world (or internal state).

Since this representation is a model of the world, we call this model-based agent.

Ex: Braking problem

Characteristics

Reflex agent with internal state

Sensor does not provide the complete state of the world.

must keep its internal state

Updating the internal world state requires two kinds of knowledge:

How the world evolves

How the agent’s actions affect the world

Algorithm Explanation:

UPDATE-STATE: This function is responsible for creating the new internal state description.
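A corresponding sketch of the model-based reflex agent, in which the internal state is updated from the old state, the last action and the new percept before a rule is matched. The update_state argument stands in for the two kinds of knowledge listed above; all names are illustrative assumptions:

    class ModelBasedReflexAgent:
        def __init__(self, rules, update_state):
            self.rules = rules                # condition-action rules
            self.update_state = update_state  # model of how the world evolves
            self.state = {}                   # internal description of the world
            self.last_action = None

        def step(self, percept):
            # Combine the old internal state, the last action
            # and the current percept into an updated description.
            self.state = self.update_state(self.state, self.last_action, percept)
            for condition, action in self.rules:
                if condition(self.state):
                    self.last_action = action
                    return action
            self.last_action = None
            return None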

Goal-based agents:

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal).


In some cases the goal is easy to achieve. In others it involves planning, sifting through a search space for possible solutions, developing a strategy.

Characteristics

– Action depends on the goal. (consideration of future)

– e.g. path finding

– Fundamentally different from the condition-action rule.

– Search and Planning

– Solving “car-braking” problem?

– Yes, possible … but not very natural.

• Appears less efficient.

Utility-based agents

If one state is preferred over the other, then it has higher utility for the agent

Utility-Function (state) = real number (degree of happiness)

The agent is aware of a utility function that estimates how close the current state is to the agent's goal.

• Characteristics

– to generate high-quality behavior


– Map the internal states to real numbers.

(e.g., game playing)

• Looks for a higher utility value via the utility function
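Action selection can then be sketched as maximizing utility over the states the agent believes its actions would lead to. A minimal illustration (result and utility are assumed, hypothetical functions):

    def choose_action(state, actions, result, utility):
        # result(state, action) -> predicted successor state
        # utility(state) -> real number (degree of happiness)
        return max(actions, key=lambda a: utility(result(state, a)))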

Learning Agents

Agents capable of acquiring new competence through observations and actions.

Learning agent has the following components

Learning element

Suggests modifications to the existing rules, based on feedback from the critic

Performance element

Collection of knowledge and procedures for selecting the driving actions

Choice depends on Learning element

Critic

Observes the world and passes information to the learning element

Problem generator

Identifies areas of behavior that need improvement and suggests experiments


Agent Example

A file manager agent.

Sensors: commands like ls, du, pwd.

Actuators: commands like tar, gzip, cd, rm, cp, etc.

Purpose: compress and archive files that have not been used in a while.

Environment: fully observable (but partially observed), deterministic (strategic), episodic, dynamic, discrete.

Agent vs. Program

Size – an agent is usually smaller than a program.

Purpose – an agent has a specific purpose while programs are multi-functional.

Persistence – an agent's life span is not entirely dependent on a user launching and quitting it.

Autonomy – an agent doesn't need the user's input to function.

Problem Solving Agents

• Problem solving agent

– A kind of “goal based” agent

– Finds sequences of actions that lead to desirable states.


Formulate Goal, Formulate Problem

Search

Execute

PROBLEMS

Four components of problem definition

– Initial state – the state that the agent starts in

– Possible Actions

• Uses a Successor Function

– Returns <action, successor> pair

• State Space – the state space forms a graph in which the nodes are states and arcs between nodes are actions.

• Path

– Goal Test – determines whether a given state is a goal state

– Path cost – function that assigns a numeric cost to each path.

• Step cost

Problem formulation is the process of deciding what actions and states to consider, given a goal
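The four components can be bundled into one problem object that the search algorithms below operate on. A minimal Python sketch (class and field names are illustrative assumptions):

    class Problem:
        def __init__(self, initial_state, successors, goal_test, step_cost):
            self.initial_state = initial_state
            self.successors = successors  # state -> list of (action, successor)
            self.goal_test = goal_test    # state -> True if state is a goal
            self.step_cost = step_cost    # (state, action, successor) -> number

        def path_cost(self, path):
            # path is a list of (state, action, successor) triples
            return sum(self.step_cost(s, a, t) for s, a, t in path)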

Path:

A path in the state space is a sequence of states connected by a sequence of actions.

The sequence of steps performed by an intelligent agent to maximize the performance measure:

Goal formulation: based on the current situation and the agent’s performance measure, this is the first step in problem solving.

Problem formulation: the process of deciding what actions and states to consider, given a goal.

Search: the process of looking for a sequence of actions that reaches the goal.

Solution: a search algorithm takes a problem as input and returns a solution in the form of an action sequence.

Execution: once a solution is found, carrying out the recommended actions is called the execution phase.


Solutions

• A Solution to the problem is the path from the initial state to the final state

• Quality of solution is measured by path cost function

– Optimal Solution has the lowest path cost among other solutions

– An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to a state of known value, and then choosing the best sequence

Searching Process

– Input to Search : Problem

– Output from Search : Solution in the form of Action Sequence

A problem-solving agent assumes the environment is:

• Static

• Observable

• Discrete

• Deterministic


Example

A Simplified Road Map of Part of Romania

Explanation:

• On holiday in Romania; currently in Arad

• Flight leaves tomorrow from Bucharest

• Formulate goal:

– be in Bucharest

• Formulate problem:

– states: various cities

– actions: drive between cities

• Find solution:

– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

TOY PROBLEM

Example-1 : Vacuum World

Problem Formulation

• States

– 2 × 2² = 8 states


– Formula: n·2^n states

• Initial State

– Any one of 8 states

• Successor Function

– Legal states that result from three actions (Left, Right, Suck)

• Goal Test

– All squares are clean

• Path Cost

– Number of steps (each step costs a value of 1)


State Space for the Vacuum World.

Labels on Arcs denote L: Left, R: Right, S: Suck

Example-2 : The 8-Puzzle

• States : Location of Tiles

• Initial State : One of States

• Successor Function : Move blank Left, Right, Up, Down

• Goal Test : Shown in Fig. Above

• Path Cost : 1 for each step


The eight-puzzle is from the family of “sliding-block puzzles”

• NP Complete

• 8 puzzle has 9!/2 = 181440 states

• 15 puzzle has approx. 1.3×10^12 states

• 24 puzzle has approx. 1×10^25 states
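The successor function listed above can be sketched directly. Here the board is encoded as a tuple of nine integers with 0 marking the blank; the encoding and helper name are illustrative assumptions:

    def successors_8puzzle(state):
        # state: tuple of 9 ints, 0 is the blank; index = row*3 + col
        moves = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}
        blank = state.index(0)
        row, col = divmod(blank, 3)
        result = []
        for action, delta in moves.items():
            # Reject moves that would slide the blank off the board.
            if (action == "Left" and col == 0) or \
               (action == "Right" and col == 2) or \
               (action == "Up" and row == 0) or \
               (action == "Down" and row == 2):
                continue
            board = list(state)
            target = blank + delta
            board[blank], board[target] = board[target], board[blank]
            result.append((action, tuple(board)))
        return result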

Example-3 : The 8-Queens Problem

• Place eight queens on a chess board such that no queen can attack another queen

• No path cost because only the final state counts!

• Incremental formulations

• Complete state formulations

States : Any arrangement of 0 to 8 queens on the board (complete-state formulation); or arrangements of n queens, one per column in the leftmost n columns, with no queen attacking another (incremental formulation)

Initial state : No queens on the board


Successor function: Add a queen to any empty square (complete-state); or add a queen to any square in the leftmost empty column such that it is not attacked by any other queen (incremental), leaving only 2,057 sequences to investigate

Goal Test: 8 queens on the board and none are attacked. (The naive formulation gives 64×63×…×57 ≈ 1.8×10^14 possible sequences.)
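The incremental formulation is easy to express in code, since the no-attack constraint is enforced as queens are added column by column. A hedged sketch (all names are illustrative):

    def attacks(col1, row1, col2, row2):
        # Queens attack along rows and diagonals (columns always differ here).
        return row1 == row2 or abs(row1 - row2) == abs(col1 - col2)

    def successors_queens(state):
        # state: tuple where state[i] is the row of the queen in column i
        col = len(state)  # leftmost empty column
        if col == 8:
            return []
        return [state + (row,)
                for row in range(8)
                if not any(attacks(c, r, col, row)
                           for c, r in enumerate(state))]

    def goal_test_queens(state):
        return len(state) == 8  # non-attacking by construction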

SOME MORE REAL-WORLD PROBLEMS

• Route finding

• Touring (traveling salesman)

• Logistics

• VLSI layout

• Robot navigation

• Learning

Robotic assembly:

States: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled.

Actions: continuous motions of the robot joints.

Goal test: complete assembly

Path cost: time to execute.

Route finding: find the best route between two cities, given the type and condition of existing roads and the driver’s preferences

• Used in

– computer networks

– automated travel advisory systems

– airline travel planning systems

• path cost

• money

• seat quality

• time of day

• type of airplane

Traveling Salesman Problem (TSP)

• A salesman must visit N cities.


• Each city is visited exactly once, and the tour finishes at the city it started from.

• There is usually an integer cost c (a, b) to travel from city a to city b.

• The total tour cost must be minimized, where the total cost is the sum of the costs of the individual legs of the tour.

• Given a road map of n cities, find the shortest tour which visits every city on the map exactly once and then return to the original city (Hamiltonian circuit)

• (Geometric version):

– A complete graph of n vertices (on a unit square)

– Distance between any two vertices: Euclidean distance

– n!/(2n) legal tours

– Find one legal tour that is shortest

– Find one legal tour that is shortest

It is an NP-complete problem: no one has found a really efficient way of solving such problems for large n. It is closely related to the Hamiltonian-cycle problem.
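The brute-force approach makes the intractability concrete: fixing the start city still leaves (n−1)! orderings to try. A small, hedged Python sketch (the function name and the cost callback are assumptions):

    from itertools import permutations

    def brute_force_tsp(cities, cost):
        # cost(a, b): cost of travelling from city a to city b
        start, *rest = cities
        best_tour, best_cost = None, float("inf")
        for perm in permutations(rest):      # (n-1)! orderings
            tour = (start, *perm, start)     # return to the start city
            total = sum(cost(a, b) for a, b in zip(tour, tour[1:]))
            if total < best_cost:
                best_tour, best_cost = tour, total
        return best_tour, best_cost

This is feasible only for small n, which is why heuristic and approximate methods matter for large instances.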

VLSI layout

• Deciding the placement of silicon chips on breadboards (or of standard gates on a chip) is very complex.

• This includes

– cell layout

– channel routing

• The goal is to place the chips without overlap.


• Finding the best way to route the wires between the chips becomes a search problem.

Searching for Solutions to VLSI Layout

• Generating action sequences

• Data structures for search trees

Generating action sequences

• What do we know?

– define a problem and recognize a solution

• Finding a solution is done by a search in the state space

• Maintain and extend a partial solution sequence

UNINFORMED SEARCH STRATEGIES

• Uninformed strategies use only the information available in the problem definition

– Also known as blind searching

– Uninformed search methods:

• Breadth-first search

• Uniform-cost search

• Depth-first search

• Depth-limited search

• Iterative deepening search


BREADTH-FIRST SEARCH

Definition:

The root node is expanded first, then all the nodes generated by the root, and so on level by level.

• Expand the shallowest unexpanded node

• Place all new successors at the end of a FIFO queue

Implementation:
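A minimal Python sketch of the implementation, assuming goal_test and successors callbacks (the names are illustrative; the original figure is not reproduced here):

    from collections import deque

    def breadth_first_search(start, goal_test, successors):
        # FIFO queue: the shallowest unexpanded node is expanded first.
        frontier = deque([(start, [start])])
        explored = {start}
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path
            for succ in successors(state):
                if succ not in explored:
                    explored.add(succ)
                    frontier.append((succ, path + [succ]))
        return None  # no solution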


Properties of Breadth-First Search

• Complete

– Yes if b (max branching factor) is finite

• Time

– 1 + b + b² + … + b^d + b(b^d − 1) = O(b^(d+1))

– exponential in d

• Space

– O(b^(d+1))

– Keeps every node in memory

– This is the big problem; an agent that generates nodes at 10 MB/sec will produce about 860 GB in 24 hours

• Optimal

– Yes (if cost is 1 per step); not optimal in general

Lessons from Breadth First Search

• The memory requirements are a bigger problem for breadth-first search than is execution time

• Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances


Ex: Route finding problem

Given:

Task: Find the route from S to G using BFS.

Step 1:

Step 2:

Step 3:


Step 4:

Answer: The path found at the 2nd depth level is S-B-G (or) S-C-G.

Time complexity: 1 + b + b² + … + b^d + b(b^d − 1) = O(b^(d+1))

DEPTH-FIRST SEARCH (BACKTRACKING SEARCH):

Definition:

Expand nodes along one path to the maximum depth of the tree. If a dead end occurs, the search backtracks to the most recent node that still has unexpanded successors.

• Expand the deepest unexpanded node

• Unexplored successors are placed on a stack until fully explored

• Nodes are enqueued in LIFO (last-in, first-out) order; that is, a stack data structure is used to order the nodes.

• It has modest memory requirement.

• It needs to store only a single path from the root to a leaf node, along with remaining unexpanded sibling nodes for each node on a path

• Backtracking search uses even less memory.


Implementation:
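A minimal Python sketch of the implementation, using a LIFO stack in place of BFS's FIFO queue and avoiding repeated states along the current path (callback names are illustrative; the original figure is not reproduced here):

    def depth_first_search(start, goal_test, successors):
        # LIFO stack: the deepest unexpanded node is expanded first.
        stack = [(start, [start])]
        while stack:
            state, path = stack.pop()
            if goal_test(state):
                return path
            for succ in successors(state):
                if succ not in path:  # avoid loops along the current path
                    stack.append((succ, path + [succ]))
        return None  # no solution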


Properties of Depth-First Search

• Complete

– No: fails in infinite-depth spaces, spaces with loops

• Modify to avoid repeated states along the path


– Yes: in finite spaces

• Time

– O(b^m)

– Not great if m is much larger than d

– But if the solutions are dense, this may be faster than breadth-first search

• Space

– O(bm), i.e., linear space (b·m nodes)

• Optimal

– No

• When search hits a dead-end, can only back up one level at a time even if the “problem” occurs because of a bad operator choice near the top of the tree. Hence, only does “chronological backtracking”

Advantage:

• If more than one solution exists, or the number of levels is high, then DFS is best because only a small portion of the search space is explored.

Disadvantage:

• Not guaranteed to find a solution.

Example: Route finding problem

Given problem:

Task: Find a route from S to G using DFS.


Step 1:

Step 2:

Step 3:

Step 4:



Answer: The path found at the 3rd depth level is S-A-D-G.

DEPTH-LIMITED SEARCH

Definition:

A cut off (Maximum level of the depth) is introduced in this search technique to overcome the disadvantage of Depth First Search. The cut off value depends on the number of states.

DLS can be implemented as a simple modification to the general tree search algorithm or the recursive DFS algorithm.

DLS imposes a fixed depth limit on a depth-first search.

A variation of depth-first search that uses a depth limit

– Alleviates the problem of unbounded trees

– Search to a predetermined depth l (“ell”)

– Nodes at depth l have no successors

• Same as depth-first search if l = ∞

• Can terminate for failure and cutoff

• Two kinds of failure

Standard failure: indicates no solution

Cut off: indicates no solution within the depth limit
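A recursive Python sketch that distinguishes the two kinds of failure, returning a path on success, the string "cutoff" when the depth limit was hit, or None for standard failure (the representation and names are illustrative assumptions):

    def depth_limited_search(state, goal_test, successors, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return "cutoff"  # no solution within the depth limit
        cutoff_occurred = False
        for succ in successors(state):
            result = depth_limited_search(succ, goal_test, successors, limit - 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return [state] + result
        return "cutoff" if cutoff_occurred else None  # standard failure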



Properties of Depth-Limited Search

• Complete

– Yes, if l ≥ d; otherwise no

• Time

– 1 + b + b² + … + b^l = O(b^l)

• Space

– O(bl), i.e., b·l nodes in memory

• Optimal

– No (even when l > d, the first solution found need not be optimal)

Advantage:

• Cut off level is introduced in DFS Technique.

Disadvantage:

• No guarantee to find the optimal solution.


E.g.: Route finding problem

Given:

The number of states in the given map is five, so it is possible to reach the goal state at a maximum depth of four. Therefore the cut-off value is four.

Task: find a path from A to E.


Answer: Path = A-B-D-E, depth = 3

ITERATIVE DEEPENING SEARCH (OR) DEPTH-FIRST ITERATIVE DEEPENING (DFID):

Definition:

• Iterative deepening depth-first search is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits

Uses depth-first search

Finds the best depth limit

Gradually increases the depth limit; 0, 1, 2, … until a goal is found

Iterative Lengthening Search:

The idea is to use increasing path-cost limit instead of increasing depth limits. The resulting algorithm called iterative lengthening search.



Implementation:
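A minimal sketch of the implementation, reusing the depth_limited_search function sketched in the depth-limited search section above (the original figure is not reproduced here):

    from itertools import count

    def iterative_deepening_search(start, goal_test, successors):
        # Try depth limits 0, 1, 2, ... until a goal is found.
        for limit in count():
            result = depth_limited_search(start, goal_test, successors, limit)
            if result != "cutoff":
                return result  # a path, or None for true failure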

Properties of Iterative Deepening Search:

• Complete

– Yes


• Time : N(IDS) = (d)b + (d−1)b² + … + (1)b^d

– O(b^d)

• Space

– O(bd), i.e., linear in d

• Optimal

– Yes if step cost = 1

– Can be modified to explore uniform cost tree

Advantages:

• This method is preferred for large state space and when the depth of the search is not known.

• Memory requirements are modest.

• Like BFS it is complete

Disadvantages:

Many states are expanded multiple times.

Lessons from Iterative Deepening Search

• If branching factor is b and solution is at depth d, then nodes at depth d are generated once, nodes at depth d-1 are generated twice, etc.

– Hence N(IDS) = b^d + 2b^(d−1) + … + db ≤ b^d / (1 − 1/b)² = O(b^d).

– If b = 4, then the worst case is 1.78 · 4^d, i.e., 78% more nodes searched than exist at depth d (in the worst case).

• Faster than BFS even though IDS generates repeated states

– BFS generates nodes up to level d+1

– IDS only generates nodes up to level d

• In general, iterative deepening search is the preferred uninformed search method when there is a large search space and the depth of the solution is not known

Example: Route finding problem

Given:

(Graph: A is connected to B, C and F; B to D; C to E; D to E; E to G; F to G.)


Task: Find a path from A to G.

Limit=0

Limit=1

Limit=2



Limit 2: A-F-G

Limit 3: A-C-E-G

Limit 4: A-B-D-E-G

Answer: Since IDS stops at the smallest depth limit at which a goal is found, A-F-G is selected as the solution path.

BI-DIRECTIONAL SEARCH

Definition:

It is a strategy that searches simultaneously in both directions, i.e., forward from the initial state and backward from the goal state, and stops when the two searches meet in the middle.

• Alternate searching from the start state toward the goal and from the goal state toward the start.

• Stop when the frontiers intersect.

• Works well only when there are unique start and goal states.

• Requires the ability to generate “predecessor” states.

• Can (sometimes) lead to finding a solution more quickly.
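A hedged sketch of the alternating scheme: one breadth-first step from the start side, one from the goal side, stopping at the first state seen by both. It returns only the meeting state (reconstructing the full path would additionally require parent pointers); the predecessors callback is the assumed backward generator:

    from collections import deque

    def bidirectional_search(start, goal, neighbors, predecessors):
        if start == goal:
            return start
        fwd_seen, bwd_seen = {start}, {goal}
        fwd, bwd = deque([start]), deque([goal])
        while fwd and bwd:
            # One step forward from the start side.
            for nxt in neighbors(fwd.popleft()):
                if nxt in bwd_seen:
                    return nxt  # the frontiers meet here
                if nxt not in fwd_seen:
                    fwd_seen.add(nxt)
                    fwd.append(nxt)
            # One step backward from the goal side.
            for prev in predecessors(bwd.popleft()):
                if prev in fwd_seen:
                    return prev
                if prev not in bwd_seen:
                    bwd_seen.add(prev)
                    bwd.append(prev)
        return None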

Properties of Bidirectional Search:

1. Time Complexity: O(b^(d/2))

2. Space Complexity: O(b^(d/2))

3. Complete: Yes

4. Optimal: Yes


Advantages:

Reduces time and space complexity.

Disadvantages:

The space requirement is the most significant weakness of bi-directional search.

If the two searches do not meet at all, the technique only adds complexity. In the backward search, computing predecessors is a difficult task. If more than one goal state exists, explicit multiple-state searches are required.

Ex: Route Finding Problem

Given:

Task: Find a path from A to E

Search from forward (A):

Search from backward (E):

Answer: Solution path is A-C-E.



COMPARING UNINFORMED SEARCH STRATEGIES

• Completeness

– Will a solution always be found if one exists?

• Time

– How long does it take to find the solution?

– Often represented as the number of nodes searched

• Space

– How much memory is needed to perform the search?

– Often represented as the maximum number of nodes stored at once

• Optimal

– Will the optimal (least cost) solution be found?

• Time and space complexity are measured in

– b – maximum branching factor of the search tree

– m – maximum depth of the state space

– d – depth of the least cost solution