© Enn Tyugu1 Algorithms of Artificial Intelligence Lecture 7: Constraint satisfaction E. Tyugu.


© Enn Tyugu 1

Algorithms of Artificial Intelligence

Lecture 7: Constraint satisfaction

E. Tyugu


Problem solving

The last part of the course is dedicated to problem solving techniques. It is divided in its turn into problem solving by

• constraint satisfaction
• program synthesis
• planning
• agents.

These parts overlap to some extent, but each of them has its own methods and approaches to problem solving. Constraint satisfaction and program synthesis are applicable first of all to computational problems; planning is a more general approach and the most difficult to represent algorithmically. Agents are complex programs with intelligent behavior, i.e. with rather universal problem-solving capabilities.


Constraint satisfaction problem

• Let us have sets D1, D2, ..., Dn. We shall call a relation a subset of a direct product Da × ... × Db, where a, ..., b ∈ {1, ..., n} are all different. Such a relation is the most general representation of a constraint.

• Each relation can be extended to the set D1 × D2 × ... × Dn so that the projection of the extended relation on the set Da × ... × Db will be the relation itself.

• A solution of the set of constraints given in the form of relations R1, ..., Rk is any element of the intersection S of the extended relations: S = R1′ ∩ R2′ ∩ ... ∩ Rk′. This is a tuple which contains an element from each set D1, ..., Dn and satisfies the relations R1, ..., Rk.

• Another problem on the same set of constraints is finding all solutions of the set of constraints, i.e. finding the intersection S itself. An important variation of the problem is finding a tuple of elements of only some sets Dp, ..., Dq which can be extended to a solution as defined above.
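These definitions can be sketched directly in Python (the function name and the toy constraints below are illustrative, not from the lecture): each relation is stored as a set of value tuples over the indices of the sets it binds, extension to the full product is implicit, and a solution is any tuple of the direct product whose projections lie in every relation.

```python
from itertools import product

def solutions(domains, constraints):
    """Enumerate all tuples of the direct product D1 x ... x Dn that
    satisfy every constraint.  A constraint is a pair (indices, rel)
    where rel is a set of value tuples over those indices, i.e. a
    relation as a subset of Da x ... x Db."""
    for tup in product(*domains):
        if all(tuple(tup[i] for i in idx) in rel
               for idx, rel in constraints):
            yield tup

# Example: D1 = D2 = D3 = {1, 2, 3}, with R12 = "x1 < x2"
# and R23 = "x2 < x3".
doms = [{1, 2, 3}] * 3
cons = [((0, 1), {(a, b) for a in doms[0] for b in doms[1] if a < b}),
        ((1, 2), {(b, c) for b in doms[1] for c in doms[2] if b < c})]
print(list(solutions(doms, cons)))   # prints [(1, 2, 3)]
```

Enumerating the full product is of course exponential; the consistency techniques later in the lecture exist precisely to prune it.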


Modification of the problem

We shall make extensive use of the structure of the set of relations, i.e. the knowledge telling us which sets actually are bound by one or another constraint. This knowledge can be represented by a graph called a constraint network. Before doing this, we also agree that we consider the problem of finding one tuple as a solution of the set of constraints. We introduce a variable for each set D1, ..., Dn which takes values from this set, and denote these variables by x1, ..., xn respectively. Now the problem is finding values of the variables x1, ..., xn which satisfy the constraints. Having a relation R as a subset of Da × ... × Db, we shall say that the relation R binds the variables xa, ..., xb, which have the domains Da, ..., Db.


Constraint network

The structure of a set of constraints given as relations can be represented by a bipartite graph whose set of nodes is constituted by the variables x1, ..., xn and the relations R1, ..., Rk. Edges of the graph correspond to the bindings, i.e. there is an edge between a relation R and a variable x iff the relation binds this variable. This graph is called a constraint network. Example:
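In Python, such a bipartite graph can be held as a plain adjacency structure (the three relations below are hypothetical, since the example figure is not reproduced in this transcript):

```python
# A constraint network as a bipartite graph: one node per variable,
# one node per relation, and an edge wherever a relation binds a
# variable.  The concrete relations are made up for illustration.
variables = ["x1", "x2", "x3"]
bindings = {"R1": ["x1", "x2"],   # R1 binds x1 and x2
            "R2": ["x2", "x3"],   # R2 binds x2 and x3
            "R3": ["x1", "x3"]}   # R3 binds x1 and x3

edges = [(r, v) for r, bound in bindings.items() for v in bound]
print(edges)   # six edges, one per (relation, bound variable) pair
```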


Example: three constraints


Solution of the constraints


Term constraints, rational trees

• In logic programming, constraints have to be solved for unification. In this case it is expedient to view terms as trees.

• Recursive terms give infinite trees which contain only a finite set of distinct subtrees – rational trees.

• Good constraint-solving algorithms are known only for finite trees.


Binary constraints

Let D1, ..., Dn be finite sets. We shall consider here binary relations and denote by Rij a relation which binds the variables xi and xj from the sets Di and Dj. This relation can be considered as a mapping from Di, which is called its domain, to Dj, which is called its range. In the case of binary relations, the graph representation of a constraint network can be simplified. A binary constraint graph is a graph with variables as its nodes and constraints as its arcs. It can be obtained from the general constraint network by dropping all constraint nodes and substituting one arc for the two edges of each constraint. The direction of the arc is chosen so that a constraint Rij is represented by the arc from xi to xj.


Arc-consistency

If a tuple of values (v1, ..., vn) is a solution of the constraint satisfaction problem, then for any arc (xi, xj) of the binary constraint graph the pair of values (vi, vj) must satisfy the relation Rij. Therefore, if in Di there is an element vi′ which doesn't satisfy Rij with any element of Dj, then the element vi′ can be excluded from consideration when a solution is searched for. And, vice versa, if an element vj′ of Dj has no match in Di that satisfies Rij, then the element vj′ can be excluded from consideration. We call such elements no-match elements. This gives the following consistency condition for arcs. An arc (xi, xj) of a binary constraint graph is consistent iff the sets Di and Dj don't have no-match elements. This is expressed also by the following formula:

(∀u ∈ Di ∃v ∈ Dj Rij(u,v)) & (∀v ∈ Dj ∃u ∈ Di Rij(u,v)).

An arc can be made consistent by removing from Di and Dj all no-match elements. (Strictly speaking, this changes the relations binding xi and xj, but it does not influence the solution of the consistency problem.) A binary constraint graph is arc-consistent if all its arcs are consistent.


Arc-consistency algorithm

The procedure semijoin(R, found) removes the no-match elements of the domain of the first bound variable of R and sets the variable found true if such elements were found. We denote by Dom(R) and Range(R) the domains of the first and second bound variable of R, i.e. the domain and range of R. Inv(R) is the inverse relation of R, i.e. R(x,y) = Inv(R)(y,x).

A.4.1:
semijoin(R, found) =
  for x ∈ Dom(R) do
    L: begin
      for y ∈ Range(R) do
        if R(x,y) then exit L
      od;
      Dom(R) := Dom(R) \ {x};
      found := true
    end
  od
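A Python sketch of semijoin, with the interface adapted (hypothetical): the pruned domain is returned together with the found flag instead of being updated in place.

```python
def semijoin(dom, rng, rel):
    """Remove every no-match element of `dom`: a value x with no
    y in `rng` such that (x, y) is in the relation `rel` (given as
    a set of pairs).  Returns (new domain, found flag), mirroring
    the `found` variable of A.4.1."""
    kept = {x for x in dom if any((x, y) in rel for y in rng)}
    return kept, kept != dom

# R = "x < y" over D1 = D2 = {1, 2, 3}: the value 3 has no match.
R = {(x, y) for x in {1, 2, 3} for y in {1, 2, 3} if x < y}
print(semijoin({1, 2, 3}, {1, 2, 3}, R))   # ({1, 2}, True)
```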


Arc-consistency algorithm

A.4.2:
ArcConsistent(G) =
  found := true;
  while found do
    found := false;
    for R ∈ G do
      semijoin(R, found);
      semijoin(Inv(R), found)
    od
  od
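A runnable Python sketch of A.4.2, with a representation of my own choosing: each binary relation Rij is a set of value pairs keyed by the variable pair (i, j), and the inverse relation is formed on the fly.

```python
def arc_consistent(domains, constraints):
    """Arc consistency in the style of A.4.2: sweep over all
    constraints, semijoining in both directions, until a full sweep
    removes nothing.  `domains` maps a variable to its value set;
    `constraints` maps a pair (i, j) to a set of allowed pairs."""
    found = True
    while found:
        found = False
        for (i, j), rel in constraints.items():
            for a, b, r in ((i, j, rel),
                            (j, i, {(y, x) for x, y in rel})):  # Inv(R)
                kept = {x for x in domains[a]
                        if any((x, y) in r for y in domains[b])}
                if kept != domains[a]:          # no-match elements found
                    domains[a] = kept
                    found = True
    return domains

doms = {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3}}
cons = {(1, 2): {(x, y) for x in range(1, 4) for y in range(1, 4) if x < y},
        (2, 3): {(x, y) for x in range(1, 4) for y in range(1, 4) if x < y}}
print(arc_consistent(doms, cons))   # x1 < x2 < x3 prunes to singletons
```

On this example the algorithm prunes every domain to a single value; in general arc consistency only narrows the domains and a search is still needed.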


Arc-consistency continued

The algorithm for arc consistency can be significantly improved by taking into account that only the domains of those constraints must be checked repeatedly whose ranges have changed. The following arc-consistency algorithm requires, for acyclic graphs, only time linear in the number of constraints.

A.4.3:
for R ∈ G do semijoin(R, found); semijoin(Inv(R), found) od;
open := {select_var(G)};
for y ∈ open do
  for R ∈ inc(y) do
    found := false;
    semijoin(R, found); semijoin(Inv(R), found);
    if found then open := open ∪ {incvar(R, y)} fi
  od;
  open := open \ {y}
od

select_var(G) – selects a node from the graph G
inc(y) – the set of relations which bind the node y
incvar(R, y) – the node bound by R to y.


Path consistency

Let us consider a network with arcs (xi,xj), (xi,xm) and (xm,xj). Even when all its arcs are consistent, there is no guarantee that the elements chosen from Di, Dj and Dm will together satisfy all three relations Rij, Rim and Rmj. These relations will be satisfied if the graph is arc consistent and for each pair of values u ∈ Di and v ∈ Dj satisfying Rij there exists a value w ∈ Dm such that Rim(u,w) & Rmj(w,v) is true. This can be written as

∀u ∈ Di ∀v ∈ Dj (Rij(u,v) ⊃ ∃w ∈ Dm (Rim(u,w) & Rmj(w,v))).

If this condition is satisfied for any path of length 2 in a network, the network is called path consistent, or 3-consistent (considering the number of consistent nodes). This means that three consecutive nodes chosen along any path are consistent with respect to the relations between these nodes on the path. Path consistency can be generalized to any number of nodes in the following way.
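The condition above can be checked directly for one path xi – xm – xj. A sketch (the relation "less than" is only an illustration): every pair allowed by Rij must be supported by some middle value w.

```python
def path_consistent(Di, Dj, Dm, Rij, Rim, Rmj):
    """Check the path-consistency condition for the path xi - xm - xj:
    every pair (u, v) allowed by Rij must have a supporting w in Dm
    with (u, w) in Rim and (w, v) in Rmj.  Relations are sets of pairs."""
    return all(any((u, w) in Rim and (w, v) in Rmj for w in Dm)
               for u in Di for v in Dj if (u, v) in Rij)

# With "<" everywhere, u < v is supported only through some w with
# u < w < v, so e.g. the pair (1, 2) has no support: not path consistent.
D = set(range(5))
lt = {(a, b) for a in D for b in D if a < b}
print(path_consistent(D, D, D, lt, lt, lt))   # False
```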


k-consistency

A network is k-consistent iff, given any instantiation of k-1 variables satisfying the condition that all the direct constraints among those variables are satisfied, it is possible to find an instantiation of any k-th variable such that the k values taken together satisfy all the constraints among the k variables.

A network is strongly k-consistent iff it is j-consistent for all j ≤ k.

Even if a network with n variables is strongly k-consistent for k < n, there is no guarantee that a solution exists.

Remark. For good performance of arc- and path-consistency algorithms, a special data structure is used to represent binary relations: every value bound by a relation is connected by pointers to all values which together with it satisfy the relation.


Functional constraint networks

A relation which binds variables u, ..., v, x, ..., y is called a functional dependency with input variables u, ..., v and output variables x, ..., y iff, for any pair t1, t2 of tuples of values of its bound variables, the equality of all pairs of values of the respective variables u, ..., v in these tuples implies the equality of the pairs of respective values for x, ..., y.

A functional constraint network is a constraint network which contains only functional dependencies as constraints.

On a functional constraint network it is easy to solve the following constraint satisfaction problem: given values of the variables x1, ..., xm, find values of the variables y1, ..., yn that satisfy the constraints together with the given values of x1, ..., xm.


Function propagation

known – set of variables already known to be computable
open – set of unprocessed variables from the set known
count(r) – counter of the constraint r showing how many input variables of the constraint are still unknown
countdown(r) – decreases the value of count(r) by one
initcount – initializes the values of the counters according to the numbers of input variables of the corresponding constraints
succ(e) – successors of the node e in the constraint network
plan – sequence of applied constraints, which is a plan for computations
takefrom(s) – produces an element of the set s and excludes it from s.


Function propagation

A.4.4:
plan := ();
known := in; open := in; initcount;
while not empty(open) do
  if out ⊆ known then success fi;
  x := takefrom(open);
  for r ∈ succ(x) do
    if count(r) > 1 & succ(r) ⊆ known
      then countdown(r)
    elif count(r) = 1 then
      plan := append(plan, r);
      open := open ∪ (succ(r) \ known);
      known := known ∪ succ(r);
      countdown(r)
    fi
  od
od;
failure
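A simplified Python sketch in the spirit of A.4.4 (the counting is done with an explicit map instead of initcount/countdown, the succ(r) ⊆ known guard is omitted, and the dependencies f, g of the example are hypothetical):

```python
from collections import deque

def propagate(inputs, outputs, deps):
    """Function propagation: `deps` maps a dependency name r to
    (input_vars, output_vars).  Returns the plan -- the sequence of
    dependencies to apply -- or None if the outputs cannot all be
    computed from the inputs."""
    uses = {}                                 # variable -> deps using it
    count = {r: len(ins) for r, (ins, _) in deps.items()}
    for r, (ins, _) in deps.items():
        for v in ins:
            uses.setdefault(v, []).append(r)
    known, open_, plan = set(inputs), deque(inputs), []
    while open_:
        x = open_.popleft()                   # x := takefrom(open)
        for r in uses.get(x, []):
            count[r] -= 1                     # countdown(r)
            if count[r] == 0:                 # all inputs of r known
                plan.append(r)
                for y in deps[r][1]:
                    if y not in known:
                        known.add(y)
                        open_.append(y)
    return plan if set(outputs) <= known else None

# Hypothetical network: f computes b from a; g computes c from a and b.
deps = {"f": (["a"], ["b"]), "g": (["a", "b"], ["c"])}
print(propagate(["a"], ["c"], deps))   # prints ['f', 'g']
```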


Equational problem solver (EPS)

It differs from A.4.4 in 1) the representation of a plan, 2) the applicability condition of a relation, and 3) the function succ(r), which gives all neighbors of r.

A.4.5:
plan := ();
known := in; open := in; initcount;
while not empty(open) do
  if out ⊆ known then success fi;
  x := takefrom(open);
  for r ∈ succ(x) do
    if count(r) > 1
      then countdown(r)
    elif count(r) = 1 then
      plan := append(plan, (r, succ(r) \ known));
      open := open ∪ (succ(r) \ known);
      known := known ∪ succ(r);
      countdown(r)
    fi
  od
od;
failure


Improved EPS

A.4.6:
plan := ();
known := in; open := in; initcount;
while not empty(open) do
  if out ⊆ known then success fi;
  x := takefrom(open);
  for r ∈ succ(x) do
    DERIVE(r, plan, open, known)
  od
od;
failure

Here we encapsulate the derivation step into a separate procedure DERIVE:


Improved EPS continued

A.4.7:
DERIVE(r, plan, open, known):
  if count(r) > 1
    then countdown(r)
  elif count(r) = 1 and input(r) ⊆ known then
    plan := append(plan, (r, succ(r) \ known));
    open := open ∪ (succ(r) \ known);
    known := known ∪ succ(r);
    countdown(r)
  fi

The procedure DERIVE can be adapted to various representations and generalizations of relations. Here we accept equations with input variables input(r) that must be known before the relation can be applied.


EPS with structural relations

A.4.8:
DERIVE1:
  if count(r) > 1 and not x = str(r)
    then countdown(r)
  elif (count(r) = 1 and input(r) ⊆ known) or x = str(r) then
    plan := append(plan, r);
    open := open ∪ (succ(r) \ known);
    known := known ∪ succ(r);
    countdown(r)
  fi

Here we accept structural relations r that bind a structured variable str(r) with its components.


Clustering of equations

Example: given a system of independent linear equations, find its partitioning into minimal independently solvable (i.e. strongly connected) blocks.


Clustering of equations

A sufficient structural condition for the solvability of a system of linearly independent equations is that a bijective (one-to-one) mapping can be established between the variables and the equations. After checking this condition, the clustering algorithm is as follows:

1. Orient the edges of the network so that each edge which binds a variable x with a constraint C according to the established mapping becomes an arc (x, C), and all other edges become arcs of the form (C', x').

2. Find the strongly connected parts P1, ..., Pk of the directed graph obtained. Each such part represents a minimal system of equations that must be solved simultaneously.

3. Order the parts P1, ..., Pk in the order opposite to the directions of the arcs which bind them. This gives the correct order for constraint propagation between P1, ..., Pk.
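The three steps can be sketched in Python, assuming the bijective mapping has already been found, e.g. by a bipartite matching algorithm (the helper names and the example system are mine). Strongly connected components are computed with Kosaraju's algorithm, which emits them in topological order of the arcs; reversing that list gives the propagation order of step 3.

```python
from collections import defaultdict

def clusters(matching, bindings):
    """`bindings` maps each equation to the variables it binds;
    `matching` maps each equation to its matched variable.  Returns
    the strongly connected blocks in propagation order: a block that
    supplies values comes before the blocks that use them."""
    succ, nodes = defaultdict(list), set()
    for eq, vars_ in bindings.items():
        nodes.add(eq); nodes.update(vars_)
        for v in vars_:
            if matching[eq] == v:
                succ[v].append(eq)        # step 1: arc (x, C)
            else:
                succ[eq].append(v)        # step 1: arc (C, x')
    pred = defaultdict(list)              # reversed graph for Kosaraju
    for u in list(succ):
        for v in succ[u]:
            pred[v].append(u)
    seen = set()
    def dfs(u, graph, out):
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs(w, graph, out)
        out.append(u)
    order = []                            # step 2: Kosaraju's SCC
    for u in nodes:
        if u not in seen:
            dfs(u, succ, order)
    seen = set()
    blocks = []
    for u in reversed(order):             # components emerge in the
        if u not in seen:                 # topological order of the arcs
            comp = []
            dfs(u, pred, comp)
            blocks.append(frozenset(comp))
    return blocks[::-1]                   # step 3: reverse = propagation order

# R1 and R2 form one block (solved simultaneously); R3 then uses x2.
bindings = {"R1": ["x1", "x2"], "R2": ["x1", "x2"], "R3": ["x2", "x3"]}
matching = {"R1": "x1", "R2": "x2", "R3": "x3"}
print(clusters(matching, bindings))   # the R1/R2 block, then {R3}, then {x3}
```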


Clustering of equations continued

Maximal mapping (figure): NB! One variable (x5) is left out of the mapping – it should be given, or the equation R2 cannot be taken into account.


Clustering of equations continued

Resulting constraint network (numbers of variables computable by clusters are shown in the cluster nodes):


Higher-order functional constraints

The constraint c is a higher-order functional constraint, because it has a functional variable g as an input.


Classes of constraint satisfaction problems

Constraint satisfaction problems divide into problems over finite domains and problems over infinite domains:

• Finite domains: binary constraints (consistency propagation), n-ary constraints, subdefinite models, constraint hierarchies.

• Infinite domains: inequalities, interval constraints, functional constraints (equational problem solvers), higher-order functional constraints, rational trees.


Bibliography

• R. Bagnara, R. Gori, P. Hill, E. Zaffanella (2001). Finite-tree analysis for constraint logic-based languages. In: Proc. 2001 Joint Conference on Declarative Programming. http://host.di.uevora.pt/~agp01/accepted.html

• A. Colmerauer (1984). Solving equations and inequations on finite and infinite trees. In: Proceedings of the Conference on Fifth Generation Computer Systems, Tokyo. faculty.plattsburgh.edu/jan.plaza/research/lp/papers/06A-equality.ps