A Multiagent Evolutionary Algorithm for Constraint Satisfaction Problems

Weicai Zhong, Jing Liu, and Licheng Jiao, Senior Member, IEEE

SMCB-E-03172004-0141.R2 1

Abstract—With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and non-permutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs on non-permutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, n-queen problems, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs on permutation CSPs. The scalability of MAEA-CSPs along n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance when n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.

Index Terms—Constraint satisfaction problems, evolutionary algorithms, graph coloring problems, job-shop scheduling problems, multiagent systems, n-queen problems.

I. INTRODUCTION

A large number of problems coming from artificial intelligence as well as other areas of computer science and

engineering can be stated as Constraint Satisfaction Problems (CSPs) [1]-[3]. A CSP has three components: variables, values, and constraints. The purpose is to find an assignment of values to variables such that all constraints are satisfied. Many methods have been proposed to deal with CSPs, where backtracking and generate-and-test methods are well-known traditional approaches [1]. Although the backtracking method eliminates a subspace from the Cartesian product of all the variable domains, the computational complexity for solving most nontrivial problems remains exponential. Many studies have investigated ways of improving the backtracking method, such as consistency techniques [4], [5] and the dependency-directed backtracking scheme [6], [7]. Nevertheless, it still cannot solve nontrivial large-scale CSPs in a reasonable runtime.

Manuscript received March 17, 2004. This work is supported by the National Natural Science Foundation of China under Grants 60133010 and 60372045, and the “863” project under Grant 2002AA135080. The authors are with the Institute of Intelligent Information Processing, Xidian University, Xi’an 710071, China. Jing Liu is the corresponding author (phone: 86-029-88202661; fax: 86-029-88201023; e-mail: [email protected]).

For the generate-and-test method, one of the most popular ideas is to employ a local search technique [8]-[10]. It generates an initial configuration and then incrementally uses “repair” or “hill climbing” to modify the inconsistent configuration, moving to a neighborhood configuration with the best or a better evaluation value among the neighbors, until a solution is found. Related to the idea of local search, other heuristics have also been developed, such as the min-conflicts heuristic [11], [12] and GSAT [13]. In addition, local search techniques are often combined with Evolutionary Algorithms (EAs), another popular kind of method for solving CSPs.

Historically, CSPs have been approached from many angles by EAs. In this field, the method used to treat the constraints is very important. The constraints can be handled directly, indirectly, or by mixed methods [14]. Among the available methods, some put the emphasis on the use of heuristics, such as ARC-GA [15], [16], COE-H GA [17], [18], Glass-Box [19], and H-GA [20], [21], whereas others handle the constraints by fitness function adaptation, such as CCS [22], [23], MID [24]-[26], and SAW [27], [28]. These methods deal with CSPs from different angles and have boosted the development of this field. Reference [29] made an extensive performance comparison of all these EAs on a systematically generated test suite.

Agent-based computation has been studied for several years in the field of distributed artificial intelligence [30], [31] and has been widely used in other branches of computer science [32]-[35]. Problem solving is an area with which many multiagent-based applications are concerned, including distributed solutions to problems, solving distributed problems, and distributed techniques for problem solving [30], [31]. Reference [33] introduced an application of distributed techniques for solving CSPs, solving 7000-queen problems with an energy-based multiagent model. In [34], multiagent systems and genetic algorithms (GAs) are integrated to solve global numerical optimization problems, and the method can find high quality solutions at a low



computational cost even for functions with 10,000 dimensions. All these results show that both agents and EAs have a high potential in solving complex and ill-defined problems.

In this paper, with the intrinsic properties of CSPs in mind, we divide CSPs into two types, namely, permutation CSPs and non-permutation CSPs. According to the different characteristics of the two types of problems, several behaviors are designed for agents. Furthermore, all such behaviors are controlled by means of evolution, so that a new algorithm, the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs), results. In MAEA-CSPs, all agents live in a latticelike environment. Making use of the designed behaviors, MAEA-CSPs realizes the ability of agents to sense and act on the environment in which they live. During the process of interacting with the environment and the other agents, each agent increases its energy as much as possible, so that MAEA-CSPs can find solutions. Experimental results show that MAEA-CSPs provides good performance.

From another viewpoint, since MAEA-CSPs uses a lattice-based population, it can also be considered a kind of fine-grained parallel GA. Fine-grained parallel GAs are a kind of parallel implementation of GAs [36]-[45], and have been applied to many problems, such as combinatorial optimization [46], function optimization [47], [48], and scheduling problems [49]. But since they are just parallel techniques, they often present the same problem of premature convergence as traditional GAs [50]. MAEA-CSPs makes use of the ability of agents to sense and act on the environment, and puts emphasis on designing behaviors for agents. The experimental results show that MAEA-CSPs achieves good performance for various CSPs, that is, binary CSPs, graph coloring problems (GCPs), n-queen problems, and job-shop scheduling problems (JSPs). Therefore, MAEA-CSPs is a successful development of fine-grained parallel GAs.

The rest of this paper is organized as follows: Section II describes the agents designed for CSPs. Section III describes the implementation of MAEA-CSPs, and analyzes the space complexity and convergence. Section IV uses binary CSPs and GCPs to investigate the performance of MAEA-CSPs on non-permutation CSPs, whereas Section V uses n-queen problems and JSPs to investigate the performance on permutation CSPs. Finally, conclusions are presented in Section VI.

II. CONSTRAINT SATISFACTION AGENTS

According to [31] and [33], an agent is a physical or virtual entity essentially having the following properties: (a) it is able to live and act in the environment; (b) it is able to sense the local environment; (c) it is driven by certain purposes; and (d) it has some reactive behaviors. Multiagent systems are computational systems in which several agents interact or work together in order to achieve purposes. As can be seen, the meaning of an agent is very comprehensive, and what an agent represents differs from problem to problem. In general, four elements should be defined when multiagent systems are used to solve problems. The first is the meaning and the purpose of each agent. The second is the environment in which all agents live. Since each agent has only local perceptivity, the third is the definition of the local environment. The last is the behaviors that each agent can take to achieve its purpose.

A. Constraint satisfaction problems

A CSP has three components:

1) A finite set of variables, x={x1, x2, …, xn};

2) A domain set D, containing a finite and discrete domain for each variable: D={D1, D2, …, Dn}, xi∈Di={d1, d2, …, d|Di|}, i=1, 2, …, n (1), where |·| stands for the number of elements in the set;

3) A constraint set, C={C1(x1), C2(x2), …, Cm(xm)}, where xi, i=1, 2, …, m, is a subset of x, and Ci(xi) denotes the values that the variables in xi cannot take simultaneously. For example, given a constraint C({x1, x2})=⟨d1, d2⟩, it means that when x1=d1, d2 cannot be assigned to x2, and when x2=d2, d1 cannot be assigned to x1.

Thus, the search space of a CSP, S, is a Cartesian product of the n sets of finite domains, namely, S=D1×D2×…×Dn. A solution for a CSP, s=⟨s1, s2, …, sn⟩∈ S, is an assignment to all variables such that the values satisfy all constraints. Here is a simple example:

Example 1: A CSP is described as follows:

x = {x1, x2, x3}, D = {D1, D2, D3}, Di = {1, 2, 3}, i = 1, 2, 3,

C = { C1({x1, x2})=⟨1, 3⟩,  C2({x1, x2})=⟨3, 3⟩,
      C3({x1, x3})=⟨2, 1⟩,  C4({x1, x3})=⟨2, 3⟩,
      C5({x1, x3})=⟨3, 1⟩,  C6({x1, x3})=⟨3, 3⟩,
      C7({x2, x3})=⟨1, 1⟩,  C8({x2, x3})=⟨1, 2⟩,
      C9({x2, x3})=⟨1, 3⟩,  C10({x2, x3})=⟨2, 1⟩,
      C11({x2, x3})=⟨3, 1⟩ } (2)

All solutions for this CSP are ⟨1, 2, 2⟩, ⟨1, 2, 3⟩, ⟨2, 2, 2⟩, ⟨2, 3, 2⟩, ⟨3, 2, 2⟩. �

One may be interested in finding one solution, all solutions, or in proving that no solution exists. In the last case, one may want to find a partial solution optimizing certain criteria, for example, one satisfying as many constraints as possible. We restrict our discussion to finding one solution.
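Since Example 1 is tiny, its listed solution set can be verified by exhaustive enumeration. A minimal sketch (the dictionary layout and function names are ours, not from the paper):

```python
from itertools import product

# Constraints of Example 1 as forbidden value pairs, keyed by 0-based
# variable-index pairs: (0, 1) ~ (x1, x2), (0, 2) ~ (x1, x3), (1, 2) ~ (x2, x3).
FORBIDDEN = {
    (0, 1): {(1, 3), (3, 3)},
    (0, 2): {(2, 1), (2, 3), (3, 1), (3, 3)},
    (1, 2): {(1, 1), (1, 2), (1, 3), (2, 1), (3, 1)},
}

def satisfies(assignment):
    """True if the assignment violates none of the constraints above."""
    return all((assignment[i], assignment[j]) not in pairs
               for (i, j), pairs in FORBIDDEN.items())

solutions = [s for s in product((1, 2, 3), repeat=3) if satisfies(s)]
print(solutions)
# → [(1, 2, 2), (1, 2, 3), (2, 2, 2), (2, 3, 2), (3, 2, 2)]
```

The enumeration reproduces exactly the five solutions given in the example.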

B. Definition of constraint satisfaction agents

An agent used to solve CSPs is represented as a constraint satisfaction agent (CSAgent), and is defined as follows:

Definition 1: A constraint satisfaction agent, a, represents an element in the search space, S, with energy equal to

∀a∈S, Energy(a) = −Σi=1..m χ(a, Ci) (3)

where χ(a, Ci)=1 if a violates Ci, and χ(a, Ci)=0 otherwise. The purpose of each CSAgent is to maximize its energy by the behaviors it can take.


As can be seen, the energy of each CSAgent is the negative value of the number of violated constraints. Therefore, the higher the energy is, the better the CSAgent is. When the energy of a CSAgent increases to 0, the CSAgent becomes a solution.
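In code, the energy of (3) is just the negated count of violated constraints. A sketch, with constraints modeled as violation predicates (this representation is ours, not the paper's):

```python
def energy(assignment, constraints):
    """Energy of a CSAgent per (3): minus the number of violated constraints.

    `constraints` is a sequence of predicates; each returns True when the
    given assignment violates it.
    """
    return -sum(1 for violated in constraints if violated(assignment))

# Example: two constraints on a 2-variable assignment ⟨v1, v2⟩.
constraints = [
    lambda a: a == (1, 3),   # forbids ⟨1, 3⟩
    lambda a: a[0] == a[1],  # forbids equal values
]
print(energy((1, 3), constraints))  # → -1
print(energy((2, 3), constraints))  # → 0, i.e. a solution
```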

The domain for each variable is finite and discrete, so the elements can be numbered by natural numbers; that is to say, the domain D={d1, d2, …, d|D|} is equivalent to D={1, 2, …, |D|}. When all domains are transformed to sets of natural numbers, the solutions of some CSPs take on specific characteristics. On this basis, CSPs can be divided into two types:

Definition 2: Let S be the set of solutions for a CSP and all domain sets be represented by the sets of natural numbers. If ∀ s∈ S is a permutation of 1, 2, …, n, the CSP is defined as a permutation CSP; otherwise it is defined as a non-permutation CSP.

For a non-permutation CSP, the search space is still S. For a permutation CSP, the search space can be reduced to the set of all permutations of 1, 2, …, n, that is,

S^P = { ⟨P1, P2, …, Pn⟩ | Pi∈{1, 2, …, n}, 1≤i≤n, and ∀k, l∈{1, 2, …, n}, k≠l ⇒ Pk≠Pl }, |S^P| = n! (4)

When one uses EAs to solve problems, the search space must

be encoded such that individuals can be represented uniformly. For example, GAs usually encode the search space by binary coding; thus, each individual is a bit sequence. For permutation CSPs, it is natural to represent each individual as a permutation of 1, 2, …, n. The search space is S^P, with size n!.

For non-permutation CSPs, the most straightforward encoding method is to represent each individual as an element of S, which is what most existing algorithms use. Under this method, the size of the search space is |D1|×|D2|×…×|Dn|. Meanwhile, some methods use a permutation coding with a corresponding decoder. For example, in [27], [28], each individual is represented by a permutation of the variables, and the permutation is transformed to a partial instantiation by a simple decoder that considers the variables in the order they occur in the permutation and assigns the first possible domain value to each variable. If no value is possible without introducing a constraint violation, the variable is left uninstantiated. In what follows, this decoder is labeled Decoder1, and the set of all permutations of the variables is labeled Sx^P. In fact,

∀⟨xP1, xP2, …, xPn⟩∈Sx^P, ⟨P1, P2, …, Pn⟩∈S^P (5)

Therefore, Sx^P is equivalent to S^P, and the size of the search space is also n!. Decoder1 uses a greedy algorithm, so a serious problem exists: for some CSPs, Decoder1 cannot decode any permutation to a solution. As a result, algorithms based on Decoder1 may not find solutions at all. Here is a simple example:

Example 2: The CSP is given in Example 1, with the following Sx^P:

Sx^P = { ⟨x1, x2, x3⟩, ⟨x1, x3, x2⟩, ⟨x2, x1, x3⟩, ⟨x2, x3, x1⟩, ⟨x3, x1, x2⟩, ⟨x3, x2, x1⟩ } (6)

According to Decoder1, each element in Sx^P can be transformed to a partial instantiation of the variables, namely,

⟨x1, x2, x3⟩ → ⟨1, 1, *⟩,  ⟨x1, x3, x2⟩ → ⟨1, 1, *⟩,
⟨x2, x1, x3⟩ → ⟨1, 1, *⟩,  ⟨x2, x3, x1⟩ → ⟨1, *, 1⟩,
⟨x3, x1, x2⟩ → ⟨1, 1, *⟩,  ⟨x3, x2, x1⟩ → ⟨1, *, 1⟩ (7)

where “*” represents that the corresponding variable is left uninstantiated, and the values are listed in the order the variables occur in the permutation. As can be seen, no element in Sx^P can be transformed to a solution for the CSP by Decoder1. □
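Decoder1's failure on this instance is easy to reproduce. A sketch of the decoder over our own 0-based representation of Example 1 (the `FORBIDDEN` table and helper names are illustrative, not from the paper):

```python
from itertools import permutations

# Example 1, 0-based: variable-index pairs -> forbidden value pairs.
FORBIDDEN = {
    (0, 1): {(1, 3), (3, 3)},
    (0, 2): {(2, 1), (2, 3), (3, 1), (3, 3)},
    (1, 2): {(1, 1), (1, 2), (1, 3), (2, 1), (3, 1)},
}

def conflicts(i, v, assigned):
    """True if assigning value v to variable i conflicts with any
    previously assigned variable."""
    for j, w in assigned.items():
        key, pair = ((i, j), (v, w)) if i < j else ((j, i), (w, v))
        if pair in FORBIDDEN.get(key, set()):
            return True
    return False

def decoder1(order, domains):
    """Decoder1: scan variables in permutation order and give each the first
    compatible value; leave it uninstantiated (None) if no value fits."""
    assigned = {}
    for i in order:
        assigned[i] = next((v for v in domains[i]
                            if not conflicts(i, v, assigned)), None)
    return assigned

# No permutation of the three variables decodes to a complete solution:
full = [p for p in permutations(range(3))
        if None not in decoder1(p, [(1, 2, 3)] * 3).values()]
print(full)  # → []
```

Running it confirms the partial instantiations of (7): every permutation leaves one variable uninstantiated.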

For CSAgents, we want to design an encoding method that not only covers solutions for any CSP, but also maintains a manageable search space. Therefore, on the basis of Decoder1, we propose minimum conflict encoding (MCE). In MCE, each CSAgent is represented as not only an element of Sx^P, but also an element of S, so that some behaviors can be designed to deal with the values of the variables directly. As a result, each CSAgent must record some information, and is represented by the following structure:

CSAgent = Record
    P:  a permutation; P∈S^P for permutation CSPs, and Px∈Sx^P for non-permutation CSPs;
    V:  V∈S, the value for each variable; for non-permutation CSPs only;
    E:  the energy of the CSAgent; E=Energy(P) for permutation CSPs, and E=Energy(V) for non-permutation CSPs;
    SL: the flag for the self-learning behavior, which will be defined later; if SL is True, the self-learning behavior can be performed on the CSAgent, otherwise it cannot;
End.

In the following text, CSAgent(·) is used to represent the corresponding component in the above structure. By MCE, each CSAgent includes both a permutation and an assignment to all variables. In fact, the assignment is what we need, so minimum conflict decoding (MCD) is designed to correspond to MCE, transforming P into V. The main idea of MCD is to assign to each variable the value violating the minimum number of constraints. The variables are considered in the order they occur in the permutation, and only the constraints between the variable and those previously assigned values are taken into account. After MCD is performed, no variable is left uninstantiated. The details of MCD are shown in Algorithm 1.

Algorithm 1 Minimum conflict decoding
Input:  CSAgent: the CSAgent to be decoded;
        Pos: the position to start decoding;
Output: CSAgent(V);

Let CSAgent(P)=⟨xP1, xP2, …, xPn⟩, CSAgent(V)=⟨v1, v2, …, vn⟩, and

Conflicts(vi) = Σj=1..m χ(vi, Cj) (8)

where χ(vi, Cj)=1 if vi violates Cj, and χ(vi, Cj)=0 otherwise. Conflicts(vi) only considers the variables that have been assigned values. MinC and MinV are the current minimum number of conflicts and the corresponding value.

begin
  if (Pos=1) then
  begin
    vP1 := 1;
    i := 2;
  end
  else i := Pos;
  repeat
    vPi := 1;
    MinC := Conflicts(vPi);
    MinV := 1;
    j := 2;
    repeat
      vPi := j;
      if (Conflicts(vPi)<MinC) then
      begin
        MinC := Conflicts(vPi);
        MinV := j;
      end;
      j := j+1;
    until (j>|DPi|);
    vPi := MinV;
    i := i+1;
  until (i>n);
end.

Two kinds of behaviors are designed for CSAgents, namely, behaviors performed on P (P-behaviors) and behaviors performed on V (V-behaviors). When V-behaviors are performed on a CSAgent, the energy can be updated directly. But when P-behaviors are performed on a CSAgent, it must be decoded by MCD before the energy is updated. If MCD started decoding from the first variable in the permutation for every CSAgent, the information generated by V-behaviors would be lost. Therefore, we set a parameter for MCD, Pos, which is the first position changed by P-behaviors. Thus, the values of the variables before Pos are left untouched, so that some information generated by V-behaviors can be preserved. For a new CSAgent, Pos is set to 1. MCD(CSAgent, Pos) denotes that MCD is performed on CSAgent starting from position Pos.

In fact, Algorithm 1 is just a general implementation of the idea of MCD, and can be applied to all kinds of CSPs. However, for specific CSPs, the idea of MCD can be combined with knowledge of the problem, yielding a more efficient implementation. For example, the degree of each vertex can be used when dealing with GCPs, and the details will be shown in Section IV.B.1.
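As a general-purpose illustration, Algorithm 1 can be phrased as follows in 0-based indexing; `conflict_count` stands in for Conflicts(·) and the demo's `FORBIDDEN` table encodes Example 1 (both names are ours):

```python
def mcd(perm, values, domains, conflict_count, pos=0):
    """Minimum conflict decoding (Algorithm 1), a sketch.

    `perm` is the variable permutation, `values` the assignment list
    (mutated in place from position `pos` of the permutation onward), and
    `conflict_count(var, val, values, assigned)` returns the number of
    constraints that `var = val` violates against the variables in `assigned`.
    """
    assigned = set(perm[:pos])
    if pos == 0:
        values[perm[0]] = domains[perm[0]][0]
        assigned = {perm[0]}
        pos = 1
    for var in perm[pos:]:
        # min() keeps the first value reaching the minimum, matching the
        # strict '<' test in Algorithm 1.
        values[var] = min(domains[var],
                          key=lambda v: conflict_count(var, v, values, assigned))
        assigned.add(var)
    return values

# Demo on Example 1 (0-based; forbidden value pairs per variable pair):
FORBIDDEN = {
    (0, 1): {(1, 3), (3, 3)},
    (0, 2): {(2, 1), (2, 3), (3, 1), (3, 3)},
    (1, 2): {(1, 1), (1, 2), (1, 3), (2, 1), (3, 1)},
}

def conflict_count(var, val, values, assigned):
    count = 0
    for j in assigned:
        key, pair = (((var, j), (val, values[j])) if var < j
                     else ((j, var), (values[j], val)))
        if pair in FORBIDDEN.get(key, set()):
            count += 1
    return count

result = mcd([0, 1, 2], [None] * 3, [(1, 2, 3)] * 3, conflict_count)
print(result)  # → [1, 1, 1]
```

Unlike Decoder1, every variable receives a value; here x3 keeps one conflict, matching MCD's "least bad value" policy rather than leaving the variable uninstantiated.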

C. Environment of constraint satisfaction agents

In order to realize the local perceptivity of agents, the environment is organized as a latticelike structure, similar to our previous work in [34].

Definition 3: All CSAgents live in a latticelike environment, L, which is called an agent lattice. The size of L is Lsize×Lsize, where Lsize is an integer. Each CSAgent is fixed on a lattice-point and can only interact with the neighbors. Suppose that the CSAgent located at (i, j) is represented as Li,j, i, j=1,2,…,Lsize, then the neighbors of Li,j, Neighborsi,j, are defined as follows:

Neighborsi,j = { Li′,j, Li,j′, Li″,j, Li,j″ } (9)

where
  i′ = i−1 if i≠1, and i′ = Lsize if i=1;
  j′ = j−1 if j≠1, and j′ = Lsize if j=1;
  i″ = i+1 if i≠Lsize, and i″ = 1 if i=Lsize;
  j″ = j+1 if j≠Lsize, and j″ = 1 if j=Lsize.

The agent lattice can be represented as the one in Fig.1. Each circle represents a CSAgent, the data represent the position in the lattice, and two CSAgents can interact with each other if and only if there is a line connecting them.
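Definition 3 wraps the lattice toroidally at the borders. A sketch of the neighborhood of (9), keeping the paper's 1-based indices (the function name is ours):

```python
def neighbors(i, j, l_size):
    """Neighbors of the CSAgent at (i, j) on an l_size x l_size lattice,
    following (9): up/left/down/right with wraparound at the borders."""
    i1 = i - 1 if i != 1 else l_size      # i'
    j1 = j - 1 if j != 1 else l_size      # j'
    i2 = i + 1 if i != l_size else 1      # i''
    j2 = j + 1 if j != l_size else 1      # j''
    return [(i1, j), (i, j1), (i2, j), (i, j2)]

print(neighbors(1, 1, 4))  # → [(4, 1), (1, 4), (2, 1), (1, 2)]
```

Each CSAgent thus always has exactly four neighbors, even on the lattice border.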

In traditional EAs, those individuals used to generate offspring are usually selected from all individuals according to their fitness. Therefore, the global fitness distribution of a population must be determined. But in nature, a global selection does not exist, and the global fitness distribution cannot be determined either. In fact, the real natural selection only occurs in a local environment, and each individual can only interact with those around it. That is, in each phase, the natural evolution is just a kind of local phenomenon. The information can be shared globally only after a process of diffusion.

In the aforementioned agent lattice, to achieve their purposes, CSAgents will compete with others so that they can gain more resources. Since each CSAgent can only sense the local environment, the behaviors can only take place between the CSAgent and the neighbors. There is no global selection at all, so the global fitness distribution is not required. A CSAgent interacts with the neighbors so that information is transferred to them. In such a manner, the information is diffused to the whole agent lattice. As can be seen, the model of the agent lattice is closer to the real evolutionary mechanism in nature than the model of the population in traditional EAs.


D. Behaviors of constraint satisfaction agents

The purpose of an algorithm for solving CSPs is to find solutions at as low a computational cost as possible. So the computational cost can be considered the resources of the environment in which all CSAgents live. Since the resources are limited and the behaviors of the CSAgents are driven by their purposes, a CSAgent will compete with others to gain more resources. On this basis, three behaviors are designed for CSAgents to realize their purposes, that is, the competitive behavior, the self-learning behavior, and the mutation behavior. The former two belong to P-behaviors, whereas the last belongs to V-behaviors.

Competitive behavior: In this behavior, the energy of a CSAgent is compared with those of the neighbors. The CSAgent can survive if the energy is maximum; otherwise the CSAgent must die, and the child of the one with maximum energy among the neighbors will take up the lattice-point.

Suppose that the competitive behavior is performed on the CSAgent located at (i, j), Li,j, and that Maxi,j is the CSAgent with maximum energy among the neighbors of Li,j, namely, Maxi,j∈Neighborsi,j and ∀CSAgent∈Neighborsi,j, CSAgent(E)≤Maxi,j(E). If Li,j(E)≤Maxi,j(E), then Maxi,j generates a child CSAgent, Childi,j, to replace Li,j, as shown in Algorithm 2; otherwise Li,j is left untouched.

Algorithm 2 Competitive behavior
Input:  Maxi,j: Maxi,j(P)=⟨m1, m2, …, mn⟩ for permutation CSPs, and Maxi,j(P)=⟨xm1, xm2, …, xmn⟩ for non-permutation CSPs;
        pc: a predefined parameter, a real number between 0 and 1;
Output: Childi,j: Childi,j(P)=⟨c1, c2, …, cn⟩ for permutation CSPs, and Childi,j(P)=⟨xc1, xc2, …, xcn⟩ for non-permutation CSPs;

Swap(x, y) exchanges the values of x and y. U(0, 1) is a uniform random number between 0 and 1. Random(n, i) is a random integer among 1, 2, …, n that is not equal to i. Min(i, j) is the smaller of i and j.

begin
  Childi,j(P) := Maxi,j(P);
  i := 1;
  Pos := n+1;
  repeat
    if (U(0, 1)<pc) then
    begin
      l := Random(n, i);
      if (non-permutation CSPs) then
      begin
        Swap(xci, xcl);
        if (Min(l, i)<Pos) then Pos := Min(l, i);
      end
      else Swap(ci, cl);
    end;
    i := i+1;
  until (i>n);
  if (non-permutation CSPs) then MCD(Childi,j, Pos);
  Childi,j(SL) := True;
end.

In fact, Childi,j is generated by exchanging a small part of Maxi,j, and is equivalent to performing a local search around Maxi,j. The purpose of the competitive behavior is to eliminate the CSAgents with low energy, and give more chances to the potential CSAgents.
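For the permutation-CSP branch of Algorithm 2, child generation reduces to a few probabilistic swaps on a copy of the winner. A sketch (the function name and the `rng` parameter are ours):

```python
import random

def competitive_child(parent_perm, pc, rng=None):
    """Permutation-CSP branch of Algorithm 2: copy the winner's permutation
    and, for each position i, with probability pc swap it with a random
    other position — a small local perturbation around the winner."""
    rng = rng or random.Random()
    child = list(parent_perm)
    n = len(child)
    for i in range(n):
        if rng.random() < pc:
            l = rng.choice([k for k in range(n) if k != i])  # Random(n, i)
            child[i], child[l] = child[l], child[i]
    return child

child = competitive_child([1, 2, 3, 4, 5], pc=0.2, rng=random.Random(0))
print(sorted(child) == [1, 2, 3, 4, 5])  # swaps preserve the permutation
```

Because the operation is a sequence of swaps, the child is always a valid permutation; for non-permutation CSPs, Algorithm 2 additionally tracks Pos and re-runs MCD from there.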

Self-learning behavior: As is well known, integrating local search with EAs can improve performance. Moreover, agents have knowledge relating to the problems. Therefore, inspired by the min-conflicts heuristic [11], [12] and taking into account the encoding methods of the CSAgents, we design the self-learning behavior. Suppose that this behavior is performed on Li,j. The details are shown in Algorithm 3.

Algorithm 3 Self-learning behavior
Input:  Li,j: Li,j(P)=⟨m1, m2, …, mn⟩ for permutation CSPs, and Li,j(P)=⟨xm1, xm2, …, xmn⟩ for non-permutation CSPs; Li,j(V)=⟨v1, v2, …, vn⟩;
Output: Li,j;

begin
  repeat
    Repeat := False;
    k := 1;
    Iteration := 1;
    while (k≤n) do
    begin
      if (Conflicts(vmk)≠0) then
      begin
        Energyold := Li,j(E);
        l := Random(n, k);
        if (non-permutation CSPs) then
        begin
          Swap(xmk, xml);
          MCD(Li,j, Min(k, l));
        end
        else Swap(mk, ml);
        Energynew := Li,j(E);
        if (Energynew≤Energyold) then
        begin
          Swap(mk, ml) (or Swap(xmk, xml) for non-permutation CSPs);
          if (non-permutation CSPs) then MCD(Li,j, Min(k, l));
        end
        else Repeat := True;
        if (Iteration<n-1) then Iteration := Iteration+1
        else
        begin
          Iteration := 1;
          k := k+1;
        end;
      end
      else k := k+1;
    end;
  until (Repeat=False);
  Li,j(SL) := False;
end.

The purpose of Algorithm 3 is to find swaps for the components violating constraints in the permutation, such that the energy of Li,j increases after each swap is performed. For a component, the algorithm iteratively tries swaps until no constraint is violated or the predefined iteration count, Iteration=(n-1), is reached. Then, the algorithm goes on to deal with the next component. Iteration prevents the algorithm from repeating infinitely. After the self-learning behavior has been performed on a CSAgent, the probability that the energy of the CSAgent can be increased by this behavior again is very low, so Li,j(SL) is set to False in the last step.

Although the self-learning behavior is inspired by the min-conflicts heuristic [11], [12], the two are different in essence. First, the min-conflicts heuristic operates directly on the values of the variables, whereas the self-learning behavior operates on the permutation, so its main operation is swapping. Second, the min-conflicts heuristic seeks a value that minimizes the number of conflicts, whereas the self-learning behavior simply improves the energy of the CSAgent as much as possible. Since the self-learning behavior operates on permutations, an example of its execution on a permutation CSP is given in Section V.A.1.
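A minimal Python sketch of the permutation case of Algorithm 3, assuming a Random(n, k)-style uniform partner choice and caller-supplied energy and conflict counters (the function names here are ours, not the paper's):

```python
import random

def self_learning(perm, energy, conflicts, seed=0):
    """Swap-based repair sketch of Algorithm 3 (permutation CSP case).

    perm      -- list encoding the candidate solution (modified in place)
    conflicts -- callable (perm, k) -> violations involving component k
    energy    -- callable perm -> negated number of violated constraints
    """
    rng = random.Random(seed)
    n = len(perm)
    repeat = True
    while repeat:                        # one more pass while progress is made
        repeat = False
        k, iteration = 0, 1
        while k < n:
            if conflicts(perm, k) == 0:
                k, iteration = k + 1, 1              # component is conflict-free
                continue
            e_old = energy(perm)
            l = rng.randrange(n)                     # Random(n, k): partner l != k
            while l == k:
                l = rng.randrange(n)
            perm[k], perm[l] = perm[l], perm[k]      # tentative swap
            if energy(perm) <= e_old:
                perm[k], perm[l] = perm[l], perm[k]  # no strict improvement: undo
            else:
                repeat = True                        # improving swap kept
            if iteration < n - 1:
                iteration += 1                       # keep trying this component
            else:
                k, iteration = k + 1, 1              # budget n-1 exhausted
    return perm
```

Since only strictly improving swaps are kept and the energy is bounded above by zero, the outer loop must terminate.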

Mutation behavior: This behavior is similar to the mutation operator used in traditional GAs. Its function is to assist the behaviors above: it enlarges the search area so as to compensate for the limitation of the decoding method. Suppose that the mutation behavior is performed on Li,j, and Li,j(V)=⟨v1, v2, …, vn⟩. Then the following operation is performed on Li,j:

  if Uk(0, 1) < pm then vk ← Random(|Dk|),  k=1, 2, …, n,    (10)

where Uk(0, 1) is a new uniform random number drawn for each k, pm is a predefined parameter in [0, 1], and Random(|Dk|) is a random integer in {1, 2, …, |Dk|}.
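Equation (10) amounts to independent per-variable resampling; a minimal sketch (the helper name is ours, values are 1-based as in the paper):

```python
import random

def mutate(values, domain_sizes, pm=0.05, rng=random):
    """Per-gene resampling sketch of the mutation behavior (Eq. 10).

    values       -- current assignment v1..vn (1-based values)
    domain_sizes -- |D_k| for each variable
    pm           -- mutation probability (0.05 is the paper's default)
    """
    return [rng.randint(1, d) if rng.random() < pm else v
            for v, d in zip(values, domain_sizes)]
```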

III. MULTIAGENT EVOLUTIONARY ALGORITHM FOR CONSTRAINT SATISFACTION PROBLEMS

A. Implementation of MAEA-CSPs

To solve CSPs, all CSAgents must adopt the three behaviors in an orderly way. Here these behaviors are controlled by means of evolution so that the agent lattice can evolve generation by generation. At each generation, the competitive behavior is performed on each CSAgent first. As a result, the CSAgents with low energy are cleaned out of the agent lattice, leaving more room for the CSAgents with higher energy. Then, the self-learning behavior and the mutation behavior are performed according to the type of the problem and the state of the CSAgent. To reduce the computational cost, these two behaviors are performed only on the best CSAgent in the current agent lattice. The process is repeated until a CSAgent with zero energy is found or the maximum computational cost is reached.

Algorithm 4 Multiagent evolutionary algorithm for constraint satisfaction problems
Input: EvaluationMax: the maximum number of evaluations for the energy;
       Lsize: the scale of the agent lattice;
       pc: the parameter used in the competitive behavior;
       pm: the parameter used in the mutation behavior (only for non-permutation CSPs);
Output: a solution or an approximate solution;
L^t represents the agent lattice in the tth generation, CSAgent^t_Best is the best CSAgent in L^0, L^1, …, L^t, and CSAgent^t_tBest is the best CSAgent in L^t.
begin
  for i:=1 to Lsize do
    for j:=1 to Lsize do
    begin
      Generate a permutation randomly and assign it to L^0_{i,j}(P);
      if (non-permutation CSPs) then MCD(L^0_{i,j}, 1);
      Compute L^0_{i,j}(E);
      L^0_{i,j}(SL) := True;
    end;
  Evaluations := Lsize×Lsize;
  Update CSAgent^0_Best;
  t := 0;
  repeat
    for i:=1 to Lsize do
      for j:=1 to Lsize do
      begin
        if (L^t_{i,j} wins in the competitive behavior) then L^{t+1}_{i,j} := L^t_{i,j}
        else L^{t+1}_{i,j} := Child_{i,j} (generated according to Algorithm 2);
        Compute L^{t+1}_{i,j}(E);
        Evaluations := Evaluations+1;
      end;
    Update CSAgent^{t+1}_{tBest};
    if (CSAgent^{t+1}_{tBest}(SL)=True) then
      Perform the self-learning behavior on CSAgent^{t+1}_{tBest}
    else if (non-permutation CSPs) then
    begin
      Perform the mutation behavior on CSAgent^{t+1}_{tBest};
      Compute CSAgent^{t+1}_{tBest}(E);
      Evaluations := Evaluations+1;
    end;
    if (CSAgent^{t+1}_{tBest}(E) < CSAgent^t_{Best}(E)) then
    begin
      CSAgent^{t+1}_{Best} := CSAgent^t_{Best};
      CSAgent^{t+1}_{Random} := CSAgent^t_{Best} (CSAgent^{t+1}_{Random} is randomly selected from L^{t+1} and is different from CSAgent^{t+1}_{tBest});
    end
    else CSAgent^{t+1}_{Best} := CSAgent^{t+1}_{tBest};
    t := t+1;
  until (CSAgent^t_{Best}(E) = 0) or (Evaluations ≥ EvaluationMax);
end.

B. Space complexity of MAEA-CSPs

When dealing with a large-scale problem, the memory required by an algorithm must be taken into account. For example, although the method proposed in [33] obtained good performance, it needs to store an n×n lattice to record the number of collisions in each grid cell, so its space complexity is O(n^2). Thus, even if each grid cell is recorded by a 4-byte integer, about 38,147 MB of memory is still required for a 10^5-queen problem. Therefore, the method proposed in [33] has resource requirements exceeding those currently available on PCs.

Theorem 1: The space complexity of multiagent evolutionary algorithm for constraint satisfaction problems is O(n).

Proof: The main contribution to the space complexity is the storage for the agent lattices in the current and next generations and for the best CSAgent. So the number of CSAgents recorded is

  Num_CSAgent = 2×Lsize^2 + 1.    (11)

For non-permutation CSPs, P, V, E, and SL must be recorded for each CSAgent; for permutation CSPs, P, E, and SL must be recorded. So the number of space units each CSAgent requires is

  Units_CSAgent = { 2n+2  for non-permutation CSPs
                  { n+2   for permutation CSPs.    (12)

The total number of space units required by MAEA-CSPs is

  Num_Units = Num_CSAgent × Units_CSAgent = { (2Lsize^2+1)(2n+2)  for non-permutation CSPs
                                            { (2Lsize^2+1)(n+2)   for permutation CSPs.    (13)

Since (2Lsize^2+1) is independent of n, the space complexity of MAEA-CSPs is O(n). □
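The unit counts of Theorem 1 can be turned into a concrete memory estimate (a quick check; the helper name is ours). With Lsize=3, 4-byte units, and the permutation branch (n-queen problems are permutation-encoded, see Section V.A), a 10^7-queen problem needs roughly 725 MB:

```python
def maea_memory_bytes(n, lsize=3, unit=4, permutation=True):
    """Memory estimate following Theorem 1: 2*Lsize^2+1 CSAgents,
    each taking n+2 units (permutation) or 2n+2 (non-permutation)."""
    num_agents = 2 * lsize * lsize + 1               # Eq. (11)
    units = (n + 2) if permutation else (2 * n + 2)  # Eq. (12)
    return num_agents * units * unit                 # total, in bytes

mb = maea_memory_bytes(10**7) / 2**20   # about 724.8 MB
```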

Theorem 1 shows that the space required by MAEA-CSPs increases linearly with the scale of the problem. If a unit requires 4 bytes and Lsize is set to 3, MAEA-CSPs requires about 725 MB for a 10^7-queen problem, since n-queen problems belong to permutation CSPs.

C. Convergence of MAEA-CSPs

In practical cases, one is interested in the likelihood that an algorithm finds solutions for CSPs. Most existing algorithms based on local search do not guarantee that solutions can be found; when they are trapped in local optima, they simply restart. MAEA-CSPs starts from a randomly generated agent lattice and evolves it generation by generation with the three behaviors; it has no restarting step. Can MAEA-CSPs find solutions?

The search space S of non-permutation CSPs with n variables has |D1|×|D2|×…×|Dn| elements, and that of permutation CSPs with n variables has n! elements. Let E={Energy(a) | a∈S}. Since the energy is the negative of the number of constraints not satisfied by a CSAgent, and the total number of constraints is m, |E| is equal to m+1 and E can be represented as E={-m, -m+1, …, -2, -1, 0}. This immediately allows us to partition S into a collection of nonempty subsets {Si | i=-m, -m+1, …, -2, -1, 0}, where

  Si = {a | a∈S and Energy(a)=i}.    (14)

  S = ∪_{i=-m}^{0} Si;  Si ≠ ∅;  Si ∩ Sj = ∅, ∀i≠j;  |S| = Σ_{i=-m}^{0} |Si|.    (15)

Obviously, any CSAgent belonging to S0 is a solution. Suppose that the energy of an agent lattice, L, is determined by

  Energy(L) = max{Li,j(E) | i, j=1, 2, …, Lsize}.    (16)

Let L stand for the set of all agent lattices. Thus, ∀ L∈ L, -m≤Energy(L)≤0. Therefore, L can be partitioned into a collection of nonempty subsets {Li | i=-m, -m+1, …, -2, -1, 0}, where Li={L | L∈ L and Energy(L)=i} (17)

  L = ∪_{i=-m}^{0} Li;  Li ≠ ∅;  Li ∩ Lj = ∅, ∀i≠j;  |L| = Σ_{i=-m}^{0} |Li|.    (18)

L0 consists of all agent lattices with zero energy. Let Lij, i=-m, -m+1, …, -2, -1, 0, j=1, 2, …, |Li|, stand for the jth agent lattice in Li. In any generation, the three behaviors transform the agent lattice Lij into another one, Lkl, and this process can be viewed as a transition from Lij to Lkl. Let pij.kl be the probability of transition from Lij to Lkl, pij.k the probability of transition from Lij to any agent lattice in Lk, and pi.k the probability of transition from any agent lattice in Li to any agent lattice in Lk. It is obvious that

  p_{ij.k} = Σ_{l=1}^{|Lk|} p_{ij.kl},   Σ_{k=-m}^{0} p_{ij.k} = 1,   p_{i.k} ≥ p_{ij.k}.    (19)

On the basis of the concepts above and [51], [52], the convergence of MAEA-CSPs is proved.

Theorem 2: [53] Let P′: n′×n′ be a reducible stochastic matrix, which means that by applying the same permutations to rows and columns, P′ can be brought into the form [ C 0 ; R T ], where C: m′×m′ is a primitive stochastic matrix and R, T ≠ 0. Then

  P′^∞ = lim_{k→∞} P′^k = lim_{k→∞} [ C^k  0 ; Σ_{i=0}^{k-1} T^i R C^{k-i}  T^k ] = [ C^∞ 0 ; R^∞ 0 ]    (20)

is a stable stochastic matrix with P′^∞ = 1′p^∞, where p^∞ = p^0 P′^∞ is unique regardless of the initial distribution, and p^∞ satisfies p_i^∞ > 0 for 1≤i≤m′ and p_i^∞ = 0 for m′<i≤n′.

Since the competitive behavior performs swap operations on permutations for both non-permutation and permutation CSPs, the probability of transforming one permutation into another by this behavior is analyzed in Theorem 3.

Theorem 3: Let P=⟨P1, P2, …, Pn⟩∈SP, Q=⟨Q1, Q2, …, Qn⟩∈SP, and P≠Q. Let p_{Q→P} be the probability of transforming Q into P by Algorithm 2. Define Diff_QP = {Qi | i∈{1, 2, …, n} and Pi≠Qi} and Same_QP = {Qi | i∈{1, 2, …, n} and Pi=Qi}. Then

  p_{Q→P} ≥ (1-pc)^{|Same_QP|+1} · (pc · 1/(n-1))^{|Diff_QP|-1}.    (21)

Proof: When performing Algorithm 2 on Q, the probability that the elements in Same_QP are left untouched is

  p_same = (1-pc)^{|Same_QP|}.    (22)

For each element in Diff_QP, if U(0, 1) is smaller than pc, it is swapped with another element of Q. If, for every Qi∈Diff_QP, the swap operation selects Qj∈Diff_QP with Qj=Pi, then Same_QP is left untouched, and such swaps make the last two elements in Diff_QP identical to the corresponding ones in P by a single swap. Therefore, the maximum number of swap operations needed is |Diff_QP|-1. Then

  p_{Q→P} ≥ p_same · (pc · 1/(n-1))^{|Diff_QP|-1} · (1-pc)
          = (1-pc)^{|Same_QP|+1} · (pc · 1/(n-1))^{|Diff_QP|-1}.    (23) □

Theorem 4: In multiagent evolutionary algorithm for constraint satisfaction problems, for all i, k∈E,

  p_{i.k} { > 0,  k ≥ i
          { = 0,  k < i.

Proof: Let Lij, i∈E, j=1, 2, …, |Li|, be the agent lattice in the tth generation, labeled L^t. Let CSAgent^t_tBest be the best CSAgent in L^t, with CSAgent^t_tBest(E)=i. Then (24) can be obtained according to Algorithm 4:

  Energy(L^{t+1}) ≥ Energy(L^t)
  ⇒ ∀k<i, p_{ij.kl} = 0
  ⇒ ∀k<i, p_{ij.k} = Σ_{l=1}^{|Lk|} p_{ij.kl} = 0
  ⇒ ∀k<i, p_{i.k} = 0.    (24)

Suppose that there exists CSAgent′ with CSAgent′(E)=k≥i. For permutation CSPs, since CSAgent^t_tBest is the best CSAgent in L^t, it wins over its neighbors in the competitive behavior and generates a child according to Algorithm 2. According to Theorem 3, the probability of generating CSAgent′ from CSAgent^t_tBest satisfies

  p_{CSAgent^t_tBest→CSAgent′} ≥ (1-pc)^{|Same|+1} · (pc · 1/(n-1))^{|Diff|-1} > 0,    (25)

where Same and Diff are defined as in Theorem 3 for the pair (CSAgent^t_tBest, CSAgent′). For non-permutation CSPs, either the self-learning behavior or the mutation behavior is performed on CSAgent^t_tBest. Obviously, p_{CSAgent^t_tBest→CSAgent′} > 0 if the self-learning behavior is performed on CSAgent^t_tBest. If the mutation behavior is performed, suppose that n1 variables in CSAgent^t_tBest(V) differ from the corresponding ones in CSAgent′(V). Then the probability of generating CSAgent′ from CSAgent^t_tBest by the mutation behavior is

  p_{CSAgent^t_tBest→CSAgent′} = pm^{n1} · (1-pm)^{n-n1} > 0.    (26)

So the probability of transition from Lij to any agent lattice in Lk is

  p_{ij.k} ≥ p_{CSAgent^t_tBest→CSAgent′} > 0.    (27)

Therefore, ∀k≥i, p_{i.k} ≥ p_{ij.k} > 0. □

It follows from this theorem that there is always a positive probability of transiting from an agent lattice to one with identical or higher energy, and a zero probability of transiting to one with lower energy. Thus, once MAEA-CSPs enters L0, it never escapes.

Theorem 5: Multiagent evolutionary algorithm for constraint satisfaction problems converges to the global optimum as time tends to infinity.

Proof: It is clear that one can consider each Li, i∈E, as a state in a homogeneous finite Markov chain. According to Theorem 4, the transition matrix of the Markov chain can be written as

  P′ = [ p_{0.0}    0         …  0
         p_{-1.0}   p_{-1.-1} …  0
         ⋮          ⋮             ⋮
         p_{-m.0}   p_{-m.-1} …  p_{-m.-m} ] = [ C 0 ; R T ].    (28)

Obviously, R = (p_{-1.0} p_{-2.0} … p_{-m.0})^T > 0, T ≠ 0, and C = (p_{0.0}) = (1) ≠ 0. According to Theorem 2, P′^∞ is given by

  P′^∞ = lim_{k→∞} P′^k = lim_{k→∞} [ C^k  0 ; Σ_{i=0}^{k-1} T^i R C^{k-i}  T^k ] = [ C^∞ 0 ; R^∞ 0 ],    (29)

where C^∞ = (1) and R^∞ = (1 1 … 1)^T. Thus, P′^∞ is a stable stochastic matrix, and

  P′^∞ = [ 1 0 … 0
           1 0 … 0
           ⋮
           1 0 … 0 ].    (30)

Therefore,

  lim_{t→∞} Pr{Energy(L^t) = 0} = 1,    (31)

where Pr stands for the probability. This implies that multiagent evolutionary algorithm for constraint satisfaction problems converges to the global optimum as time tends to infinity. □
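The absorbing structure used in this proof can be illustrated numerically with a toy 3-state chain (the matrix below is invented for illustration): the first state plays the role of L0, the matrix is lower triangular with p_{0.0}=1, and its powers converge to a matrix whose first column is all ones, as in (30).

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(p, k):
    """Matrix power by repeated squaring."""
    n = len(p)
    out = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    while k:
        if k & 1:
            out = mat_mul(out, p)
        p = mat_mul(p, p)
        k >>= 1
    return out

# State 0 is absorbing (the solution class); transitions never move
# to a lower-energy class, mirroring Theorem 4.
P = [[1.0, 0.0, 0.0],
     [0.4, 0.6, 0.0],
     [0.2, 0.3, 0.5]]
P_inf = mat_pow(P, 200)
# every row of P_inf is (approximately) (1, 0, 0): absorption w.p. 1
```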

IV. EXPERIMENTAL STUDIES ON NON-PERMUTATION CONSTRAINT SATISFACTION PROBLEMS

Reference [54] indicates that any CSP can be equivalently transformed into a binary CSP. Since binary CSPs belong to non-permutation CSPs, they are used to test the performance of MAEA-CSPs in this section. Systematic analyses are also made to check the effect of the parameters pc and pm. Afterwards, MAEA-CSPs is applied to a more practical case, graph coloring problems (GCPs). Four parameters must be assigned before MAEA-CSPs is used: Lsize×Lsize is equivalent to the population size in traditional EAs, so Lsize is generally selected from 3 to 10, and is set to 5 in the following experiments; pc and pm control the competitive behavior and the mutation behavior, and are set to 0.2 and 0.05; EvaluationMax is set to 10^5 for binary CSPs and 10^4 for GCPs. All experiments are executed on a 2.4-GHz Pentium IV PC with 1 GB of RAM.

A. Binary constraint satisfaction problems

The experiments are conducted on the test suite [78] used to compare the performances of 11 available algorithms in [29]. The suite consists of 250 solvable problem instances. Each instance has 20 variables, and the domain set is D={Di | Di={1, 2, …, 20} and 1≤i≤20}. The 250 instances are divided into 10 groups according to their difficulty, p={0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.30, 0.31, 0.32, 0.33}, and each group has 25 instances. The higher the value of p, the more difficult the instance. We perform 10 independent runs on each of the 25 instances belonging to a given p value, yielding 250 data points for calculating our measures.

The method used to measure the performance of MAEA-CSPs is similar to that of [29], namely, the effectiveness and the efficiency are considered. The effectiveness is measured by the success rate (SR) and the mean error (ME). The success rate is the percentage of runs finding a solution. Since all instances in the test suite are solvable, the highest value of the SR is 100%. The error is defined for a single run as the number of constraints not satisfied by the best CSAgent in the agent lattice when the run terminates. For any given set of runs, the ME is the average of these error values. This measure provides information on the quality of partial solutions. This can be a useful way of comparing algorithms having equal SR values lower than 100%.

The efficiency is measured by the average number of evaluations to solution (AES), which has been widely used to measure the computational cost of EAs. For a given algorithm, it is defined as the average number of evaluations over successful runs. Consequently, when SR=0, the AES is undefined. Note also that when the number of successful runs is relatively low, the AES is not statistically reliable. Therefore, it is important to consider the SR and the AES together to make a clear interpretation of the results. Among the three measures, the most important is the success rate, since we ultimately want to solve problems. The second is the AES: in the case of comparable SR figures, we prefer the faster EA. The ME is only used as a third measure, as it may suggest preferences between EAs that failed.
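For concreteness, the three measures can be computed from per-run records as follows; this small helper and its (solved, error, evaluations) record format are ours, not the paper's.

```python
def measures(runs):
    """Compute (SR, ME, AES) from run records.

    runs -- list of (solved: bool, error: int, evaluations: int) tuples.
    SR is a percentage; AES averages evaluations over successful runs
    only, and is undefined (None) when no run succeeded."""
    n = len(runs)
    succ = [r for r in runs if r[0]]
    sr = 100.0 * len(succ) / n
    me = sum(e for _, e, _ in runs) / n
    aes = sum(v for _, _, v in succ) / len(succ) if succ else None
    return sr, me, aes
```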

A.1 Comparison between MAEA-CSPs and the existing algorithms

Reference [29] compared 11 available algorithms, and the results show that the best four are H-GA.1 [20], [21], H-GA.3 [20], [21], SAW [27], [28], and Glass-Box [19]. Therefore, we reimplemented these four algorithms according to the descriptions and parameters given in [29] and compared them with MAEA-CSPs. The results of the four algorithms are consistent with those of [29]. The comparison results are shown in Fig.2. To enable other researchers to compare their algorithms with MAEA-CSPs, the results of MAEA-CSPs are also listed in Table I.

As can be seen, the SR of MAEA-CSPs is the highest among the five algorithms, reaching 100% for p=0.24-0.26. Since the ME of SAW is much larger than those of the other algorithms, the MEs of SAW and MAEA-CSPs are compared in the top-right inset of Fig.2(b). Fig.2(b) shows that the ME of MAEA-CSPs is also the best among the five algorithms. Since the AES is not statistically reliable when the SR is very low, only the AES of the instances whose SR is larger than 10% is plotted in Fig.2(c). The SRs of all five algorithms are very low for p=0.30-0.33, but the ME of MAEA-CSPs is small, with only 1 to 3 constraints left unsatisfied. To summarize, MAEA-CSPs outperforms the four other algorithms.

A.2 Parameter analyses of MAEA-CSPs

In this experiment, pc and pm are increased from 0.05 to 1 in steps of 0.05, yielding 20×20=400 parameter combinations. MAEA-CSPs with each of the 400 combinations is used to solve the 10 groups of instances. According to the SR, the 10 groups can be divided into 3 classes. The first class includes the instances with low p values, that is, p=0.24-0.26; since the SR for this class is higher than 90% for most parameter settings and the ME is very low, the graphs for the SR and the AES are shown in Fig.3. The second class includes the instances with p varying from 0.27 to 0.29; since the SR for this class is lower, the graphs for the SR and the ME are shown in Fig.4. The last class includes the instances with high p values; since these instances are very difficult, only the graphs for the ME are shown in Fig.5.

As can be seen, pc has a larger effect on the performance of MAEA-CSPs. For the first class, the SR is higher than 90% when pc is larger than 0.1. Although the AES increases with pc when pc is larger than 0.2, the AES is smaller when pc is in 0.1-0.3. For the second and the last class, the results are similar: when pc is in 0.1-0.3, the SR is higher and the ME is smaller. Although pm does not obviously affect the performance of MAEA-CSPs, Fig.4 and Fig.5 show that the SR is slightly higher and the ME slightly smaller when pm is small.

To summarize, it is better to choose pc from 0.1-0.3 and pm from 0.05-0.3. In addition, although the performance of MAEA-CSPs with the above parameters is better, MAEA-CSPs still performs stably when pc is larger than 0.2. This shows that the performance of MAEA-CSPs is not sensitive to the parameters, and that MAEA-CSPs is quite robust and easy to use.

B. Graph coloring problems

Many problems of practical interest can be modeled as graph coloring problems. The purpose is to color each vertex of a graph with a certain color such that no two vertices connected by an edge have the same color. Given an undirected graph G={V, E}, where V={V1, V2, …, Vn} is the set of vertices and E={ei,j | Vi and Vj are connected} is the set of edges, an m-coloring problem can be described as follows when each vertex is represented as a variable and the domain corresponds to the given set of m colors:

  x = {x1, x2, …, xn}, xi represents Vi,
  D = {D1, D2, …, Dn}, Di = {1, 2, …, m}, i = 1, 2, …, n,
  C = { C(xi, xj) | i, j ∈ {1, 2, …, n}, ei,j ∈ E },
      where ⟨di, dj⟩, di, dj ∈ {1, 2, …, m}, violates C(xi, xj) iff di = dj.    (32)

In what follows, MCD for GCPs is described first, and then MAEA-CSPs is tested on the official benchmarks from DIMACS graph coloring challenge [79]. Finally, a comparison is made between MAEA-CSPs and two recent algorithms.

B.1 Minimum conflict decoding for graph coloring problems

Since constraints only occur between two connected vertices, and the degree of a vertex is generally far smaller than |V|, we can record in advance the degree of each vertex and the list of vertices connected to it, and only check these lists when performing MCD.

Algorithm 5 Minimum conflict decoding for graph coloring problems
Input: CSAgent: the CSAgent to be decoded; Pos: the position from which to start decoding;
Output: CSAgent(V);
Let CSAgent(P)=⟨x_{P1}, x_{P2}, …, x_{Pn}⟩, CSAgent(V)=⟨v1, v2, …, vn⟩, degree_{Pi} stand for the degree of vertex_{Pi}, and vertex^{Pi}_j for the jth vertex connected to x_{Pi}.
begin
  if (Pos=1) then
  begin
    v_{P1} := 1;
    i := 2;
  end
  else i := Pos;
  repeat
    for j:=1 to m do Conflicts_j := 0;
    for j:=1 to degree_{Pi} do
      if (vertex^{Pi}_j has been assigned a value) then
      begin
        Color := the color of vertex^{Pi}_j;
        Conflicts_Color := Conflicts_Color + 1;
      end;
    Color := j, where Conflicts_j is minimum among Conflicts_1, Conflicts_2, …, Conflicts_m;
    v_{Pi} := Color;
    i := i+1;
  until (i>n);
end.

In Algorithm 5, the second "for" loop only considers the vertices that are connected to x_{Pi} and have previously been assigned values. After this loop scans the adjacency list once, the number of conflicting neighbors with each color has been computed, so the algorithm is efficient.
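A sketch of the greedy decoding in Algorithm 5, using 0-based vertex indices and adjacency lists; tie-breaking by the smallest color index is our assumption, not specified by the pseudocode:

```python
def mcd_coloring(order, adj, m, colors=None, pos=0):
    """Minimum conflict decoding sketch for m-coloring (Algorithm 5).

    order  -- permutation of vertex indices (the CSAgent's P part)
    adj    -- adjacency lists of the undirected graph
    m      -- number of available colors, 1..m
    Assigns each vertex, in the given order, the color that conflicts
    least with its already-colored neighbors."""
    n = len(order)
    if colors is None:
        colors = [0] * n              # 0 == not yet colored
    if pos == 0:
        colors[order[0]] = 1          # the first vertex gets color 1
        pos = 1
    for idx in range(pos, n):
        v = order[idx]
        conflicts = [0] * (m + 1)
        for u in adj[v]:
            if colors[u]:             # only neighbors already assigned
                conflicts[colors[u]] += 1
        # smallest-index color with the fewest conflicting neighbors
        colors[v] = min(range(1, m + 1), key=lambda c: conflicts[c])
    return colors
```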

B.2 Experimental results of MAEA-CSPs

At present, the DIMACS challenge has 79 problems. These problems include various kinds of graphs, such as random graphs (DSJCx.y), geometric random graphs (DSJRx.y), "quasirandom" graphs (flatx_y_0), and graphs from real-life applications (fpsol2.i.x, inithxi.x, mulsol.i.1, and zeroin.i.1). To investigate the performance of MAEA-CSPs extensively, all 79 problems are considered, and the results averaged over 10 independent runs are shown in Table II. In the DIMACS challenge, some problem files count each edge once whereas others count it twice; since the graphs are undirected, |E| in Table II counts each edge once. The value of m is set to the optimal number of colors when known, and otherwise to the smallest value we can find in the literature (values in parentheses). In Table II, "c" stands for the number of constraints not satisfied. Since each constraint corresponds to an edge, "c" is the number of edges whose two vertices have the same color.

As can be seen, MAEA-CSPs finds exact solutions in all 10 runs for 33 out of the 79 problems, and c for these problems is 0. Moreover, the running times for these problems are very short. "(1-c/|E|)×100" indicates the percentage of edges colored properly. Among the 79 problems, MAEA-CSPs colors more than 99% of the edges properly for 66 problems, 98-99% for 9 problems, 97-98% for 2 problems, and 94-95% for 2 problems. The running times of MAEA-CSPs are shorter than 1 second for 46 problems, between 1 and 4 seconds for 22 problems, between 5 and 10 seconds for 3 problems, and longer than 10 seconds for only 8 problems.

B.3 Comparison between MAEA-CSPs and two recent algorithms

Reference [55] proposed an algorithm for GCPs, labeled SSC, based on energy functions and neural networks, and tested it on part of the DIMACS challenge with good performance. Table III compares SSC with MAEA-CSPs, where the results of SSC are those reported in [55]. SSC counted each violated edge twice, which has been confirmed by the author of [55], so we correct the "c" values of SSC in Table III. MAEA-CSPs finds better solutions than SSC for 74 out of 79 problems.

Reference [33] also designed an agent evolutionary model, ERA, which was tested on 11 problems from the DIMACS challenge. Table IV compares ERA with MAEA-CSPs, where the results of ERA are those reported in [33]. Note that [33] reports the number of vertices having no conflicts, labeled "v" in Table IV, whereas MAEA-CSPs reports the number of edges having conflicts. For all 11 problems, the running times of both ERA and MAEA-CSPs are 0.00s. Moreover, MAEA-CSPs colors 100% of the edges properly for these 11 problems, that is to say, 100% of the vertices are colored properly. So the performance of MAEA-CSPs is better than that of ERA.

In conclusion, Table III and Table IV show that MAEA-CSPs outperforms the two algorithms.

V. EXPERIMENTAL STUDIES ON PERMUTATION CONSTRAINT SATISFACTION PROBLEMS

A. n-queen problems

n-queen problems are typical CSPs and have been studied in depth [20], [33], [56]-[59]. A solution to an n-queen problem places n queens on an n×n chessboard such that no two queens attack each other. The purpose of this study is twofold. First, a general search algorithm for n-queen problems is a useful indicator of the potential for solving other CSPs [57]. Second, an n-queen problem is itself a model of the maximum coverage problems [58]: a solution guarantees that each object can be accessed by the outside world from any one of the eight neighboring directions (two vertical, two horizontal, and four diagonal) without conflicting with other objects. This result applies directly to VLSI testing, air traffic control, modern communication systems, and data/message routing in multiprocessor computers.

In what follows, the description of n-queen problems and an example for the self-learning behavior are given first, and then the experimental results.

A.1 Problem description

In any solution of an n-queen problem, no two queens are on the same row or the same column, because that would cause collisions. Therefore, a solution can be represented by a permutation of 1, 2, …, n, P=⟨P1, P2, …, Pn⟩, which places the ith queen on the ith row and the Pith column. Thus, n-queen problems belong to permutation CSPs, and the search space is SP. When each candidate solution is encoded by a permutation, collisions on rows and columns are avoided naturally, and what is left is to find the permutations satisfying the constraints on the diagonal lines. An n×n grid has (2n-1) positive diagonal lines and (2n-1) negative diagonal lines, with the following characteristics: the difference between the row index and the column index is constant on any positive diagonal, and the sum of the two indexes is constant on any negative diagonal (see Fig.6). Consequently, an n-queen problem can be described as follows:

  x = {x1, x2, …, xn},
  D = {D1, D2, …, Dn}, Di = {1, 2, …, n}, i = 1, 2, …, n,
  C = { C(xk, xl) | k, l ∈ {1, 2, …, n}, k ≠ l },
      where ⟨dk, dl⟩, dk, dl ∈ {1, 2, …, n}, violates C(xk, xl) iff
      (dk = dl) or (dk - dl = k - l) or (dk - dl = l - k).    (33)
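Based on the diagonal characterization above, the number of attacking pairs in a permutation can be counted in O(n) with two hash tables; this is a standard sketch, not the paper's implementation (0-based indices here).

```python
from collections import Counter

def diagonal_conflicts(perm):
    """Count attacking queen pairs for a permutation encoding.

    Row/column clashes are impossible by construction; queens i and j
    attack iff i - perm[i] == j - perm[j] (same positive diagonal) or
    i + perm[i] == j + perm[j] (same negative diagonal)."""
    pos = Counter(i - c for i, c in enumerate(perm))   # positive diagonals
    neg = Counter(i + c for i, c in enumerate(perm))   # negative diagonals
    pairs = lambda cnt: sum(k * (k - 1) // 2 for k in cnt.values())
    return pairs(pos) + pairs(neg)
```

The negated count is exactly the energy used by MAEA-CSPs for n-queen problems.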

To explain the self-learning behavior explicitly, an example of its execution is given.

Example 3: Let the problem under consideration be an 8-queen problem, and the CSAgent on which the self-learning behavior performed be ⟨1, 2, 8, 4, 5, 7, 3, 6⟩. Then the performing process is given in Fig.7.

For more clarity, Fig.7 is explained further. Since the queen in the first row has collisions, the algorithm first searches for a swap, see Fig.7(b). Suppose that the selected l is 3; after the swap is performed, we have Energynew=-6>Energyold=-7. So the swap is successful, Repeat is set to True, and Iteration is increased by 1. Now the queen in the first row has no collisions, so the algorithm goes on to deal with the queen in the second row, see Fig.7(c). Suppose that 3 is chosen for l. Although the swap is successful, the queen in the second row still has collisions, so the algorithm continues to deal with it, see Fig.7(d). Suppose that 4 is chosen for l; the swap is successful, and the algorithm goes on to the queen in the third row, see Fig.7(e). Suppose that 5, 7, 1, 2 are chosen for l in turn, but all these swaps fail. At this moment Iteration is equal to 7, so Iteration is reset to 1 and the algorithm goes on to the queen in the fourth row, see Fig.7(f). Suppose that 6 is chosen for l; the swap is successful. Now the queen in the fourth row has no collisions, so the algorithm deals with the queens in the following rows, see Fig.7(g). Since the queens in the sixth and eighth rows have no collisions, the algorithm deals with the queens in the fifth and seventh rows, but all swaps fail. However, Repeat is True, so the algorithm restarts from the beginning. This time the collisions of the queens in the fifth and seventh rows cannot be eliminated, so Repeat remains False and the algorithm halts. The final state of the CSAgent is shown in Fig.7(h), and the energy increases


from –7 to –1. □

A.2 Experimental results for n-queen problems

Solutions exist for n-queen problems with n greater than or equal to 4 [58]. So the termination criterion of MAEA-CSPs is set to finding a solution, namely, EvaluationMax is set to ∞. Thus, all the following experimental results have SR=100% and ME=0, and what remains to be studied is the computational cost, which is measured by the running time.

Reference [20] designed a GA that can solve 10^4-queen problems, but the result was reported only in a figure and cannot be compared with that of MAEA-CSPs directly. The agent evolutionary model ERA [33] can solve 7000-queen problems. Therefore, the results of MAEA-CSPs for 10^3- to 10^4-queen problems are given first, and then a comparison is made between MAEA-CSPs and ERA [33]. The results are shown in Table V. The results of ERA are obtained by running the software [80] in the same environment; the software restricts n to 4-7000. As can be seen, the performance of MAEA-CSPs is much better than that of ERA, and MAEA-CSPs uses only 9 milliseconds to solve the 10^4-queen problem.

In order to study the scalability of MAEA-CSPs with the problem scale, MAEA-CSPs is used to solve 5×10^4- to 10^7-queen problems. In this experiment, n increases from 5×10^4 to 10^7 in steps of 5×10^4. At each sampled value of n, 50 trials are carried out, and the average running times are shown in Fig.8. As can be seen, the running time of MAEA-CSPs can be approximated by the function 5.04×10^-6×n^1.07. That is to say, MAEA-CSPs has an approximately linear time complexity for n-queen problems.

For more clarity, the average running times and standard deviations of MAEA-CSPs are shown in Table VI for the problems with 1×10^6, 2×10^6, …, 1×10^7 queens. MAEA-CSPs uses only 13 seconds to solve 10^6-queen problems, and 150 seconds to solve 10^7-queen problems. Moreover, all standard deviations are very small, with a maximum of only 1.15. So MAEA-CSPs not only has a fast convergence rate, but also performs stably.

B. Job-shop scheduling problems

Modern manufacturing environments are very complex, making it very difficult and time-consuming for people to create good schedules, so it is a great advantage to have the scheduling process performed automatically by a computer system. There has been considerable research effort on scheduling, such as methods based on EAs [60]-[65], multiagent systems [66], simulated annealing [67], neural networks [68], hybrid heuristic techniques [69], and fuzzy logic [70]-[72]. The focus of this section is on job-shop scheduling problems (JSPs) [73]. A small modification is made to MAEA-CSPs so that JSPs can be solved, which is introduced in Section V.B.1. The experimental results are then reported in Section V.B.2.

B.1 Multiagent evolutionary algorithm for job-shop scheduling problems

A JSP of size n×m consists of n jobs and m machines. For each job J_i, a sequence O_i = (o_{i,1}, o_{i,2}, …, o_{i,m}) describing the processing order of the operations of J_i is given. Each operation o_{i,j} is to be processed on a specific machine and has a processing time τ_{i,j}. When the operations are processed, each machine can process only one operation at a time, each job can have only one operation processed at a time, and no preemption can take place. A solution to a JSP is a schedule specifying when to process each operation without violating any of the constraints.
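For concreteness, the definition above can be expressed as a small data structure; the instance below is a hypothetical 3×3 example (not one of the benchmark problems), together with sanity checks of the classical JSP assumptions:

```python
from typing import List, Tuple

# A JSP instance: jobs[i] is the operation sequence O_i of job J_i, given as
# (machine, processing_time) pairs in the required processing order.
# Hypothetical 3x3 instance for illustration only.
jobs: List[List[Tuple[int, int]]] = [
    [(0, 3), (1, 2), (2, 2)],   # J_1: o_{1,1} on M0, o_{1,2} on M1, o_{1,3} on M2
    [(0, 2), (2, 1), (1, 4)],   # J_2
    [(1, 4), (2, 3), (0, 1)],   # J_3
]

n, m = len(jobs), 3
assert all(len(seq) == m for seq in jobs)  # each job has exactly m operations
# In the classical JSP each job visits every machine exactly once:
assert all(sorted(mc for mc, _ in seq) == list(range(m)) for seq in jobs)
```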

One encoding method in common use for JSPs is permutation with repetition, where a schedule is described as a sequence of all n×m operations, and each operation in the sequence is identified by its job number. CSAgent(P) is represented in this format, namely,

  P = ⟨P_1, P_2, …, P_{n×m}⟩ and (P(1) = m) and (P(2) = m) and … and (P(n) = m)    (34)

where P_i ∈ {1, 2, …, n}, i = 1, 2, …, n×m, and P(j), j = 1, 2, …, n, stands for the number of occurrences of j in P. When transforming P into a schedule, P_i stands for o_{j,k} if P_i = j and the number of occurrences of j among P_1, P_2, …, P_i is equal to k. The schedule is obtained by considering the operations in the order they occur in P and assigning the earliest allowable time to each operation. The energy of a CSAgent is equal to the negative value of the makespan, where the makespan of a schedule is defined as the time elapsed from the beginning of processing until the last job has finished.
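A minimal decoder for this encoding can be sketched as follows. It appends each operation at the earliest time at which both its job and its machine are free; the algorithm described later additionally fills vacant time slices, so this is a simplified variant:

```python
from typing import List, Sequence, Tuple

def makespan(perm: Sequence[int], jobs: List[List[Tuple[int, int]]], m: int) -> int:
    """Decode a permutation with repetition (job numbers, 1-based as in the
    text) into a schedule and return its makespan.  Each operation is placed
    at the earliest time both its job and its machine are free."""
    n = len(jobs)
    job_ready = [0] * n    # completion time of each job's last scheduled op
    mach_ready = [0] * m   # completion time of each machine's last op
    next_op = [0] * n      # index k of the next operation o_{j,k} per job
    finish = 0
    for p in perm:         # P_i = j selects operation o_{j,k}
        j = p - 1
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[machine])
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
        finish = max(finish, start + dur)
    return finish

# Hypothetical 2x2 instance: J_1 = M0(3), M1(2); J_2 = M1(2), M0(1)
toy = [[(0, 3), (1, 2)], [(1, 2), (0, 1)]]
print(makespan([1, 2, 1, 2], toy, m=2))   # -> 5
```

The energy of a CSAgent would then be `-makespan(perm, jobs, m)`.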

This encoding method has the advantage that no infeasible solutions can be represented: each CSAgent corresponds to a feasible schedule. In effect, the problem is transformed into a constraint optimization problem. Although MAEA-CSPs is designed for constraint satisfaction problems, it can also deal with JSPs after a small modification.

Since each job number occurs m times in P, the algorithm must be prevented from swapping two identical values. Therefore, the sixth line in Algorithm 2, "l := Random(n, i)", should be changed to "l := Random(n, i) and c_i ≠ c_l".

Let P = ⟨P_1, P_2, …, P_{n×m}⟩, let o_{P_i} denote the operation corresponding to P_i, and let M_{P_i} be the machine on which o_{P_i} is to be processed. Suppose that P_i and P_j (P_i ≠ P_j and i < j) are swapped and P′ is obtained. Based on the method for transforming P into a schedule, the two schedules corresponding to P and P′ are identical if P_i and P_j satisfy (35).

  ∀k, i < k < j:  (P_k ≠ P_i) and (P_k ≠ P_j) and (M_{P_k} ≠ M_{P_i}) and (M_{P_k} ≠ M_{P_j}) and (M_{P_i} ≠ M_{P_j})    (35)
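The equivalence test can be written as a small predicate. The version below encodes our reading of (35): the swapped operations must run on different machines, and no position strictly between them may share a job number or a machine with either of them:

```python
from typing import List, Sequence, Tuple

def machines_by_position(perm: Sequence[int],
                         jobs: List[List[Tuple[int, int]]]) -> List[int]:
    """M_{P_i} for every position: the machine of the operation o_{j,k} that
    position i denotes (k = number of occurrences of j up to position i)."""
    count = [0] * len(jobs)
    out = []
    for p in perm:
        j = p - 1
        out.append(jobs[j][count[j]][0])
        count[j] += 1
    return out

def swap_preserves_schedule(perm, i, j, jobs) -> bool:
    """Condition (35) as read here: swapping positions i < j leaves the decoded
    schedule unchanged if the two operations use different machines and no
    position strictly between them shares a job or a machine with either."""
    M = machines_by_position(perm, jobs)
    if M[i] == M[j]:
        return False
    for k in range(i + 1, j):
        if perm[k] in (perm[i], perm[j]) or M[k] in (M[i], M[j]):
            return False
    return True

# Hypothetical instance: 3 jobs with one operation each, on distinct machines.
jobs3 = [[(0, 1)], [(1, 1)], [(2, 1)]]
print(swap_preserves_schedule([1, 2, 3], 0, 2, jobs3))   # -> True
```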

This observation can be used to improve the efficiency of the self-learning behavior, which is shown in Algorithm 6.

Algorithm 6  Self-learning behavior for JSPs
Input:  L_{i,j}: L_{i,j}(P) = ⟨P_1, P_2, …, P_{n×m}⟩;
Output: L_{i,j};
begin
  repeat
    Repeat := False;
    k := 1; Iteration := 1;
    while (k ≤ n×m) do
    begin
      Energy_old := L_{i,j}(E);
      l := Random(n×m, k) such that P_k ≠ P_l and P_k, P_l do not satisfy (35);
      Swap(P_k, P_l);
      Energy_new := L_{i,j}(E);
      if (Energy_new < Energy_old) then
        Swap(P_k, P_l)
      else begin
        if (Energy_new > Energy_old) then Repeat := True;
        k := k+1;
      end;
      if (Iteration < n×m−1) then
        Iteration := Iteration+1
      else begin
        Iteration := 1;
        k := k+1;
      end;
    end;
  until (Repeat = False);
  L_{i,j}(SL) := False;
end.
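The control flow of Algorithm 6 can be sketched in a general form. The version below simplifies the Iteration bookkeeping to one candidate swap per position per sweep, and works with any higher-is-better energy function (for JSPs, the negative makespan) and any swap filter (for JSPs, excluding identical values and pairs satisfying (35)):

```python
import random

def self_learning(perm, energy, may_swap, seed=0):
    """Sketch of Algorithm 6's control flow: sweep the permutation trying
    random swaps, undo worsening swaps, keep the rest, and start another
    sweep only if some swap strictly improved the energy."""
    rng = random.Random(seed)
    n = len(perm)
    improved = True
    while improved:                # "repeat ... until no sweep improves"
        improved = False
        for k in range(n - 1):
            candidates = [l for l in range(k + 1, n) if may_swap(perm, k, l)]
            if not candidates:
                continue
            l = rng.choice(candidates)
            e_old = energy(perm)
            perm[k], perm[l] = perm[l], perm[k]
            e_new = energy(perm)
            if e_new < e_old:                      # worse: undo the swap
                perm[k], perm[l] = perm[l], perm[k]
            elif e_new > e_old:                    # strictly better: sweep again
                improved = True
    return perm

# Toy demo: minimize the number of inversions (energy = -inversions).
def inversions(p):
    return sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))

result = self_learning([3, 1, 2, 1, 3, 2],
                       energy=lambda p: -inversions(p),
                       may_swap=lambda p, k, l: p[k] != p[l])
```

Because worsening swaps are always undone, the energy is non-decreasing and the loop terminates once a full sweep brings no strict improvement.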

In its current form, the problem is not a CSP, so the termination criterion in Algorithm 4 should be adjusted. Let MaxGens be the maximum number of generations. The last "until" in Algorithm 4 should be changed to "until (CSAgent_{Best}^{t+1}(E) satisfies the requirements) or (t ≥ MaxGens)". That is to say, the algorithm terminates when it has run a fixed number of generations or when the quality of the solution satisfies the given conditions.

MAEA-CSPs with the above three changes is labeled MAEA-JSPs, and is used in the following experiments. In addition, when determining the makespan of a CSAgent, MAEA-JSPs must find a suitable time slice for each operation in the order the operations occur in the permutation, and this is the main computational cost. Because a vacant time slice on a machine lies between two already-arranged operations, or before the first or after the last arranged operation, MAEA-JSPs records the vacant time slices on each machine and updates them after inserting each new operation. Thus, when looking for a suitable time slice for a new operation, only the recorded vacant time slices on the specific machine need to be considered, which reduces the computational cost.
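This bookkeeping can be sketched as follows; the structure below is illustrative, not the paper's exact implementation. Each machine keeps its busy intervals sorted and places a new operation into the earliest gap that both fits its duration and respects the job's ready time:

```python
class Machine:
    """Tracks one machine's busy intervals so a new operation can be dropped
    into the earliest vacant time slice that fits it (a sketch of the
    time-slice recording described in the text)."""
    def __init__(self):
        self.busy = []                       # disjoint (start, end), sorted

    def insert(self, ready: int, dur: int) -> int:
        """Schedule an operation that may not start before `ready`; return
        its start time and record the interval as busy."""
        prev_end = 0
        for idx, (s, e) in enumerate(self.busy):
            start = max(ready, prev_end)
            if start + dur <= s:             # fits in the gap before (s, e)
                self.busy.insert(idx, (start, start + dur))
                return start
            prev_end = e
        start = max(ready, prev_end)         # append after the last interval
        self.busy.append((start, start + dur))
        return start

mach = Machine()
print(mach.insert(0, 3))   # -> 0  (busy: [(0, 3)])
print(mach.insert(5, 2))   # -> 5  (machine idle 3-5, but the job is not ready)
print(mach.insert(0, 2))   # -> 3  (fills the vacant slice between 3 and 5)
```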

B.2 Experimental results for job-shop scheduling problems

Three test suites [81] are used to investigate the performance of MAEA-JSPs, namely FT [74], LA [75], and ORB [76], which have been widely used in the field of JSPs. FT consists of 3 problems, LA of 40 problems, and ORB of 10 problems. The optimal makespan, labeled Makespan*, of each of the 53 problems is known [77]. Therefore, to study the computational cost, the termination criterion of MAEA-JSPs is set to finding the optimal makespan or running no more than 5000 generations. L_size is set to 10. The experimental results averaged over 100 independent runs of MAEA-JSPs are shown in Table VII. They include the best (Best) and the average (Aver) makespan found, the standard deviation (StDev) of the 100 makespans obtained, and the percentage gap between Aver and Makespan* (Gap),

  Gap = ((Aver − Makespan*) / Makespan*) × 100%    (36)

The computational cost is represented in two forms, the average running times (Times) and the average number of evaluations (Evals).
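Equation (36) is a plain relative-deviation percentage; for example (the numbers below are illustrative, not taken from Table VII):

```python
def gap(aver: float, optimal: float) -> float:
    """Percentage gap of (36): how far the average makespan sits above Makespan*."""
    return (aver - optimal) / optimal * 100.0

# e.g. an average makespan of 1224 against an optimum of 1218 gives about 0.49%
print(round(gap(1224, 1218), 2))   # -> 0.49
```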

From Table VII we can see that the best solutions for 45 out of 53 problems are equal to Makespan*, and those of the 8 other problems are also very close to Makespan*. For instance, those of LA24, LA27, LA40, and ORB02 are only 1 or 2 larger than Makespan*. The average solutions for 21 out of 53 problems are equal to Makespan*; that is to say, MAEA-JSPs finds the optimal makespan in all 100 runs for these problems. There are 7 problems, LA02, LA03, LA04, LA16, LA18, LA26, and ORB10, whose average solutions are very close to Makespan*, with a Gap smaller than 0.5%. For the 25 other problems, only the Gaps of LA29 and LA38 are 4.33% and 3.88%, and all the rest are smaller than 2.6%. The standard deviations of 43 out of 53 problems are smaller than 10, and only those of LA29 and LA38 are larger than 15. The running times of MAEA-JSPs are shorter than 1 second for 18 problems, between 1 and 10 seconds for 23 problems, and longer than 10 seconds for the remaining problems, with a maximum of 60.67 s for LA27.

Reference [73] indicates that a JSP can be considered hard if the number of operations is no smaller than 200 and n ≥ 15, m ≥ 10, n < 2.5m. On the basis of this criterion, LA26-LA30 and LA36-LA40, whose names are shown in boldface in Table VII, are harder than the other problems. For these 10 problems, MAEA-JSPs finds Makespan* for 4 of them, and the Gantt charts of the corresponding schedules are shown in Fig. 9.
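The hardness rule of thumb from [73] is easy to encode; a minimal check (the 20×10 and 15×15 shapes are the standard sizes of the LA26-LA30 and LA36-LA40 instances):

```python
def is_hard_jsp(n: int, m: int) -> bool:
    """Hardness rule of thumb quoted from [73]: at least 200 operations
    (n*m >= 200), n >= 15 jobs, m >= 10 machines, and n < 2.5 * m."""
    return n * m >= 200 and n >= 15 and m >= 10 and n < 2.5 * m

print(is_hard_jsp(20, 10))   # LA26-LA30 are 20x10 instances -> True
print(is_hard_jsp(15, 15))   # LA36-LA40 are 15x15 instances -> True
```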

In general, both the solution quality and the computational cost of MAEA-JSPs are satisfactory. This illustrates that although MAEA-CSPs is designed for CSPs, it is also a promising algorithm for constraint optimization problems.

VI. CONCLUSION

In this paper, multiagent systems and evolutionary algorithms are combined to form a new algorithm for solving constraint satisfaction problems. The space complexity and convergence are analyzed theoretically. Parameter analyses show that MAEA-CSPs is quite robust and easy to use; it is best to choose p_c from 0.1-0.3 and p_m from 0.05-0.3. The comparison between MAEA-CSPs and six well-defined algorithms, H-GA.1, H-GA.3, SAW, Glass-Box, SSC, and ERA, indicates that MAEA-CSPs outperforms all six. In order to study the scalability of MAEA-CSPs along the problem dimension, 10^4- to 10^7-queen problems are used to test its performance. The experimental results show that the time complexity of MAEA-CSPs is only O(n^1.07). The results on graph coloring problems and job-shop scheduling problems show that MAEA-CSPs is competent for more practical cases.

To summarize, MAEA-CSPs obtains good performance for both non-permutation CSPs and permutation CSPs. This benefit stems mainly from three aspects. First, we classify CSPs into two different types on the basis of their properties, so that the behaviors of agents can be designed for each type. Second, the minimum conflict encoding overcomes the disadvantages of the general encoding methods; although MCE is also a kind of greedy method, the mutation behavior can prevent the algorithm from being trapped in local optima. Last, in nature, selection occurs only in a local environment, and each individual can interact only with those around it. In the agent lattice, each agent can sense only its local environment, so the model of the agent lattice is closer to the real evolutionary model in nature than the population in traditional EAs. In the agent lattice, no global control is needed at all, and each agent is independent to some degree, which helps maintain diversity.

ACKNOWLEDGMENT

The authors are grateful to the reviewers for their helpful comments and valuable suggestions.

REFERENCES
[1] V. Kumar, "Algorithms for constraint satisfaction problems: a survey," AI Magazine, vol. 13, no. 1, 1992, pp. 32-44.
[2] P. Van Hentenryck, The OPL Optimization Programming Language. The MIT Press, 1999.
[3] I. P. Gent and T. Walsh, "CSPLib: a benchmark library for constraints," in Proceedings of the Fifth International Conference on Principles and Practice of Constraint Programming, vol. 1713 of Lecture Notes in Computer Science, Springer-Verlag, 1999, pp. 480-481.
[4] M. C. Cooper, "An optimal k-consistency algorithm," Artificial Intelligence, vol. 41, 1989, pp. 89-95.
[5] Y. Deville and P. Van Hentenryck, "Efficient arc consistency algorithm for a class of CSP problems," in Proceedings of the International Joint Conference on Artificial Intelligence, vol. 1, 1991, p. 325.
[6] R. Stallman and G. J. Sussman, "Forward reasoning and dependency directed backtracking," Artificial Intelligence, vol. 9, no. 2, 1977, pp. 135-196.
[7] M. Bruynooghe, "Solving combinatorial search problems by intelligent backtracking," Information Processing Letters, vol. 12, no. 1, 1981, pp. 36-39.
[8] B. Selman, H. Levesque, and D. Mitchell, "A new method for solving hard satisfiability problems," in Proceedings of the Tenth National Conference on Artificial Intelligence, 1992, pp. 440-446.
[9] B. Selman, H. Kautz, and B. Cohen, "Noise strategies for improving local search," in Proceedings of the National Conference on Artificial Intelligence, 1994, pp. 337-343.
[10] J. Walser, Integer Optimization by Local Search. Springer-Verlag, 1998.
[11] S. Minton, M. D. Johnston, and A. B. Philips, "Solving large-scale constraint-satisfaction and scheduling problems using a heuristic repair method," in Proceedings of the National Conference on Artificial Intelligence, 1990.
[12] S. Minton, M. D. Johnston, A. B. Philips, and P. Laird, "Minimizing conflicts: a heuristic repair method for constraint-satisfaction and scheduling problems," Artificial Intelligence, vol. 58, 1992, pp. 161-205.
[13] B. Selman, H. Kautz, and B. Cohen, "Local search strategies for satisfiability testing," in DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 26, American Mathematical Society, Providence, RI, 1996, pp. 521-532.
[14] A. Eiben, "Evolutionary algorithms and constraint satisfaction: definitions, survey, methodology, and research directions," in Theoretical Aspects of Evolutionary Computing, L. Kallel, B. Naudts, and A. Rogers, Eds., ser. Natural Computing. New York: Springer-Verlag, 2001, pp. 13-58.
[15] M. Riff-Rojas, "Using the knowledge of the constraint network to design an evolutionary algorithm that solves CSP," in Proc. 3rd IEEE Conf. Evolutionary Computation, 1996, pp. 279-284.
[16] M. Riff-Rojas, "Evolutionary search guided by the constraint network to solve CSP," in Proc. 4th IEEE Conf. Evolutionary Computation, 1997, pp. 337-348.
[17] H. Handa, C. O. Katai, N. Baba, and T. Sawaragi, "Solving constraint satisfaction problems by using coevolutionary genetic algorithms," in Proc. 5th IEEE Conf. Evolutionary Computation, 1998, pp. 21-26.
[18] H. Handa, N. Baba, O. Katai, T. Sawaragi, and T. Horiuchi, "Genetic algorithm involving coevolution mechanism to search for effective genetic information," in Proc. 4th IEEE Conf. Evolutionary Computation, 1997, pp. 709-714.
[19] E. Marchiori, "Combining constraint processing and genetic algorithms for constraint satisfaction problems," in Proc. 7th Int. Conf. Genetic Algorithms, T. Bäck, Ed., 1997, pp. 330-337.
[20] A. Eiben, P.-E. Raué, and Z. Ruttkay, "Solving constraint satisfaction problems using genetic algorithms," in Proc. 1st IEEE Conf. Evolutionary Computation, 1994, pp. 542-547.
[21] B. Craenen, A. Eiben, and E. Marchiori, "Solving constraint satisfaction problems with heuristic-based evolutionary algorithms," in Proc. Congress on Evolutionary Computation, 2000, pp. 1571-1577.
[22] J. Paredis, "Coevolutionary constraint satisfaction," in Proc. 3rd Conf. Parallel Problem Solving from Nature, Y. Davidor, H.-P. Schwefel, and R. Männer, Eds., ser. Lecture Notes in Computer Science, 1994, pp. 46-55.
[23] J. Paredis, "Co-evolutionary computation," Artificial Life, vol. 2, no. 4, 1995, pp. 355-375.
[24] G. Dozier, J. Bowen, and D. Bahler, "Solving randomly generated constraint satisfaction problems using a micro-evolutionary hybrid that evolves a population of hill-climbers," in Proc. 2nd IEEE Conf. Evolutionary Computation, 1995, pp. 614-619.
[25] P. J. Stuckey and V. Tam, "Improving evolutionary algorithms for efficient constraint satisfaction," Int. J. Artif. Intell. Tools, vol. 8, no. 4, 1999, pp. 363-384.
[26] G. Dozier, J. Bowen, and A. Homaifar, "Solving constraint satisfaction problems using hybrid evolutionary search," IEEE Trans. Evol. Comput., vol. 2, Apr. 1998, pp. 23-33.
[27] A. Eiben and J. I. Van Hemert, "SAW-ing EAs: adapting the fitness function for solving constrained problems," in New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. New York: McGraw-Hill, 1999, pp. 389-402.
[28] B. Craenen and A. Eiben, "Stepwise adaption of weights with refinement and decay on constraint satisfaction problems," in Proc. Genetic and Evolutionary Computation Conf. (GECCO-2001), L. Spector, E. Goodman, A. Wu, W. Langdon, H.-M. Voigt, M. Gen, S. Sen, M. Dorigo, S. Pezeshk, M. Garzon, and E. Burke, Eds., 2001, pp. 291-298.
[29] B. G. W. Craenen, A. E. Eiben, and J. I. Van Hemert, "Comparing evolutionary algorithms on binary constraint satisfaction problems," IEEE Trans. Evol. Comput., vol. 7, Oct. 2003, pp. 424-444.
[30] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. New York: Addison-Wesley, 1999.
[31] J. Liu, Autonomous Agents and Multi-Agent Systems: Explorations in Learning, Self-Organization, and Adaptive Computation. Singapore: World Scientific, 2001.
[32] J. Liu, Y. Y. Tang, and Y. C. Cao, "An evolutionary autonomous agents approach to image feature extraction," IEEE Trans. Evol. Comput., vol. 1, no. 2, 1997, pp. 141-158.
[33] J. Liu, H. Jing, and Y. Y. Tang, "Multi-agent oriented constraint satisfaction," Artificial Intelligence, vol. 136, no. 1, 2002, pp. 101-144.
[34] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multiagent genetic algorithm for global numerical optimization," IEEE Trans. Syst., Man, Cybern. B, vol. 34, no. 2, 2004, pp. 1128-1141.
[35] P. T. Sandanayake and D. J. Cook, "ONASI: online agent modeling using a scalable Markov model," International Journal of Pattern Recognition and Artificial Intelligence, vol. 17, no. 5, 2003, pp. 757-779.
[36] M. G. Schleuter, "ASPARAGOS: an asynchronous parallel genetic optimization strategy," in Proceedings of the 3rd International Conference on Genetic Algorithms, 1989, pp. 422-427.
[37] Y. Davidor, "A naturally occurring niche & species phenomenon: the model and first results," in Proceedings of the 4th International Conference on Genetic Algorithms, 1991, pp. 257-263.
[38] P. Spiessens and B. Manderick, "A massively parallel genetic algorithm: implementation and first analysis," in Proceedings of the 4th International Conference on Genetic Algorithms, 1991, pp. 279-286.
[39] S. Baluja, "Structure and performance of fine-grain parallelism in genetic search," in Proceedings of the 5th International Conference on Genetic Algorithms, 1993, pp. 155-162.
[40] R. J. Chen, R. R. Meyer, and J. Yackel, "A genetic algorithm for diversity minimization and its parallel implementation," in Proceedings of the 5th International Conference on Genetic Algorithms, 1993, pp. 163-170.
[41] Y. Davidor, T. Yamada, and R. Nakano, "The ECOlogical framework II: improving GA performance at virtually zero cost," in Proceedings of the 5th International Conference on Genetic Algorithms, 1993, pp. 171-176.
[42] T. Maruyama, T. Hirose, and A. Konagaya, "A fine-grained parallel genetic algorithm for distributed parallel systems," in Proceedings of the 5th International Conference on Genetic Algorithms, 1993, pp. 184-190.
[43] K. De Jong and J. Sarma, "On decentralizing selection algorithms," in Proceedings of the 6th International Conference on Genetic Algorithms, 1995, pp. 17-23.
[44] C. C. Pettey, "Diffusion (cellular) models," in Evolutionary Computation 2: Advanced Algorithms and Operators, Institute of Physics Publishing, 2000, pp. 125-133.
[45] A. E. Eiben and J. E. Smith, "Multimodal problems and spatial distribution," in Introduction to Evolutionary Computing, Springer-Verlag, 2003.
[46] H. Mühlenbein, "Parallel genetic algorithms, population genetics and combinatorial optimization," in Proceedings of the 3rd International Conference on Genetic Algorithms, 1989, pp. 416-421.
[47] H. Mühlenbein, M. Schomisch, and J. Born, "The parallel genetic algorithm as function optimizer," in Proceedings of the 4th International Conference on Genetic Algorithms, 1991, pp. 271-278.
[48] V. S. Gordon and D. Whitley, "Serial and parallel genetic algorithms as function optimizers," in Proceedings of the 5th International Conference on Genetic Algorithms, 1993, pp. 177-183.
[49] F. F. Easton and N. Mansour, "A distributed genetic algorithm for employee staffing and scheduling problems," in Proceedings of the 5th International Conference on Genetic Algorithms, 1993, pp. 360-367.
[50] G. Folino, C. Pizzuti, and G. Spezzano, "Parallel hybrid method for SAT that couples genetic algorithms and local search," IEEE Trans. Evol. Comput., vol. 5, Aug. 2001, pp. 323-334.
[51] G. Rudolph, "Convergence analysis of canonical genetic algorithms," IEEE Trans. Neural Networks, vol. 5, no. 1, 1994, pp. 96-101.
[52] B. Dinabandhu, C. A. Murthy, and K. P. Sankar, "Genetic algorithm with elitist model and its convergence," International Journal of Pattern Recognition and Artificial Intelligence, vol. 10, no. 6, 1996, pp. 731-747.
[53] M. Iosifescu, Finite Markov Processes and Their Applications. Chichester: Wiley, 1980.
[54] F. Rossi, C. Petrie, and V. Dhar, "On the equivalence of constraint satisfaction problems," in Proc. 9th European Conf. Artificial Intelligence, L. C. Aiello, Ed., 1990, pp. 550-556.
[55] A. D. Blas, A. Jagota, and R. Hughey, "Energy function-based approaches to graph coloring," IEEE Trans. Neural Networks, vol. 13, no. 1, 2002, pp. 81-91.
[56] P. Morris, "On the density of solutions in equilibrium points for the queens problem," in Proceedings of the 10th National Conference on Artificial Intelligence, 1992, pp. 428-433.
[57] J. Gu, Constraint-Based Search. New York: Cambridge University Press, 1992.
[58] R. Sosič and J. Gu, "Efficient local search with conflict minimization: a case study of the n-queen problem," IEEE Trans. Knowledge and Data Engineering, vol. 6, no. 5, 1994, pp. 661-668.
[59] G. Dozier, J. Bowen, and D. Bahler, "Solving small and large scale constraint satisfaction problems using a heuristic-based microgenetic algorithm," in Proceedings of the International Conference on Evolutionary Computation, 1994, pp. 306-311.
[60] R. S. Lee and J. S. Michael, "A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes," IEEE Trans. Syst., Man, Cybern. B, vol. 27, no. 1, 1997, pp. 36-54.
[61] C. Dimopoulos and A. M. S. Zalzala, "Recent developments in evolutionary computation for manufacturing optimization: problems, solutions, and comparisons," IEEE Trans. Evol. Comput., vol. 4, no. 2, 2000, pp. 93-113.
[62] S. Hajri, N. Liouane, S. Hammadi, and P. Borne, "A controlled genetic algorithm by fuzzy logic and belief functions for job-shop scheduling," IEEE Trans. Syst., Man, Cybern. B, vol. 30, no. 5, 2000, pp. 812-818.
[63] I. Kacem, S. Hammadi, and P. Borne, "Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems," IEEE Trans. Syst., Man, Cybern. C, vol. 32, no. 1, 2002, pp. 1-13.
[64] M. T. Jensen, "Generating robust and flexible job shop schedules using genetic algorithms," IEEE Trans. Evol. Comput., vol. 7, no. 3, 2003, pp. 275-288.
[65] P. Walsh and P. Fenton, "A high-throughput computing environment for job shop scheduling genetic algorithms," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, 2004, pp. 1554-1560.
[66] L. B. Valencia and G. Rabadi, "A multiagents approach for the job shop scheduling problem with earliness and tardiness," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 2, 2003, pp. 1217-1222.
[67] K. Krishna, K. Ganeshan, and D. J. Ram, "Distributed simulated annealing algorithms for job shop scheduling," IEEE Trans. Syst., Man, Cybern., vol. 25, no. 7, 1995, pp. 1102-1109.
[68] P. B. Luh, X. Zhao, Y. Wang, and L. S. Thakur, "Lagrangian relaxation neural networks for job shop scheduling," IEEE Trans. Robotics and Automation, vol. 16, no. 1, 2000, pp. 78-88.
[69] C. F. Tsai and F. C. Lin, "A new hybrid heuristic technique for solving job-shop scheduling problem," in Proceedings of the IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 2003, pp. 53-58.
[70] P. Fortemps, "Jobshop scheduling with imprecise durations: a fuzzy approach," IEEE Trans. Fuzzy Systems, vol. 5, no. 4, 1997, pp. 557-569.
[71] F. T. Lin, "Fuzzy job-shop scheduling based on ranking level (λ, 1) interval-valued fuzzy numbers," IEEE Trans. Fuzzy Systems, vol. 10, no. 4, 2002, pp. 510-522.
[72] S. Moon, K. H. Lee, and D. Lee, "Fuzzy branching temporal logic," IEEE Trans. Syst., Man, Cybern. B, vol. 34, no. 2, 2004, pp. 1045-1055.
[73] A. S. Jain and S. Meeran, "Deterministic job-shop scheduling: past, present and future," European Journal of Operational Research, vol. 113, 1999, pp. 390-434.
[74] H. Fisher and G. L. Thompson, "Probabilistic learning combinations of local job-shop scheduling rules," in Industrial Scheduling, Prentice-Hall, Englewood Cliffs, NJ, 1963, pp. 225-251.
[75] S. Lawrence, "Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques (Supplement)," Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, PA, 1984.
[76] D. Applegate and W. Cook, "A computational study of the job-shop scheduling problem," ORSA Journal on Computing, vol. 3, no. 2, 1991, pp. 149-156.
[77] L. Wang, Shop Scheduling with Genetic Algorithms. Beijing, China: Tsinghua University Press, 2003.
[78] http://www.xs4all.nl/~bcraenen/resources/csps_modelE_v20_d20.tar.gz
[79] http://mat.gsia.cmu.edu/COLOR/color.html
[80] http://hjworm.edu.chinaren.com/myresearch.htm
[81] ftp://mscmga.ms.ic.ac.uk/pub/jobshop1.txt

Weicai Zhong was born in Jiangxi, China, on September 26, 1977. He received the B.S. degree in computer science and technology from Xidian University, Xi'an, China, in 2000, and the Ph.D. degree in pattern recognition and intelligent information systems from the Institute of Intelligent Information Processing of Xidian University in 2004. He is now a Postdoctoral Fellow at Xidian University. His research interests include evolutionary computation, multiagent systems, image and video compression, pattern recognition, and data mining.

Jing Liu was born in Xi'an, China, on March 5, 1977. She received the B.S. degree in computer science and technology from Xidian University, Xi'an, China, in 2000, and the Ph.D. degree in circuits and systems from the Institute of Intelligent Information Processing of Xidian University in 2004. She is now a teacher at Xidian University. Her research interests include evolutionary computation, multiagent systems, and data mining.

Licheng Jiao (SM'89) was born in Shaanxi, China, on October 15, 1959. He received the B.S. degree from Shanghai Jiaotong University, Shanghai, China, in 1982, and the M.S. and Ph.D. degrees from Xi'an Jiaotong University, Xi'an, China, in 1984 and 1990, respectively. From 1984 to 1986, he was an Assistant Professor at the Civil Aviation Institute of China, Tianjin, China. From 1990 to 1991, he was a Postdoctoral Fellow in the National Key Lab for Radar Signal Processing, Xidian University, Xi'an, China. He is now the Dean of the School of Electronic Engineering and the Director of the Institute of Intelligent Information Processing of Xidian University. His current research interests include signal and image processing, nonlinear circuit and systems theory, learning theory and algorithms, optimization problems, wavelet theory, and data mining. He is the author of three books: Theory of Neural Network Systems (Xi'an, China: Xidian Univ. Press, 1990), Theory and Application on Nonlinear Transformation Functions (Xi'an, China: Xidian Univ. Press, 1992), and Applications and Implementations of Neural Networks (Xi'an, China: Xidian Univ. Press, 1996). He is the author or coauthor of more than 150 scientific papers.

Fig. 1. The model of the agent lattice.
Fig. 2. The comparison between MAEA-CSPs and the four existing algorithms.
Fig. 3. The SR and the AES of MAEA-CSPs for p=0.24-0.26.
Fig. 4. The SR and the ME of MAEA-CSPs for p=0.27-0.29.
Fig. 5. The ME of MAEA-CSPs for p=0.30-0.33.
Fig. 6. The characteristics of the diagonal lines; the number that each arrow points to is the difference or sum on the diagonal line. (a) The positive diagonal lines; (b) the negative diagonal lines.
Fig. 7. The performing process of the self-learning behavior for the CSAgent ⟨1, 2, 8, 4, 5, 7, 3, 6⟩. (a) and (h) are the states before and after performing the self-learning behavior; (b)-(g) are the states in each step.
Fig. 8. The average running times of MAEA-CSPs.
Fig. 9. The Gantt charts of the schedules corresponding to the best solutions of (a) LA26 (1218), (b) LA28 (1216), (c) LA30 (1355), and (d) LA37 (1397).

TABLE I. THE EXPERIMENTAL RESULTS OF MAEA-CSPS FOR BINARY CSPS
TABLE II. THE EXPERIMENTAL RESULTS OF MAEA-CSPS FOR THE DIMACS CHALLENGE
TABLE III. COMPARISON BETWEEN SSC AND MAEA-CSPS
TABLE IV. COMPARISON BETWEEN ERA AND MAEA-CSPS
TABLE V. THE AVERAGE RUNNING TIMES OF MAEA-CSPS AND ERA (S)
TABLE VI. THE AVERAGE RUNNING TIMES AND THE STANDARD DEVIATION OF MAEA-CSPS
TABLE VII. THE EXPERIMENTAL RESULTS OF MAEA-JSPS