IEICE TRANS. FUNDAMENTALS, VOL.E97–A, NO.8, AUGUST 2014

PAPER

Hybrid Consultant-Guided Search for the Traveling Salesperson Problem

Hiroyuki EBARA†a), Member, Yudai HIRANUMA†∗, and Koki NAKAYAMA†, Nonmembers

SUMMARY Metaheuristic methods have been studied for combinatorial optimization problems for some time. Recently, Consultant-Guided Search (CGS) has been proposed as a metaheuristic method for the Traveling Salesperson Problem (TSP). This approach is an algorithm in which a virtual person called a client creates a solution based on consultation with a virtual person called a consultant. In this research, we propose a parallel algorithm which uses the Ant Colony System (ACS) to create the solutions used by the consultants in Consultant-Guided Search, and calculate approximate solutions for the TSP. Finally, we carry out computer experiments using benchmark problems (TSPLIB). Our algorithm provides solutions with an error rate of less than 2% for problem instances with fewer than 2000 cities.
key words: Consultant-Guided Search, Traveling Salesperson Problem, Combinatorial Optimization Problem, Parallel Algorithm, Ant Colony Optimization

1. Introduction

In recent years, as the speed and ability of computers have continued to improve, the amount of calculation required for new problems has increased. When problems require huge calculations on a computer, researchers may apply known methods for solving combinatorial optimization problems. A combinatorial optimization problem is a problem of determining the combination that minimizes or maximizes the value of an objective function under given constraints.

It is rarely necessary that the value found for a combinatorial optimization problem be strictly optimal. An approximate solution with a certain degree of accuracy is often acceptable instead, because such a solution can generally be obtained in a short period of time. It is well known that the execution time for determining the exact solution increases exponentially with the size of the problem; obtaining an approximate solution sufficiently close to the optimal solution is therefore faster than finding the exact value, and methods for obtaining approximate solutions have been studied [1]. Among the many approximate solution methods, metaheuristics in particular have been studied extensively, because they provide general frameworks that can be adapted to many problems [2].

Swarm Intelligence is a class of metaheuristics that has been extensively studied in recent years as a method for solving optimization problems [3].

Manuscript received October 16, 2013.
Manuscript revised March 5, 2014.
†The authors are with Kansai University, Suita-shi, 564-8680 Japan.
∗Presently, with NTT Comware Corporation.
a) E-mail: [email protected]
DOI: 10.1587/transfun.E97.A.1728

In addition, the Consultant-Guided Search (CGS) algorithm has recently been proposed as a Swarm Intelligence algorithm [4]–[6]. This algorithm is inspired by the way real people make decisions based on advice received from consultants. Human behavior is complex, but CGS uses virtual people that follow only simple rules. There is also no leader who organizes the people; each acts on its own. Every virtual person plays the role of both a client and a consultant. The consultant builds a strategy (solution) to guide the client in creating a solution, and the client creates a solution based on the strategy that the consultant builds.

In this research, we obtain better approximate solutions for the traveling salesperson problem (TSP), a typical combinatorial optimization problem, than the existing CGS algorithm does, by using a hybrid CGS method combined with the Ant Colony System (ACS). ACS [7] is one of the extensions of Ant Colony Optimization (ACO) [8]; it differs from basic ACO in that it updates the pheromone every time a virtual ant chooses a city. The proposed method uses the solutions produced by ACS as the consultants' strategies in CGS, and the pheromone of ACS is carried over into CGS. In addition, the proposed method is parallelized using MPI communication. Computer experiments were carried out on a PC cluster, where we verify the effectiveness of our method.

2. Traveling Salesperson Problem

The Traveling Salesperson Problem (TSP) is the problem of finding the shortest cyclic path that starts from a city, visits each of the n cities exactly once, and returns to the starting city.

When C_{ij} is the distance between city i and city j, and V = {1, ..., n} is the set of n cities, the objective function to be minimized is as follows:

f(x) = \sum_{k=1}^{n-1} C_{x(k)\,x(k+1)} + C_{x(n)\,x(1)}    (1)

where x(k) = i indicates that the k-th city visited is city i.
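As a concrete illustration of Eq. (1), the following short Python sketch computes the length of a tour for a given visiting order; the function name and the distance-matrix representation are our own illustration, not part of the paper.

def tour_length(order, C):
    """Length of the cyclic tour visiting the cities in `order` (Eq. (1)).

    order is a permutation of the city indices and C[i][j] is the
    distance between cities i and j.
    """
    total = 0.0
    for k in range(len(order) - 1):
        total += C[order[k]][order[k + 1]]
    total += C[order[-1]][order[0]]   # closing edge back to the starting city
    return total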

3. Consultant-Guided Search

Consultant-Guided Search (CGS) is a recently proposed metaheuristic inspired by the way humans directly exchange information with one another [4]–[6].


When a client decides on an action, that action is sometimes based on advice from a consultant. CGS obtains solutions from this relationship between a consultant and the client receiving the advice. Each virtual person in this method plays the role of both consultant and client. The advice a consultant gives to a client is a solution, called a strategy. Since each virtual person acts as both client and consultant, the method is divided into two modes: one for building a strategy as a consultant and one for creating a solution as a client. These are called sabbatical mode and normal mode, respectively.

First, a consultant needs to build a strategy, i.e. a solution with which it will advise clients. Therefore, all of the virtual persons start the search in sabbatical mode. In sabbatical mode, the virtual person, as a consultant, builds a solution to guide clients. The solution is built on the basis of the distances between cities. Sabbatical mode continues until several solutions have been created, so that as good a strategy as possible is built. When sabbatical mode is completed, the virtual person moves to normal mode. In normal mode, the virtual person, as a client, selects a consultant and creates a new solution based on the strategy that consultant produced. If the solution the client creates is better than the consultant's strategy, the reputation of the consultant rises and the consultant becomes more likely to be selected by clients. A consultant's reputation also decays gradually over time. If the reputation falls below a certain value, the consultant moves back to sabbatical mode and starts to rebuild a strategy. CGS finds a solution using the following algorithm (a schematic code sketch is given after the list).

1. The algorithm creates the virtual persons and sends them to sabbatical mode.

2. In sabbatical mode, each virtual person creates a solution according to the strategy-construction formula. In normal mode, each virtual person creates a solution according to the solution-creation formula.

3. The algorithm updates the strategies after each virtual person generates a solution. In sabbatical mode, if the new solution is better than the previous one, it replaces the previous strategy. In normal mode, if the solution is better than the strategy the consultant used to create it, it replaces that strategy.

4. The algorithm updates the reputations.

5. If a consultant's reputation falls below a certain value, the consultant moves to sabbatical mode. If a consultant has built a strategy the predetermined number of times, it changes to normal mode.

6. If the termination criteria are met, the algorithm terminates the search.
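As a schematic illustration of steps 1 to 6, the following Python sketch wires the two modes together. It is our own sketch, not the authors' implementation: the callables build_strategy, create_solution, choose_consultant, and tour_len are placeholders supplied by the caller, the reputation bonus for successful advice is simplified to an increment of 1, and the fading rate r is arbitrary; the defaults for rep_ini, rep_min, and leave follow Table 5 later in the paper.

def cgs_loop(n_people, iterations, build_strategy, create_solution,
             choose_consultant, tour_len, rep_ini=6, rep_min=1, leave=100, r=1e-3):
    """Schematic CGS main loop following steps 1-6 (illustrative sketch only)."""
    # Step 1: create virtual persons; everyone starts in sabbatical mode.
    people = [dict(mode="sabbatical", strategy=None, reputation=rep_ini, built=0)
              for _ in range(n_people)]
    best = None
    for _ in range(iterations):
        for p in people:
            if p["mode"] == "sabbatical":
                sol = build_strategy()                       # Step 2 (Eqs. (2), (3))
                p["built"] += 1
                # Step 3: keep the better of the new solution and the old strategy.
                if p["strategy"] is None or tour_len(sol) < tour_len(p["strategy"]):
                    p["strategy"] = sol
            else:
                c = choose_consultant(people)                # reputation-based (Eq. (5))
                sol = create_solution(c["strategy"])         # Step 2 (Eq. (4))
                if tour_len(sol) < tour_len(c["strategy"]):  # Step 3: strategy replaced,
                    c["strategy"] = sol                      # consultant's reputation rises
                    c["reputation"] += 1
            if best is None or tour_len(sol) < tour_len(best):
                best = sol
        for p in people:                                     # Steps 4-5
            p["reputation"] *= (1.0 - r)                     # gradual fading (Eq. (6))
            if p["mode"] == "normal" and p["reputation"] < rep_min:
                p["mode"], p["built"] = "sabbatical", 0      # rebuild a strategy
            elif p["mode"] == "sabbatical" and p["built"] >= leave:
                p["mode"] = "normal"
    return best                                              # Step 6: best solution found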

When a virtual person builds a strategy in sabbatical mode, it basically selects, as the next city, the city closest to the city where it currently is. In some cases, the virtual person instead decides the next city probabilistically. The formulas used in sabbatical mode are as follows.

j = \begin{cases} \arg\min_{l \in N_i^k} \{ d_{il} \} & \text{if } a \le a_0 \\ J & \text{otherwise} \end{cases}    (2)

p_{ij}^k = \frac{(1/d_{ij})^{\beta}}{\sum_{l \in N_i^k} (1/d_{il})^{\beta}}    (3)

Here l ∈ N_i^k means that city l is included in the feasible neighborhood of virtual person k currently at city i. d_{il} is the distance between cities i and l, a (0 ≤ a ≤ 1) is a random variable, and a_0 (0 ≤ a_0 ≤ 1) is a parameter. J is a random variable selected according to the probability distribution given by p_{ij}^k, and β is a parameter.
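A minimal Python sketch of the sabbatical-mode city selection of Eqs. (2) and (3) is shown below; the function and variable names are our own, and the default parameter values follow Table 5 later in the paper.

import random

def next_city_sabbatical(i, neighborhood, d, a0=0.9, beta=12.0):
    """Choose the next city from city i in sabbatical mode (Eqs. (2) and (3)).

    neighborhood is the feasible neighborhood N_i^k (unvisited candidate cities)
    and d[i][l] is the distance between cities i and l.
    """
    a = random.random()                                   # random variable a in [0, 1]
    if a <= a0:
        return min(neighborhood, key=lambda l: d[i][l])   # Eq. (2): nearest candidate
    weights = [(1.0 / d[i][l]) ** beta for l in neighborhood]
    return random.choices(neighborhood, weights=weights, k=1)[0]   # J drawn via Eq. (3)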

In normal mode, the client selects the next city based on the strategy of a consultant. More specifically, the client first selects a consultant from among the virtual persons in normal mode, including itself, based on the consultants' reputations. The client then creates a new solution by the following formula, using the strategy that the selected consultant built in sabbatical mode.

j = \begin{cases} v & \text{if } v \ne \text{null} \wedge q \le q_0 \\ \arg\min_{l \in N_i^k} \{ d_{il} \} & \text{if } (v = \text{null} \vee q > q_0) \wedge b \ge b_0 \\ J & \text{otherwise} \end{cases}    (4)

v represents the city connected to the current city in the consultant's strategy (null if no such city can be used). q and b (0 ≤ q, b ≤ 1) are random variables, and q_0 and b_0 (0 ≤ q_0, b_0 ≤ 1) are parameters.
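The client-side rule of Eq. (4) can be sketched as follows; the names and the way the consultant's strategy is looked up are our own illustration, and the defaults follow the Table 5 values.

import random

def next_city_normal(i, neighborhood, d, strategy_next, q0=0.98, b0=0.98, beta=12.0):
    """Choose the next city from city i in normal mode (Eq. (4)).

    strategy_next(i) returns the city that follows i in the selected consultant's
    strategy, or None if it cannot be used (this plays the role of v in Eq. (4)).
    """
    v = strategy_next(i)
    q, b = random.random(), random.random()
    if v is not None and q <= q0:
        return v                                          # follow the consultant's advice
    if b >= b0:
        return min(neighborhood, key=lambda l: d[i][l])   # greedy nearest city
    weights = [(1.0 / d[i][l]) ** beta for l in neighborhood]
    return random.choices(neighborhood, weights=weights, k=1)[0]   # probabilistic fallback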

A consultant's reputation changes according to the solutions its clients obtain with its advice. When a client chooses a consultant, the choice is based on the consultants' reputation values and results: consultant k is selected with the probability given by the following formula.

p_k = \frac{\mathrm{reputation}_k^{\alpha}\,\mathrm{result}_k^{\gamma}}{\sum_{c \in C} \mathrm{reputation}_c^{\alpha}\,\mathrm{result}_c^{\gamma}}    (5)

C is the set of consultants in normal mode, and α and γ are parameters. reputation_k is the reputation value of consultant k, and result_k is the reciprocal of the length of that consultant's solution. In addition, every reputation is gradually reduced by the following formula:

\mathrm{reputation}_k \leftarrow \mathrm{reputation}_k (1 - r)    (6)

r represents the reduction rate.
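A small sketch of the consultant-selection probability of Eq. (5) and the reputation fading of Eq. (6); the dictionary layout and the names are our own, and alpha and gamma default to the CGS values listed in Table 5.

import random

def choose_consultant(consultants, alpha=7.0, gamma=7.0):
    """Select a consultant with the probability p_k of Eq. (5).

    Each consultant is represented here as a dict with a "reputation" value and
    a "result" value (the reciprocal of its strategy length).
    """
    weights = [c["reputation"] ** alpha * c["result"] ** gamma for c in consultants]
    return random.choices(consultants, weights=weights, k=1)[0]

def fade_reputations(consultants, r):
    """Gradually reduce every reputation by the reduction rate r (Eq. (6))."""
    for c in consultants:
        c["reputation"] *= (1.0 - r)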

4. Related Works

4.1 Consultant-Guided Search

CGS, used in this study, is a new metaheuristic approach that was developed recently through Iordache's work [4]. In addition to the TSP, CGS has also been applied to the Quadratic Assignment Problem (QAP) [6] by the same author, and it produced good results compared with the Max-Min Ant System (MMAS) [9]. The author also considers hybrid metaheuristic approaches that combine this new approach with concepts from other optimization techniques. The construction of a solution in CGS is similar to the way a set of ants builds a solution in Ant Programming (AP) [10], [11].


The solution is selected from a candidate set probabilistically; the desirability is given by the amount of pheromone in AP, and by the reputation and strategy of the consultant in CGS. The author also points out that the operation of a client selecting a consultant is similar to the waggle dance in Bee Colony Optimization (BCO).

4.2 Ant Colony Optimization

In the study [12] by Manfrin et al., a comparative experiment was performed for 4 parallel algorithms based on MMAS. Search processes with both synchronous and asynchronous communication were examined, and the authors also tested fully independent parallel runs in which no communication is performed. The parallel algorithms were Completely-Connected, Replace-Worst, Hypercube, and Ring. Completely-Connected is the method in which the solution from the PC with the best solution in the entire PC cluster is sent to all other PCs. Replace-Worst is the method in which the PC with the best solution sends its own solution to the PC with the worst solution in the entire PC cluster. Hypercube is the method in which each PC is located on a vertex of a hypercube and shares its best solution only with PCs on directly connected vertices. Ring is the method in which the PCs are connected in a ring and each sends its best solution to the next PC. According to the experimental results, running independently without exchanging solution information gave the best solutions. With the 4 parallel approaches, all the PCs were reported to converge quickly on the same solution. The 4 parallel approaches used the shared best solution in the global pheromone update of every iteration; it is therefore presumed that the PCs explored nearly the same search space, the amount of pheromone increased rapidly at the same places, and the processes converged at an early stage.

Hassan et al. [13] have studied the number of virtual ants in ACS. Their method compares the quality of the solution at each iteration with the previous one; if it is worse, the number of ants is increased. In addition, an adaptive local search is applied each time. The number of ants does not increase indefinitely: the maximum is initially set to 1.5 times the initial value, but if the number of ants reaches this maximum, the maximum is increased a little over time. The adaptive local search uses λ-opt (with λ between 2 and 5). Comparison experiments show that this method performs better than ACS and MMAS.

There has also been research on hybrid, parallelized metaheuristic approaches, such as the combination of ACS and the Genetic Algorithm (GA). In the study of Chen et al. [14], the solution obtained by ACS is improved by GA; studies that combine ACS and GA are common. In that study, each ant of ACS performs the search as a chromosome of the GA. As a result, the search speeds up convergence by exchanging the best-solution information with other groups and updating the entire pheromone information. The study tried 7 different ways of exchanging solution information. The first way was to share the best solution among all groups. In the second way, group numbers were represented as binary values, and the algorithm exchanged solutions among groups that differed in the last bit. The third way was to pass the solution to the next group. In the fourth way, group numbers were represented as binary values, and the algorithm exchanged solutions among groups that differed by 1 bit. The fifth, sixth, and seventh ways combined the first and second, the first and third, and the first and fourth, respectively.

Additional studies using Ant Colony Optimization other than those mentioned above include [15]–[19].

4.3 Other Metaheuristic Approaches

In the study [20] of Shi et al., the authors propose a hybrid algorithm combining Particle Swarm Optimization (PSO), which is a kind of swarm intelligence, and GA. Both PSO and GA create solutions from populations of individuals, and the hybrid exploits this similarity. First, PSO and GA search multiple times independently; after that, the method exchanges individuals between PSO and GA and repeats the search. This algorithm has produced good results compared to traditional PSO.

Several other metaheuristic approaches have also been proposed recently in addition to CGS, such as Migrating Birds Optimization [21], based on the behavior of a flock of migratory birds flying in a V shape, the Bat Algorithm [22], based on the echolocation behavior of bats, and Ray Optimization [23], based on the refraction of light according to Snell's law.

5. Proposed Method

5.1 Experimental System

The experimental environment used in this study was constructed using SCore, a free software MPI environment distributed by the PC Cluster Consortium [24]. There are 10 PCs for calculation processing and 1 server in the laboratory, connected to the same LAN through a Gigabit Ethernet network switch, and the machines communicate with each other through the MPI environment (Fig. 1). Figure 1 shows the details of the experimental system used in this study: 1 server and 10 calculation PCs. Tables 1 and 2 show their specifications.

5.2 Proposed Method

In this study, we aim to obtain better approximate solutions for the TSP by using the solutions of ACS in CGS. The main ideas of the proposed method are to use them as the strategies of the consultants and to carry over the pheromone information. Using the solutions obtained by ACS as the consultants' strategies in CGS enables a more effective search.


Fig. 1 Experiment system.

Table 1 Performance of server.

CPU      Intel Xeon X5470 3.33 GHz
Memory   4 GB
OS       CentOS 5.6
MPI      SCore version 7.0.1

Table 2 Performance of calculating PCs.

CPU      Intel Core2Duo E6850 3.00 GHz
Memory   4 GB
OS       CentOS 5.6
MPI      SCore version 7.0.1

The carryover of the pheromone information makes it possible to explore a wide range by reusing the pheromone information in the next ACS phase.

The following is a description of the characteristics of the proposed method.

1. The strategy of the consultant
The client creates its solution using the strategy of a consultant in CGS. The formula for constructing the consultant's strategy is Eq. (2), which uses only the distance information between cities. Therefore, similar strategies tend to be built often, although there is some randomness. We avoid this situation by using the solutions obtained by another algorithm. In the proposed method, each virtual ant of ACS performs the search as a virtual person of CGS. In the ACS phase, each virtual ant always keeps the best of all the solutions it has obtained. After this phase, each virtual ant takes its best solution into CGS, which starts to search using these solutions as the consultants' strategies. The clients can thus create a variety of solutions, because the advice is obtained indirectly through pheromones as well as through the probabilistic, distance-based selection of cities.

2. The carryover of the pheromone information
In the proposed method, CGS searches after ACS has converged. At the end of ACS, there is a large difference between the amount of pheromone on the edges included in the best solution found so far and the amount on the other edges. Conventional CGS uses the distances between cities both in the construction of the consultant's strategy and in the creation of the client's solution. Because conventional CGS does not use indirect information such as pheromones, that information neither affects its search directly nor can be taken over. In the proposed method, however, CGS takes over the pheromone information from ACS after its search and keeps updating it, so it can search intensively in the region close to the previous search while still behaving slightly differently from ACS.

5.3 Parallelization

The proposed method is parallelized with MPI. By sharing information among several PCs, this parallelization carries out a more extensive search than is possible on a single PC.

The calculation of the proposed method is started by the server that manages the PC cluster, which launches a job on each calculation PC; each job is assigned to one calculation PC. The management server assigns the jobs to the calculation PCs and collects the results after all calculations have finished. Once the calculation has started, the calculation PCs share information by communicating over MPI. The calculation thus runs as multiple processes; the solution space is not divided, and each process searches independently except for the shared information. Communication is performed asynchronously, so the calculation process is not interrupted by the communication process. The information shared across the entire PC cluster is the best solution each PC has found.

In the proposed method, each calculation PC holds its own pheromone information. When a PC performs the global pheromone update, it uses the solutions that have been created. Normally, the PC updates with the best solution among those produced by its own virtual ants. However, when the best solution has not improved for a while, early convergence may have concentrated too much pheromone on the edges of that solution. The PC can therefore search a more diverse range, and slow down convergence, by occasionally using the best solution of another PC for the update.

A schematic representation of the parallelization of the proposed method is shown in Fig. 2.
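The paper gives no code for this MPI layer; the following mpi4py sketch merely illustrates the kind of asynchronous, non-blocking exchange of each process's best solution described above. All function and variable names are our own assumptions, not the authors' implementation (which runs in the SCore environment).

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def share_best(best_tour, best_len, pending):
    """Non-blocking send of this process's best solution to every other process."""
    for dest in range(size):
        if dest != rank:
            pending.append(comm.isend((best_len, best_tour), dest=dest, tag=0))
    return pending           # caller should later call MPI.Request.Waitall(pending)

def receive_best(best_tour, best_len):
    """Drain any solutions that have arrived; keep the shortest tour seen so far."""
    status = MPI.Status()
    while comm.iprobe(source=MPI.ANY_SOURCE, tag=0, status=status):
        other_len, other_tour = comm.recv(source=status.Get_source(), tag=0)
        if other_len < best_len:
            best_tour, best_len = other_tour, other_len
    return best_tour, best_len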

5.4 Proposed Algorithm

The proposed algorithm consists of Phase 1 (ACS) and Phase 2 (CGS), as shown below.


Fig. 2 Parallelization of the proposed method.

1. The creation of virtual ants (persons): The number of virtual agents is decided randomly between 20 and 30.

2. Initialize the pheromone: The initial value is

\frac{1}{(\text{the number of cities } N)} \times \frac{1}{(\text{the length of the solution created by the nearest neighbor method})}    (7)

3. Phase 1: This phase repeatedly searches with the virtual ants until the solution no longer improves within a specified number of iterations.

4. Set the virtual persons to normal mode: The virtual persons are set to normal mode so that the solution created by each virtual ant is used as that consultant's strategy.

5. Phase 2: The search starts after the solutions found by the virtual ants (persons) in Phase 1 are assigned as the strategies of the consultants. The pheromone continues to be updated in this phase. The phase repeatedly searches with the virtual persons until the solution no longer improves within a specified number of iterations.

6. Iterate the search: After Phase 2 ends, go back to Phase 1. If the solution does not improve in a Phase 1 that follows a Phase 2, initialize the pheromone.

7. Termination condition: The termination condition is determined by the search time; the search terminates after an interval set in advance and outputs the best solution found.

The flowchart of the proposed method is shown in Fig. 3.

Fig. 3 Flowchart of the proposed method.
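The alternation of Phase 1 and Phase 2 in steps 3 to 7 can be summarized by the sketch below. It is our own schematic under simplifying assumptions: acs_phase and cgs_phase stand for the two searches (each running until its own stagnation criterion and returning its best tour, that tour's length, and the per-ant best solutions), the pheromone matrix is assumed to be shared state that both phases update in place, and init_pheromone applies Eq. (7).

import time

def accgs(acs_phase, cgs_phase, init_pheromone, time_limit_s):
    """Schematic ACCGS driver: alternate ACS (Phase 1) and CGS (Phase 2)."""
    init_pheromone()                            # Eq. (7)
    best_tour, best_len = None, float("inf")
    start = time.time()
    while time.time() - start < time_limit_s:   # step 7: stop on the time limit
        tour, length, ant_best_solutions = acs_phase()        # Phase 1 (ACS)
        if length >= best_len:
            init_pheromone()                    # step 6: Phase 1 brought no improvement
        else:
            best_tour, best_len = tour, length
        # Phase 2 (CGS): virtual persons start in normal mode with the ants'
        # best solutions as the consultants' strategies; pheromone keeps updating.
        tour, length, _ = cgs_phase(ant_best_solutions)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len                  # step 7: output the best solution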

6. Experimental Result

6.1 Experimental Environment

The TSP instances treated in this study are taken from TSPLIB [25], which distributes TSP benchmarks. We conducted experiments on the following 7 problem instances: rat575, rat783, pr1002, u1060, u2152, pr2392, and pcb3038.

We carry out several experiments in order to measure the accuracy of the proposed method. First, we compare the proposed method (ACCGS), ACS (only Phase 1), and CGS (only Phase 2) in order to verify the effectiveness of the proposed method (Experiment 1).


Table 3 Parameters (common).

Parameter      Value
nthread        2
npc            10
nagent low     20
nagent high    30
lph            0.10
gph            0.10

Table 4 Parameters (ACS).

Parameter   Value
ACS α       1.0
ACS β       3.0
ACS q0      0.99

Table 5 Parameters (CGS).

Parameter     Value
a0            0.9
CGS α         7.0
CGS β         12.0
γ             7.0
CGS q0        0.98
b0            0.98
leave         100
repini        6
repmax        40
repmin        1
bonus         8
fadingranks   3
r0            3.0×10⁻⁷
w             1000
kw            3.0

Second, we carried out an experiment in which the number of agents was changed, in order to verify the appropriate number of virtual agents (ants or persons) in the proposed algorithm (Experiment 2). Finally, we compare ACCGS, the proposed method, with ACCGS PHINIT, which initializes the pheromone information after every Phase 2, in order to verify how carrying over the pheromone information from Phase 1 to Phase 2, and continuing to update it in Phase 2, influences the results (Experiment 3).

We show the various parameters of the experiment inTables 3, 4, and 5.

Table 3 shows the parameter values used in common, such as those concerning the number of virtual agents. nthread is the number of threads running on a single PC, and npc is the number of PCs. nagent low and nagent high are the lower and upper limits on the number of virtual agents performing the search; the number of virtual agents is determined randomly by each process between these limits. lph is the rate of pheromone increase in the local pheromone update, and gph is the rate of pheromone increase in the global pheromone update.

Table 4 shows the parameter values used in Phase 1 (ACS). ACS α and ACS β are, respectively, the weight of the distance between cities and the weight of the pheromone amount in the pseudo-random proportional rule. ACS q0 is the parameter that decides how the next city is determined in the pseudo-random proportional rule.
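The paper does not spell out the ACS formulas; the sketch below shows the standard ACS pseudo-random proportional rule and local pheromone update of Dorigo and Gambardella [7], with q0 and the exponent weights named after the Table 4 parameters according to our reading of the text. It is an illustrative sketch, not the authors' code.

import random

def acs_next_city(i, neighborhood, tau, d, q0=0.99, w_dist=1.0, w_pher=3.0):
    """Pseudo-random proportional rule of ACS (after Dorigo and Gambardella [7]).

    tau[i][l] is the pheromone on edge (i, l) and d[i][l] the distance; by our
    reading, q0, w_dist, and w_pher correspond to ACS q0, ACS alpha, and ACS beta.
    """
    def desirability(l):
        return (tau[i][l] ** w_pher) * ((1.0 / d[i][l]) ** w_dist)

    if random.random() <= q0:
        return max(neighborhood, key=desirability)          # exploitation
    weights = [desirability(l) for l in neighborhood]       # biased exploration
    return random.choices(neighborhood, weights=weights, k=1)[0]

def local_pheromone_update(tau, i, j, lph, tau0):
    """Standard ACS local update applied to each edge (i, j) as it is traversed."""
    tau[i][j] = (1.0 - lph) * tau[i][j] + lph * tau0
    tau[j][i] = tau[i][j]                                   # symmetric TSP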

Table 5 shows the parameter values used in Phase 2 (CGS). a0 plays the same role as ACS q0 in Table 4. CGS α is the weight of the consultant's reputation used when calculating the probability of selecting a consultant, CGS β is the weight of the distance in the pseudo-random proportional rule, and γ is the weight of the personal preference used in calculating the probability of selecting a consultant. b0 and CGS q0 are the parameters that decide whether the next city is determined by the city the consultant recommends. leave is the number of strategy constructions a consultant performs in sabbatical mode. repini, repmax, and repmin are the initial, maximum, and minimum values of the reputation, respectively; the reputation increases and decreases between repmin and repmax and does not exceed this range. bonus is a value added to the reputation of the consultant whose advice led a client to the best solution. r0 and fadingranks are, respectively, the parameter needed to calculate the rate at which reputations are reduced and the number of consultants involved in that calculation. w is the period over which successes are counted, and kw is the parameter controlling the difference between the rate of decrease when the algorithm does not succeed and the rate when it does.

In each experiment, the same parameters are used for all problem instances except for the search time. The search time is determined by the number of cities: 2 hours for instances with fewer than 2000 cities, 3 hours for instances with between 2000 and 3000 cities, and 5 hours for instances with more than 3000 cities. These parameter values were determined with reference to preliminary experiments. The average and minimum values are calculated from 10 search runs. We performed 15 runs for some problem instances as a trial, and the results hardly changed; we therefore consider 10 runs adequate.

6.2 Experiment 1 (Performance Evaluation of the Proposed Method)

We compared ACCGS (the proposed method), ACS (only Phase 1), and CGS (only Phase 2) in Experiment 1.

The ACS-only algorithm initializes the pheromone information each time its termination condition is met. In CGS, all virtual agents start in sabbatical mode, as in the conventional Consultant-Guided Search. The search time was set according to the computational cost of each problem instance: rat575, rat783, pr1002, and u1060 were given 2 hours; u2152 and pr2392 were given 3 hours; and pcb3038 was given 5 hours. We show these experimental results in Table 6 and Figs. 4 and 5.


Table 6 Result of experiment 1 (compare ACS, CGS, ACCGS).

Problem   Algorithm  Average     Error rate[%]  Minimum     Error rate[%]
rat575    ACS        6862.87     1.33           6835.71     0.93
          CGS        6861.01     1.30           6850.02     1.14
          ACCGS      6853.19     1.18           6828.37     0.82
rat783    ACS        8945.97     1.59           8924.31     1.34
          CGS        8994.58     2.14           8948.63     1.62
          ACCGS      8920.45     1.30           8882.73     0.87
pr1002    ACS        262337.06   1.27           261647.84   1.00
          CGS        264797.42   2.22           264132.46   1.96
          ACCGS      261909.04   1.11           260981.47   0.75
u1060     ACS        229791.05   2.54           228346.92   1.90
          CGS        229125.39   2.25           227798.83   1.65
          ACCGS      228114.18   1.79           227333.66   1.45
u2152     ACS        65809.63    2.42           65628.97    2.14
          CGS        67446.90    4.97           66761.69    3.90
          ACCGS      65621.88    2.22           65380.55    1.88
pr2392    ACS        388383.71   2.74           384362.46   1.71
          CGS        405769.24   7.34           402506.64   6.47
          ACCGS      386072.60   2.13           384499.13   1.67
pcb3038   ACS        159241.46   15.65          158343.06   15.00
          CGS        158333.16   14.99          157919.31   14.69
          ACCGS      156661.70   13.78          155358.11   12.83

Fig. 4 Result of experiment 1 (compare ACS, CGS, ACCGS) (Average).

Fig. 5 Result of experiment 1 (compare ACS, CGS, ACCGS) (Minimum).

The various settings are as shown in Tables 3, 4, and 5. Table 6 shows the results obtained by ACCGS, ACS, and CGS for each problem instance; for each instance, the smallest value in each column is the best. The averages and minimums are derived from 10 search runs. The error rate is obtained by the following equation from the results of this experiment and the known optimum values published in TSPLIB [25].

Table 7 Optimum values of TSPLIB problem instances.

Problem   Best known solution
rat575    6773
rat783    8806
pr1002    259045
u1060     223094
u2152     64253
pr2392    378032
pcb3038   137694

p = \frac{x - x_{knownbest}}{x_{knownbest}} \times 100    (8)

p is the error rate, x_{knownbest} is the known optimum, and x is the obtained result. In Experiment 1, the optimum solutions shown in Table 7 could not be found for any problem instance.
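As a quick check of Eq. (8), the following snippet reproduces the 0.82% entry of Table 6 from the ACCGS minimum for rat575 and the best known value in Table 7; the function name is our own.

def error_rate(x, x_known_best):
    """Error rate p of Eq. (8), in percent."""
    return (x - x_known_best) / x_known_best * 100.0

# rat575: the ACCGS minimum 6828.37 vs. the best known value 6773 (Table 7)
print(round(error_rate(6828.37, 6773), 2))   # prints 0.82, matching Table 6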

Figures 4 and 5 are graphs of Table 6. In Figs. 4 and 5, the horizontal axes represent the problem instances and the vertical axes the error rate relative to the known optimal value.

From Table 6 and Figs. 4 and 5, it can be seen that ACCGS is better in all instances than both the CGS and the ACS algorithms of which it is composed. Since CGS uses only the distance information between cities, it builds many similar strategies. Using ACS, which exploits both the pheromone information and the distance information, diversifies the strategies of the consultants; the search range of Phase 2 therefore spreads and better solutions are obtained. In addition, since the algorithm carries over the pheromone information, it was able to explore intensively the space close to the range of the previous Phase 1 search.

As shown in Table 6, ACS is good at solving rat783, pr1002, u2152, and pr2392, while CGS is good at solving rat575, u1060, and pcb3038. From this, we believe that each algorithm has strengths and weaknesses depending on the problem, and that ACCGS achieves a wider variety of searches by combining CGS and ACS.

6.3 Experiment 2 (Measurement of the Effectiveness of Changing the Number of Virtual Agents)

We conducted an experiment to verify whether the chosen range for the number of virtual agents is really appropriate. Specifically, the experiment examines how much the results change when the range of the number of virtual agents is changed. We ran experiments on 4 ranges: 3 to 10, 10 to 20, 20 to 30, and 30 to 40 virtual agents. The search time was set according to the computational cost of each problem instance, as in Experiment 1: rat575, rat783, pr1002, and u1060 were given 2 hours; u2152 and pr2392 were given 3 hours; and pcb3038 was given 5 hours. We show the experimental results in Tables 8 to 14 and Figs. 6 to 12. In Experiment 2, the optimum solutions shown in Table 7 could not be found for any problem instance.

From the results in Tables 8 to 14 and Figs. 6 to 12, we see that the error rate is smallest when the range of 20 to 30 virtual agents is used, although there are differences among the instances.


Table 8 Result of experiment 2 (comparison of the number of virtual agents in rat575).

Number of virtual agents   Average   Error rate[%]   Minimum   Error rate[%]
3 to 10                    6864.96   1.36            6843.24   1.04
10 to 20                   6857.37   1.25            6839.94   0.99
20 to 30                   6853.19   1.18            6828.37   0.82
30 to 40                   6862.99   1.33            6842.96   1.03

Table 9 Result of experiment 2 (comparison of the number of virtual agents in rat783).

Number of virtual agents   Average   Error rate[%]   Minimum   Error rate[%]
3 to 10                    8997.28   2.17            8967.23   1.83
10 to 20                   8929.31   1.40            8904.30   1.12
20 to 30                   8920.45   1.30            8882.73   0.87
30 to 40                   8945.81   1.59            8913.62   1.22

Table 10 Result of experiment 2 (comparison of the number of virtual agents in pr1002).

Number of virtual agents   Average     Error rate[%]   Minimum     Error rate[%]
3 to 10                    268527.30   3.66            266944.66   3.05
10 to 20                   267237.61   3.16            262022.70   1.15
20 to 30                   261909.04   1.11            260981.47   0.75
30 to 40                   264722.44   2.19            263820.92   1.84

Table 11 Result of experiment 2 (comparison of the number of virtual agents in u1060).

Number of virtual agents   Average     Error rate[%]   Minimum     Error rate[%]
3 to 10                    233159.44   4.05            232084.95   3.57
10 to 20                   234290.12   4.55            231018.83   3.09
20 to 30                   228114.18   1.79            227333.66   1.45
30 to 40                   229636.09   2.47            228032.18   1.76

Table 12 Result of experiment 2 (comparison of the number of virtual agents in u2152).

Number of virtual agents   Average    Error rate[%]   Minimum    Error rate[%]
3 to 10                    67615.59   5.23            67362.93   4.84
10 to 20                   67104.60   4.44            65572.57   2.05
20 to 30                   65621.88   2.22            65380.55   1.88
30 to 40                   65913.38   2.58            65671.55   2.21

Table 13 Result of experiment 2 (comparison of the number of virtual agents in pr2392).

Number of virtual agents   Average     Error rate[%]   Minimum     Error rate[%]
3 to 10                    408516.71   8.07            406290.32   7.48
10 to 20                   404778.15   7.07            387133.62   2.41
20 to 30                   386072.60   2.13            384499.13   1.67
30 to 40                   389824.76   3.12            385014.89   1.85

As the number of virtual agents increases, the number of times a short edge is selected also increases, and the search tends to fall into a local optimum more quickly.

6.4 Experiment 3 (Evaluation If the Algorithm Does Not Carry over Pheromone Information)

The results of Experiment 2 confirmed that the number of virtual agents used in the proposed method is appropriate. In this section, we conduct an experiment to determine the influence that the carryover of the pheromone information has on the search.

In the proposed method, the pheromone information is carried over to Phase 2, where it continues to be updated.

Table 14 Result of experiment 2 (comparison of the number of virtual agents in pcb3038).

Number of virtual agents   Average     Error rate[%]   Minimum     Error rate[%]
3 to 10                    156930.45   13.97           156335.89   13.54
10 to 20                   156052.67   13.33           155485.25   12.92
20 to 30                   156661.70   13.78           155358.11   12.83
30 to 40                   157284.31   14.23           156070.22   13.35

Fig. 6 Result of experiment 2 (change the number of virtual agents in rat575).

Fig. 7 Result of experiment 2 (change the number of virtual agents in rat783).

Fig. 8 Result of experiment 2 (change the number of virtual agents in pr1002).

In this experiment, ACCGS PHINIT, which initializes the pheromone information after the end of each Phase 2, is compared with the proposed method in order to confirm the effectiveness of carrying over the pheromone information.


Fig. 9 Result of experiment 2 (change the number of virtual agents in u1060).

Fig. 10 Result of experiment 2 (change the number of virtual agents in u2152).

Fig. 11 Result of experiment 2 (change the number of virtual agents in pr2392).

Fig. 12 Result of experiment 2 (change the number of virtual agents in pcb3038).

Table 15 Result of experiment 3 (comparison of ACCGS and ACCGS PHINIT).

Problem   Algorithm      Average     Error rate[%]  Minimum     Error rate[%]
rat575    ACCGS          6853.19     1.18           6828.37     0.82
          ACCGS PHINIT   6854.00     1.20           6839.38     0.98
rat783    ACCGS          8920.45     1.30           8882.73     0.87
          ACCGS PHINIT   8936.33     1.48           8898.46     1.05
pr1002    ACCGS          261909.04   1.11           260981.47   0.75
          ACCGS PHINIT   262231.25   1.23           261480.02   0.94
u1060     ACCGS          228114.18   1.79           227333.66   1.45
          ACCGS PHINIT   229434.49   2.38           227993.24   1.74
u2152     ACCGS          65621.88    2.22           65380.55    1.88
          ACCGS PHINIT   65852.90    2.49           65576.61    2.06
pr2392    ACCGS          386072.60   2.13           384499.13   1.67
          ACCGS PHINIT   388325.22   2.72           384571.95   1.73
pcb3038   ACCGS          156661.70   13.78          155358.11   12.83
          ACCGS PHINIT   157644.27   14.49          156540.29   13.69

Fig. 13 Result of experiment 3 (comparison of ACCGS with ACCGS PHINIT) (Average).

Fig. 14 Result of experiment 3 (comparison of ACCGS with ACCGS PHINIT) (Minimum).

The search time was set according to the computational cost of each problem instance, as in Experiment 1: rat575, rat783, pr1002, and u1060 were given 2 hours; u2152 and pr2392 were given 3 hours; and pcb3038 was given 5 hours. We show the experimental results in Table 15 and Figs. 13 and 14.

From the results in Table 15 and Figs. 13 and 14, the proposed method, which carries over the pheromone information, was superior in all cases. In Experiment 3, the optimum solutions shown in Table 7 could not be found for any problem instance.

At the end of Phase 1, the difference between the amount of pheromone on the edges of the best solution and the amount on the other edges is large; this difference becomes smaller as the updates continue in Phase 2. This indicates that the algorithm intensively explored solutions near those found in the last Phase 1 search.


In CGS, the search can also proceed without depending on the carried-over pheromone information. As a result, the algorithm can reduce the difference in the amount of pheromone between edges, and by examining areas near those searched in the previous iteration, it becomes more likely that cities that were not selected before will be visited. MMAS has a smoothing function called PTS, which adds pheromone to edges with a small amount of pheromone; the carryover of the pheromone information in the proposed method resembles PTS.

It is important that a heuristic keep a balance between the exploration of new areas of the search space and the exploitation of promising areas already searched. In ACS, the pheromone information enhances convergence toward promising search regions. This convergence is discarded in ACCGS PHINIT, but it is retained in ACCGS. In CGS, diversity is enhanced by the sabbatical leave, which allows regions of the search space that are no longer promising to be abandoned and new regions to be explored. Thereby, the proposed method (ACCGS) not only improves convergence but also maintains diversity.

7. Conclusion

In this study we proposed the Hybrid Consultant-Guided Search for the Traveling Salesperson Problem in a parallel environment. The features of the proposed method include the following 3 points. First, we periodically used the best solution shared among the parallel processes when updating the global pheromone of ACS, in order to increase the diversity of the solutions. Second, in CGS, we used the best solution each virtual ant obtained in ACS. Third, we updated the pheromone in CGS as well as in ACS.

The results of Experiment 1 indicate that the search performance of ACCGS outperforms that of ACS and CGS. ACCGS obtained better solutions by setting the solutions found with the help of the pheromone information as the strategies of the consultants.

The results of Experiment 3 indicate the effectiveness of carrying over the pheromone information in ACCGS. By carrying over the pheromone information, the method intensively searches the solution space where pheromone accumulated in Phase 1. In Phase 2, ACCGS sets the solutions obtained in Phase 1 as the strategies of the consultants. In addition, CGS has the sabbatical leave, which allows regions of the search space that are not promising to be abandoned and new regions to be explored. The proposed method can therefore search both more diversely and more intensively, and it obtained better approximate solutions than the existing CGS and ACS.

In future research, we will compare the performance of the proposed method with conventional methods, for example, the combination of ACS and GA [14].

References

[1] S.M. Sait and H. Youssef, Iterative Computer Algorithms with Applications in Engineering, IEEE Computer Society Press, 1999.
[2] C.R. Reeves (ed.), Modern Heuristic Techniques for Combinatorial Problems, Halsted Press, 1993.
[3] A. Abraham, C. Grosan, and V. Ramos (eds.), Swarm Intelligence in Data Mining, Springer-Verlag New York, 2006.
[4] S. Iordache, "Consultant-guided search: A new metaheuristic for combinatorial optimization problems," Proc. 12th Annual Conference on Genetic and Evolutionary Computation, pp.225–232, 2010.
[5] P. Pop and S. Iordache, "A hybrid heuristic approach for solving the generalized traveling salesman problem," in P.L. Lanzi (ed.), Proc. GECCO, pp.481–488, 2011.
[6] S. Iordache, "Consultant-guided search algorithms for the quadratic assignment problem," Hybrid Metaheuristics, pp.148–159, 2010.
[7] M. Dorigo and L. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol.1, no.1, pp.53–66, 1997.
[8] M. Dorigo and G. Di Caro, "Ant colony optimization: A new meta-heuristic," Proc. 1999 Congress on Evolutionary Computation, vol.2, pp.1470–1477, 1999.
[9] T. Stutzle and H. Hoos, "Max-min ant system," Future Generation Computer Systems, vol.16, no.8, pp.889–914, 2000.
[10] O. Roux and C. Fonlupt, "Ant programming: Or how to use ants for automatic programming," Proc. ANTS, pp.121–129, 2000.
[11] M. Birattari, G.D. Caro, and M. Dorigo, "Toward the formal foundation of ant programming," Ant Algorithms, Lecture Notes in Computer Science, vol.2463, pp.188–201, 2002.
[12] M. Manfrin, M. Birattari, T. Stutzle, and M. Dorigo, "Parallel ant colony optimization for the traveling salesman problem," Ant Colony Optimization and Swarm Intelligence, pp.224–234, 2006.
[13] M. Hassan, M. Islam, and K. Murase, "A new local search based ant colony optimization algorithm for solving combinatorial optimization problems," IEICE Trans. Inf. & Syst., vol.E93-D, no.5, pp.1127–1136, May 2010.
[14] S. Chen and C. Chien, "Parallelized genetic ant colony systems for solving the traveling salesman problem," Expert Systems with Applications, vol.38, no.4, pp.3873–3883, 2011.
[15] S.M. Chen and C.Y. Chien, "Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques," Expert Syst. Appl., vol.38, no.12, pp.14439–14450, Nov. 2011.
[16] L. Wang, R. Cai, L. Jing, and H. Zhang, "Object-guided ant colony optimization algorithm with enhanced memory for traveling salesman problem," Research Journal of Applied Sciences, vol.4, pp.3999–4006, 2012.
[17] G. Dong, W.W. Guo, and K. Tickle, "Solving the traveling salesman problem using cooperative genetic ant systems," Expert Syst. Appl., vol.39, no.5, pp.5006–5011, April 2012.
[18] R. Takahashi, "A hybrid method of genetic algorithms and ant colony optimization to solve the traveling salesman problem," International Conference on Machine Learning and Applications, ICMLA'09, pp.81–88, 2009.
[19] H. Duan and X. Yu, "Hybrid ant colony optimization using memetic algorithm for traveling salesman problem," IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, ADPRL 2007, pp.92–95, 2007.
[20] X. Shi, Y. Lu, C. Zhou, H. Lee, W. Lin, and Y. Liang, "Hybrid evolutionary algorithms based on PSO and GA," The 2003 Congress on Evolutionary Computation, vol.4, pp.2393–2399, 2003.

[21] E. Duman, M. Uysal, and A.F. Alkaya, "Migrating birds optimization: A new meta-heuristic approach and its application to the quadratic assignment problem," Proc. 2011 International Conference on Applications of Evolutionary Computation, vol.1, pp.254–263, 2011.
[22] X. Yang, "A new metaheuristic bat-inspired algorithm," Nature Inspired Cooperative Strategies for Optimization, pp.65–74, 2010.
[23] A. Kaveh and M. Khayatazad, "A new meta-heuristic method: Ray optimization," Computers & Structures, vol.112, pp.283–294, 2012.
[24] "PC Cluster Consortium," http://www.pccluster.org/
[25] "TSPLIB," http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/

Hiroyuki Ebara received the B.S., M.S., and Ph.D. degrees in Communications Engineering from Osaka University, Osaka, Japan, in 1982, 1984, and 1987, respectively. In 1987 he became Assistant Professor at Osaka University. Since 1994 he has been with Kansai University, where he is currently an Associate Professor. His main research interests are in computational geometry, combinatorial optimization, and parallel computing. He is a member of IEEE (The Institute of Electrical and Electronics Engineers, Inc.), ACM (Association for Computing Machinery), SIAM (Society for Industrial and Applied Mathematics), IPSJ (Information Processing Society of Japan), and the OR Society of Japan.

Yudai Hiranuma received the B.S. and M.S. degrees from the Faculty of Engineering Science, Kansai University, Osaka, Japan, in 2011 and 2013, respectively. He is currently working at NTT COMWARE CORPORATION, where he is engaged in system development for corporate customers.

Koki Nakayama received the B.S. degree from the Faculty of Engineering Science, Kansai University, Osaka, Japan, in 2013. He is currently an M.S. student in the Algorithm Engineering Lab. at Kansai University. His research interests are in algorithms for solving combinatorial optimization problems.