GNGTS 2018 - Sessione 3.3

COMPARING FIREFLY ALGORITHM, IMPERIALIST COMPETITIVE ALGORITHM AND QUANTUM PARTICLE SWARM OPTIMIZATION ON ANALYTIC OBJECTIVE FUNCTIONS AND RESIDUAL STATICS CORRECTIONS

M. Aleardi, S. Pierini
Earth Sciences Department, University of Pisa, Italy

Introduction. The solution of non-linear geophysical inverse problems presents several challenges, mainly related to the possibility of convergence toward a local minimum of the objective function. For this reason, global optimization methods are often preferred over linearized approaches, especially in the case of objective functions with complex topology (i.e. many local minima). In particular, over the last decade the continuously increasing power of modern parallel architectures has considerably encouraged the application of these global methods to many geophysical optimization problems (Sen and Stoffa, 2013). Since the 1970s, tens of global algorithms have been implemented (for a concise list see Hosseini and Al Khaled, 2014). However, many of these methods have found applications in engineering optimization (e.g. computer, industrial and mechanical engineering), and only a small subset of them has been employed to tackle geophysical exploration problems. In this context, genetic algorithms, simulated annealing and particle swarm optimization are undoubtedly the most popular (Sajeva et al., 2017). Notably, the performances of these global methods differ from one optimization problem to another and strongly depend on the shape of the objective function and on the model space dimension (i.e. the number of unknowns). For this reason, in this work we compare the performances of three global optimization algorithms: the Firefly Algorithm (FA), the Imperialist Competitive Algorithm (ICA) and the Quantum Particle Swarm Optimization (QPSO). These methods have been introduced in the last few years and have so far found very limited popularity in the geophysical community. In particular, to the best of the authors' knowledge, there are no applications of ICA and FA in the context of seismic exploration problems.

The three methods are first tested on two multi-minima analytic objective functions often used to benchmark optimization algorithms. Then, the three algorithms are compared on residual statics corrections, a highly non-linear geophysical optimization problem characterized by an objective function with multiple minima. We recall that the performance of a stochastic method in solving a particular problem may critically depend on the choice of the control parameters. Generally, it is difficult to give hard and fast rules that work for a wide range of applications, although some guidelines and rules of thumb can be dictated by experience. The control parameters used in the following tests have been determined through a trial-and-error procedure aimed at balancing the rate of convergence with the accuracy of the final solution.

A brief overview of FA, ICA and QPSO. The FA is a relatively new swarm-intelligence global method proposed by Yang (2008) that is inspired by the bioluminescence emitted by fireflies. The intensity of the emitted light determines how strongly the other fireflies are attracted toward a particular firefly. FA uses a population of fireflies to search for the best solution in the model space. At each iteration, the brightness (i.e. the objective function value) of every firefly is compared with that of all the others, and each firefly moves toward the brighter ones, that is toward those with lower objective function values.
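For illustration, one FA iteration can be sketched as follows. This is a minimal, non-optimized Python/NumPy implementation following the standard formulation of Yang (2008); the function name and the parameter values (beta0, gamma, alpha) are illustrative and are not the ones used in our tests.

import numpy as np

def firefly_iteration(pop, obj, beta0=1.0, gamma=1.0, alpha=0.2):
    # pop: (n_fireflies, n_dims) array of candidate models; obj: objective to be minimized
    fitness = np.array([obj(x) for x in pop])
    new_pop = pop.copy()
    for i in range(pop.shape[0]):
        for j in range(pop.shape[0]):
            if fitness[j] < fitness[i]:                       # firefly j is brighter (lower misfit)
                r2 = np.sum((new_pop[i] - pop[j]) ** 2)       # squared distance between the two fireflies
                beta = beta0 * np.exp(-gamma * r2)            # attractiveness decays with distance
                rand_step = alpha * (np.random.rand(pop.shape[1]) - 0.5)
                new_pop[i] += beta * (pop[j] - new_pop[i]) + rand_step
    return new_pop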

The ICA was developed by Atashpaz-Gargari and Lucas (2007) and is inspired by the idea of imperialism, in which powerful countries influence other countries by expanding their power and political system. The purpose of imperialism was either to exploit the resources of the colonized country or to prevent other imperialists from taking control of the colonies. Similar to other evolutionary algorithms, ICA starts with a random initial ensemble of countries. The power of each country is inversely correlated to the associated objective function value. Based on their power, some of the best initial countries become imperialists and start taking control of other countries (called colonies). The main operators of this algorithm are assimilation and revolution. Assimilation moves the colonies of each empire closer to their imperialist. In addition, the position of some countries is randomly changed according to a parameter called the revolution rate; this revolution process preserves the randomness of the search. During assimilation and revolution, a colony might reach a better position than its imperialist and thus has the chance to take control of the entire empire, replacing the current imperialist state. Imperialist competition is another part of the algorithm, in which all the empires try to take possession of the colonies of other empires: at each iteration, based on their power, all the empires have a chance to take control of one or more of the colonies of the weakest empire. These operators are applied at each iteration until a stop criterion is satisfied.
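As an illustration of the assimilation and revolution operators described above, a minimal Python/NumPy sketch for a single empire might read as follows; the function and parameter names are illustrative, and lb and ub are assumed lower and upper bounds of the model space.

import numpy as np

def ica_empire_step(colonies, imperialist, obj, lb, ub, beta=2.0, rev_rate=0.1):
    # colonies: (n_colonies, n_dims) array; imperialist: (n_dims,) array
    n_col, n_dim = colonies.shape
    # assimilation: colonies drift toward their imperialist
    colonies = colonies + beta * np.random.rand(n_col, 1) * (imperialist - colonies)
    # revolution: a fraction of the colonies is randomly relocated
    revolt = np.random.rand(n_col) < rev_rate
    colonies[revolt] = lb + np.random.rand(revolt.sum(), n_dim) * (ub - lb)
    colonies = np.clip(colonies, lb, ub)
    # a colony that becomes better than its imperialist takes over the empire
    costs = np.array([obj(c) for c in colonies])
    best = np.argmin(costs)
    if costs[best] < obj(imperialist):
        imperialist, colonies[best] = colonies[best].copy(), imperialist.copy()
    return colonies, imperialist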

Particle swarm optimization (PSO) is an evolutionary computation technique developed by Eberhart and Kennedy (1995) that is inspired by the social behaviour of bird flocking and fish schooling. In the classical version of PSO, the set of candidate solutions to the optimization problem is defined as a swarm of particles that flow through the parameter space along trajectories driven by their own and their neighbours' best performances. PSO has undergone a plethora of changes since its development. One of the more recent developments is the inclusion of quantum-mechanical laws within the PSO framework, which leads to the QPSO (Sun et al., 2004). In PSO, a particle is described by its position and velocity vectors, which determine its trajectory. However, in quantum mechanics the term trajectory is meaningless, because the position and velocity of a particle cannot be determined simultaneously according to the uncertainty principle. Indeed, each particle in QPSO is treated as a spin-less particle moving in a quantum space, and the probability of the particle appearing at position x_i in the model space is determined by a probability density function. Simply speaking, the QPSO algorithm lets all particles move under quantum-mechanical rules rather than classical Newtonian motion. The QPSO algorithm has simpler evolution equations and fewer parameters than the standard PSO, which substantially facilitates its control and its convergence in the search space.
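A minimal sketch of the QPSO position update following the formulation of Sun et al. (2004) is given below; pbest and gbest denote the personal and global best positions, and the contraction-expansion coefficient alpha (kept fixed here for simplicity, whereas it is usually decreased during the run) controls convergence. Names and values are illustrative, not those adopted in our tests.

import numpy as np

def qpso_update(positions, pbest, gbest, alpha=0.75):
    # positions, pbest: (n_particles, n_dims) arrays; gbest: (n_dims,) array
    n_part, n_dim = positions.shape
    mbest = pbest.mean(axis=0)                          # mean of the personal best positions
    phi = np.random.rand(n_part, n_dim)
    attractor = phi * pbest + (1.0 - phi) * gbest       # per-dimension local attractor
    u = np.random.rand(n_part, n_dim)
    sign = np.where(np.random.rand(n_part, n_dim) < 0.5, 1.0, -1.0)
    # particles are re-sampled around the attractor: no velocity term is needed
    return attractor + sign * alpha * np.abs(mbest - positions) * np.log(1.0 / u)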

Tests on analytic objective functions. We now compare the three algorithms on analytic test functions as the number of model parameters increases. We consider 2, 4, 6, 8 and 10 unknowns; for each dimension and for each method we perform 20 different runs, from which we count the number of model evaluations required to find the global minimum within a maximum number of evaluated models (10^6) and within a previously selected accuracy (Sajeva et al., 2017). The first test function we use is the Rastrigin function:

f(\mathbf{x}) = 10n + \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i)\right]    (1)

where n is the dimension of the model space. The global minimum is located at [0, …, 0]^n and is surrounded by regularly distributed local minima. For this function the initial ensemble of candidate solutions in each method is 10 times the model space dimension, whereas the accuracy is set at 0.1. In Fig. 1a we observe that all the methods successfully converge for all the dimensions, although, as expected, the number of model evaluations required increases with the dimension of the model space. FA and QPSO show very similar performances, whereas ICA requires a much higher number of model evaluations to attain convergence. In other words, ICA appears to be characterized by a slower convergence rate than the other two algorithms.

The second test function is the Schwefel function:

f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)    (2)

This is an extremely difficult function to optimize, because the local minima are irregularly distributed, some of them are distant from the non-centred global minimum (located at [420.9687, …, 420.9687]^n), and some are even located at the opposite edge of the model space. For this function the initial ensemble of candidate solutions in each method is 100 times the model space dimension, whereas the accuracy is set at 1. Fig. 1b shows that in this case all the algorithms have a 100% convergence probability for the 2D case, whereas this percentage progressively decreases as the number of dimensions increases. The ICA shows the most significant decrease of this percentage as the number of unknowns increases, whereas the FA is still able to successfully converge in 15 out of 20 tests for a 10-D model space. Note, as expected, that for a given model space dimension the number of model evaluations required to converge on the Schwefel function is much higher than on the Rastrigin function. In this case, both ICA and QPSO require a number of model evaluations that is always one order of magnitude higher than that required by FA. In other words, in this test the FA clearly outperforms the other two methods, because it exhibits a faster convergence rate and successfully identifies the global minimum even in high-dimensional model spaces.
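For reference, Eqs. (1) and (2) correspond to the following straightforward implementations. The search domains [-5.12, 5.12]^n for Rastrigin and [-500, 500]^n for Schwefel are the conventional ones and are assumed here, since they are not stated explicitly above.

import numpy as np

def rastrigin(x):
    # Eq. (1): global minimum f = 0 at x = [0, ..., 0]
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def schwefel(x):
    # Eq. (2): global minimum f ~ 0 at x = [420.9687, ..., 420.9687]
    x = np.asarray(x, dtype=float)
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))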

Fig. 1 - Results for the Rastrigin and Schwefel functions (a and b, respectively). Left: percentage of successful tests for different model space dimensions. Right: average number of model evaluations required to converge toward the global minimum within the selected accuracy.

Residual statics corrections. In the following test we compare FA, ICA and QPSO on CMP-consistent residual statics corrections performed on a synthetic CMP gather derived from actual well-log information by employing a 1D convolutional forward modelling. Sajeva et al. (2017) showed that this geophysical optimization problem is characterized by an objective function with some similarities to both the Rastrigin and Schwefel functions. To simulate residual statics in the data, we apply to each trace of the reference CMP random time shifts uniformly distributed over ±10 ms. In the subsequent optimization we allow time shifts within the range ±15 ms. Similarly to the analytic objective function examples, we test the three methods for different model space dimensions, that is for 20-, 40- and 60-trace CMP gathers. In this example, the ensemble of solutions for each method is 20 times the number of unknowns (i.e. the number of time shifts to be determined), whereas the maximum number of iterations allowed is set to 150, 200 and 300 for the 20-, 40- and 60-trace examples, respectively. For each model space dimension, we perform 5 optimization runs for each method, from which we select the best results.

Fig. 2 - Results for the residual statics corrections for different model space dimensions. From left to right: the synthetic reference CMP gather, the trace-shifted CMP, the final QPSO CMP, the final ICA CMP, the final FA CMP. a), b) and c) pertain to the 20-, 40- and 60-trace tests.


The results (Fig. 2) show that the FA clearly outperforms the other methods, since this algorithm is always able to provide a final CMP very similar to the reference CMP. In particular, the FA-CMPs could be used as valid starting models for any local optimization method for a further refinement of the residual statics estimation. Conversely, the ICA and QPSO outcomes are affected by many misalignments of the reflections and by cycle-skipped traces, especially in the 40- and 60-trace examples. In Fig. 3, we observe that the energy of the stack trace associated with the FA-CMP is always very close to the energy of the stack trace pertaining to the reference CMP, and also that the FA shows a rapid convergence toward the optimal solution. Conversely, QPSO is always affected by premature convergence and entrapment in local minima, whereas ICA shows a much slower convergence rate than the other two methods.
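Although the exact misfit definition is not detailed in this abstract, the energy curves of Fig. 3 suggest an objective of the stack-power type; a minimal sketch of such an objective is given below, with array shapes, names and the sample-based circular shift chosen purely for illustration.

import numpy as np

def stack_energy(cmp_gather, shifts_ms, dt_ms):
    # cmp_gather: (n_samples, n_traces) array; shifts_ms: one trial static per trace
    n_samples, n_traces = cmp_gather.shape
    stack = np.zeros(n_samples)
    for i in range(n_traces):
        lag = int(round(shifts_ms[i] / dt_ms))      # time shift converted to samples
        stack += np.roll(cmp_gather[:, i], -lag)    # circular shift kept simple for illustration
    return np.sum(stack ** 2)                       # stack energy: to be maximized (or negated for minimization)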

Conclusions. We tested three relatively new global optimization methods on analytic objective functions and on residual statics corrections. The three methods are the firefly algorithm, the imperialist competitive algorithm and the quantum particle swarm optimization. The tests on the analytic functions demonstrated that all the approaches are able to find the global minimum in the case of regularly distributed minima, whereas the QPSO and particularly the ICA suffer in the case of objective functions with complex topology (i.e. irregularly distributed minima). In addition, the ICA is characterized by a much slower convergence rate with respect to the other two algorithms. In the residual statics corrections, the FA clearly outperforms the other two methods: this algorithm successfully converges even in high-dimensional model spaces and is also characterized by a very fast convergence rate. Other tests are currently ongoing to compare the algorithms on other analytic objective functions and on other seismic optimization problems.

References
Atashpaz-Gargari, E., and Lucas, C. (2007). Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In IEEE Congress on Evolutionary Computation, 2007, 4661-4667.
Eberhart, R., and Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 39-43.
Hosseini, S., and Al Khaled, A. (2014). A survey on the imperialist competitive algorithm metaheuristic: implementation in engineering domain and directions for future research. Applied Soft Computing, 24, 1078-1094.
Sajeva, A., Aleardi, M., Galuzzi, B., Stucchi, E., Spadavecchia, E., and Mazzotti, A. (2017). Comparing the performances of four stochastic optimisation methods using analytic objective functions, 1D elastic full-waveform inversion, and residual static computation. Geophysical Prospecting, 65, 322-346.
Sen, M. K., and Stoffa, P. L. (2013). Global optimization methods in geophysical inversion. Cambridge University Press.
Sun, J., Feng, B., and Xu, W. (2004). Particle swarm optimization with particles having quantum behavior. In Congress on Evolutionary Computation, 2004, 1, 325-331.
Yang, X. S. (2008). Firefly algorithm. Nature-inspired metaheuristic algorithms, 20, 79-90.

Fig. 3 - Energy curves representing the evolution of the energy of the stack trace (computed on the predicted CMP) over the iterations for the three different algorithms and for the 20-, 40- and 60-trace tests.