
Research Article

Accelerated Particle Swarm Optimization to Solve Large-Scale Network Plan Optimization of Resource-Leveling with a Fixed Duration

Houxian Zhang¹ and Zhaolan Yang²

¹School of Architecture and Civil Engineering, Nanjing Institute of Technology, Nanjing 211167, China
²Industrial Center, Nanjing Institute of Technology, Nanjing 211167, China

Correspondence should be addressed to Houxian Zhang; [email protected]

Received 28 December 2017; Revised 18 March 2018; Accepted 20 March 2018; Published 16 May 2018

Academic Editor: Anna M. Gil-Lafuente

Copyright © 2018 Houxian Zhang and Zhaolan Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Large-scale network plan optimization of resource-leveling with a fixed duration is challenging in project management. Particle swarm optimization (PSO) has provided an effective way to solve this problem in recent years. Although previous algorithms have provided a way to accelerate the optimization of large-scale network plans by optimizing the initial particle swarm, how to accelerate the optimization of large-scale network plans with PSO more effectively is still an issue worth exploring. The main aim of this study was to develop an accelerated particle swarm optimization (APSO) for the large-scale network plan optimization of resource-leveling with a fixed duration. By adjusting the acceleration factor, the large-scale network plan optimization of resource-leveling with a fixed duration yielded a better result in this study than previously reported. Computational results demonstrated that, for the same large-scale network plan, the proposed algorithm improved the leveling criterion by 24% compared with previous solutions. The APSO proposed in this study was similar in form to, but different from, particle swarm optimization with contraction factor (PSOCF). PSOCF did not have as good adaptability as APSO for network plan optimization. Accelerated convergence particle swarm optimization (ACPSO) is also similar in form to the APSO proposed in this study, but its irrationality was pointed out in this study by analyzing the iterative matrix convergence.

1. Introduction

The network plan is considered by the engineering community as a promising management method. A large-scale network plan with many works (such as more than 50) is an effective tool for solving large project management problems [1, 2]. However, the number of possible solutions in large-scale network plan optimization increases sharply with the number of works, and the computation time grows exponentially, far beyond the processing capacity of available computing resources; the problem is therefore an NP problem that cannot be solved exactly by conventional mathematical and computational methods [2, 3]. In recent years, genetic algorithms [4, 5], Monte Carlo partition optimization [6], and particle swarm optimization (PSO) [7, 8] have provided effective means to solve this problem.

PSO was proposed in 1995. Although the convergence of PSO is still controversial, its applied research has shown good results [9–13]. Experimental research includes optimization, biomedicine, communication, control, and so forth. Theoretical research includes PSO improvement, parameter selection, stability, convergence, and so forth. Improvements in the performance of PSO reported in the literature include adjusting the parameters of PSO (inertia factor) [14–17], adopting neighborhood topologies [18], and combining PSO with other algorithms (genetic algorithm, simulated annealing, and differential evolution) [19–22]. However, this literature does not address the solution of large-scale network plan optimization problems.

Accelerated optimization can be marked by better-optimized solutions within the same number of iterations of an iterative optimization. Yang et al. introduced virtual particles in random directions with random amplitudes to enhance the explorative capability of particles in PSO [23]; Qi et al. hybridized an improved estimation of distribution algorithm (EDA) using historic best positions to construct a sample space with PSO, both sequentially and in parallel, to improve population diversity control and avoid premature convergence for optimization of a water distribution network [24]; Zhang et al. added a random velocity operator from local optima to global optima into the velocity update formula of constriction particle swarm optimization (CPSO) to accelerate the convergence of the particles to the global optima and reduce the likelihood of being trapped in local optima [25]; Zhou et al. adjusted random functions with the density of the population so as to manipulate the weights of the cognition part and the social part and executed mutation on both the personal best particle and the group best particle to explore new areas [26]. Zhang and Yang accelerated the optimization of large-scale network plan resources and analyzed the acceleration mechanism via stochastic processes by optimizing the initial particle swarm using the Monte Carlo method under limiting conditions [7, 8, 27]; Ren and Wang proposed a PSO algorithm with accelerated convergence, theoretically proved the fast convergence of the algorithm, and optimized the parameters in the algorithm [28].

Inspired by previous efforts [28] to accelerate the convergence of the PSO algorithm, this study proposes a method for the large-scale network plan optimization of resource-leveling with a fixed duration through tuning an acceleration coefficient (the resulting method is referred to as accelerated PSO, or APSO for short); it yielded a better solution than reported in the previous literature.

This paper is organized as follows. Section 2 describes the experimental research of the large-scale network plan optimization of resource-leveling with a fixed duration using APSO. Section 3 analyzes the difference between APSO and PSO with a contraction factor (PSOCF) [29]. Section 4 analyzes the irrationality of accelerated convergence PSO (ACPSO) reported in [28].

2. APSO to Solve the Large-Scale Network Plan Optimization of Resource-Leveling with a Fixed Duration

Large-scale network plan optimization of resource-leveling with a fixed duration seeks to balance the resource demand in each period over the entire duration of the project. Equilibrium can be measured by the variance of the resource demand. The variance is calculated as $\sigma^2 = \frac{1}{J}\sum_{i=1}^{J}(x_i - \mu)^2$, where $J$ is the total number of samples $x_i$ and $\mu$ is the arithmetic mean of the $x_i$. The smaller the variance, the more balanced the resource usage.
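As a concrete illustration of this leveling criterion (a minimal sketch only; the function and field names below are illustrative assumptions, not code from the paper), the per-period resource demand of a schedule and its variance can be computed as follows in Python:

import numpy as np

def resource_variance(starts, durations, rates, horizon):
    """Variance of the total resource demand per unit time (the leveling criterion sigma^2)."""
    demand = np.zeros(horizon)                 # horizon = fixed project duration in unit-time periods
    for s, d, r in zip(starts, durations, rates):
        demand[s:s + d] += r                   # each work consumes r resource units while it is active
    mu = demand.mean()                         # arithmetic mean of the per-period demand
    return float(((demand - mu) ** 2).mean())  # sigma^2 = (1/J) * sum_i (x_i - mu)^2

# Tiny made-up schedule (not the 223-work plan of the paper):
print(resource_variance(starts=[0, 0, 2], durations=[2, 3, 2], rates=[2, 1, 3], horizon=5))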

The evolutionary equation of basic PSO was as follows:

\[
v_{ij}(t+1) = w v_{ij}(t) + c_1 \mathrm{rand}_1(t) \left(p_{gj}(t) - x_{ij}(t)\right) + c_2 \mathrm{rand}_2(t) \left(p_g(t) - x_{ij}(t)\right),
\]
\[
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \tag{1}
\]

where $t$ is the number of iterations; $x_{ij}(t+1)$ is the $j$-dimension space coordinate of particle $i$ at iteration $t+1$; $x_{ij}(t)$ is the $j$-dimension space coordinate of particle $i$ at iteration $t$; $w$ is the inertia factor, usually taking the value of 1 according to experience; $v_{ij}(t)$ is the $j$-dimension flight speed of particle $i$; $c_1$ and $c_2$ are accelerators, usually valued between 0 and 2 by experience; $\mathrm{rand}_1$ and $\mathrm{rand}_2$ are random functions with values in the range $[0, 1]$; $p_{gj}(t)$ is the best position experienced by the particle itself; and $p_g(t)$ is the best position experienced by all particles. The convergence condition was adopted by setting a maximum number of iterations $G$.

The evolutionary equation of accelerated PSO (APSO) was as follows:

\[
v_{ij}(t+1) = a \left( w v_{ij}(t) + c_1 \mathrm{rand}_1(t) \left(p_{gj}(t) - x_{ij}(t)\right) + c_2 \mathrm{rand}_2(t) \left(p_g(t) - x_{ij}(t)\right) \right),
\]
\[
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \tag{2}
\]

where $a$ is the acceleration coefficient, and the other symbols are the same as earlier. The evolution equation of the accelerated particle swarm algorithm has one more factor $a$ than that of the basic PSO algorithm, and one more factor $w$ than that of the particle swarm algorithm with contraction factor. Nevertheless, it produced significant results for solving large-scale network plan optimization of resource-leveling with a fixed duration, as the example below shows.
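A minimal sketch of one APSO update step, written directly from (2), is given here in Python with NumPy; the function and array names are illustrative assumptions, not code from the paper. Setting $a = 1$ recovers the basic PSO update (1).

import numpy as np

def apso_step(x, v, pbest, gbest, w=1.0, c1=2.05, c2=2.05, a=0.3, rng=np.random):
    # x, v, pbest: arrays of shape (n_particles, n_dims); gbest: array of shape (n_dims,)
    # a is the acceleration coefficient of equation (2); a = 1 gives the basic update (1)
    r1 = rng.random(x.shape)   # rand1(t), drawn independently per particle and dimension
    r2 = rng.random(x.shape)   # rand2(t)
    v_new = a * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x_new = x + v_new
    return x_new, v_new

For the network plan problem, each particle position would additionally have to be decoded into feasible start times that respect the precedence relations before the variance criterion is evaluated; that decoding step is problem-specific and is not sketched here.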

For example, consider a large network plan with 223 works, the same as in Figure 1 of [27]. The results of debugging the acceleration coefficient $a$ are shown in Table 1, where the variance of the best corresponding optimization result is 17.58 (better than the variance of 22.99 quoted in [27]). The start time of each work is shown in Table 2, and the resource requirements of each unit time are shown in Table 3.

As shown in Table 1, for $w = 1$, $c_1 = c_2 = 2.05$, 50 particles, and $G = 100$, a minimum variance of 17.58 could be obtained by adjusting the acceleration coefficient $a$, which is significantly better than the variance of 22.99 quoted in [27], obtained without the acceleration coefficient. For $w = 1$, $c_1 = 3.5$, $c_2 = 0.4$, 50 particles, and $G = 100$, a minimum variance of 18.4 could be obtained by adjusting $a$, again significantly better than the variance quoted in [27]. For $w = 0.8$, $c_1 = c_2 = 2.05$, 50 particles, and $G = 100$, a minimum variance of 18.93 could be obtained by adjusting $a$, also significantly better than the variance quoted in [27]. For $w = 0.729$, $c_1 = c_2 = 1.454$, 50 particles, and $G = 100$, no variance smaller than 17.83 (obtained with acceleration coefficient 1) could be found by adjusting $a$.

3. Difference between APSO and PSOCF [29]

The APSO proposed in this study was similar in form to PSOCF. The evolution equation of PSOCF was as follows [29]:

\[
v_{ij}(t+1) = \eta \left( v_{ij}(t) + c_1 \mathrm{rand}_1(t) \left(p_{gj}(t) - x_{ij}(t)\right) + c_2 \mathrm{rand}_2(t) \left(p_g(t) - x_{ij}(t)\right) \right),
\]
\[
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \tag{3}
\]


Table 1: Debugging results for the optimization parameters of the large-scale network plan optimization of resource-leveling with a fixed duration using the accelerated particle swarm algorithm (number of particles is 50; $G = 100$).

Sequence number  w      c1     c2     a     σ²
1                1      2.05   2.05   1     23.97
2                1      2.05   2.05   0.33  19.31
3                1      2.05   2.05   0.31  18.97
4                1      2.05   2.05   0.3   17.58*
5                1      2.05   2.05   0.29  18.61
6                1      2.05   2.05   3     31.43
7                1      2.05   2.05   0.1   31.39
8                1      2.05   2.05   0.03  31.43
9                1      3.5    0.4    1     22.99
10               1      3.5    0.4    0.03  31.43
11               1      3.5    0.4    0.5   21.4
12               1      3.5    0.4    0.8   31.43
13               1      3.5    0.4    0.3   23.02
14               1      3.5    0.4    0.6   16.35 error
15               1      3.5    0.4    0.4   20.8
16               1      3.5    0.4    0.35  19.9
17               1      3.5    0.4    0.33  18.4*
18               1      3.5    0.4    0.31  25.5
19               0.8    2.05   2.05   1     24.53
20               0.8    2.05   2.05   0.3   31.43
21               0.8    2.05   2.05   1.2   31.43
22               0.8    2.05   2.05   0.4   31.43
23               0.8    2.05   2.05   0.5   31.00
24               0.8    2.05   2.05   0.6   25.85
25               0.8    2.05   2.05   0.7   18.93*
26               0.8    2.05   2.05   0.8   22.38
27               0.729  1.454  1.454  1     17.83*
28               0.729  1.454  1.454  0.8   25.00
29               0.729  1.454  1.454  1.05  20.54
30               0.729  1.454  1.454  0.9   18.33 error
Note. The value marked with * (set in italics in the original) is the optimal value under the given parameter setting.

where the contraction factor is $\eta = 2\kappa / |2 - \varphi - \sqrt{\varphi(\varphi - 4)}|$, with $\kappa \in [0, 1]$ and $\varphi = c_1 + c_2$. The other symbols are the same as earlier. For $c_1 = 3.5$, $c_2 = 0.4$, $\eta = 2\kappa / |2 - 3.9 - \sqrt{3.9(3.9 - 4)}|$ does not exist. PSOCF could not be used, but the APSO of this study could be used for optimization of the network plan, and the results were good, as shown in Table 1.

For $c_1 = 2.05$, $c_2 = 2.05$, $\eta = 2\kappa / |2 - \varphi - \sqrt{\varphi(\varphi - 4)}| = 2\kappa / |2 - 4.1 - \sqrt{4.1(4.1 - 4)}|$, which ranges from 0 to about 0.73 for $\kappa \in [0, 1]$. The acceleration coefficient $a$ can take values outside the range of the contraction factor $\eta$, and the optimization with APSO in this study still performed as usual, as shown in Table 1.
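These two contraction-factor calculations are easy to verify numerically; the short check below (an illustrative Python snippet, not code from either paper) returns no value for $c_1 = 3.5$, $c_2 = 0.4$ because the radicand is negative, and about $0.73\kappa$ for $c_1 = c_2 = 2.05$:

import math

def contraction_factor(c1, c2, kappa=1.0):
    # PSOCF contraction factor: eta = 2*kappa / |2 - phi - sqrt(phi*(phi - 4))|, phi = c1 + c2
    phi = c1 + c2
    radicand = phi * (phi - 4)
    if radicand < 0:
        return None                       # square root of a negative number: eta does not exist
    return 2 * kappa / abs(2 - phi - math.sqrt(radicand))

print(contraction_factor(3.5, 0.4))       # None    -> PSOCF is not applicable to these accelerators
print(contraction_factor(2.05, 2.05))     # ~0.7298 -> eta stays within (0, 0.73] for kappa in (0, 1]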

Thus, although the APSO of this study was similar in form to PSOCF, essentially, for network plan optimization, PSOCF did not have as good adaptability as APSO.

4. Irrationality of ACPSO Reported in [28]

The APSO proposed in this study was inspired by the ACPSO algorithm quoted in [28], and APSO is similar in form to ACPSO. The evolution equation of the ACPSO algorithm proposed in [28] was as follows:

\[
v_{ij}(t+1) = (\sin\alpha)^{\beta} \left( w v_{ij}(t) + c_1 \mathrm{rand}_1(t) \left(p_{gj}(t) - x_{ij}(t)\right) + c_2 \mathrm{rand}_2(t) \left(p_g(t) - x_{ij}(t)\right) \right),
\]
\[
x_{ij}(t+1) = (\sin\alpha)^{\beta} x_{ij}(t) + v_{ij}(t+1), \tag{4}
\]

where $\alpha$ is an angle whose value gives a distinct optimization effect when it lies within $[0, \pi/8]$; $\beta$ is a constant greater than zero, and the optimization effect is good when its value is 3. The other symbols are the same as earlier.

The ACPSO algorithm quoted in [28] was based on one inference: PSO is an iteration. The iteration converges when the spectral radius of the iteration matrix $L$ (that is, the maximum absolute value of the eigenvalues of the matrix) is less than 1, and the smaller the spectral radius of $L$, the faster the iteration converges.


Table 2: The parameters and the optimized solution for the example of resource-leveling with a fixed duration.

Number  Work   Duration  Resource Quantity  ES   Optimized ES
1       1–3    2         2                  0    0
2       1–4    2         1                  0    0
3       3-4    2         1                  2    2
4       1-2    1         2                  0    0
5       4–6    4         1                  4    3
6       5–8    4         1                  7    7
7       5–9    5         1                  7    8
8       6–8    5         3                  13   13
9       7-8    8         3                  15   15
10      3–6    3         2                  2    3
11      2–4    3         2                  1    1
12      3–5    5         1                  2    2
13      2–8    3         2                  1    1
14      5-6    6         2                  7    6
15      2–6    1         3                  1    1
16      6-7    2         0                  13   12
17      8-9    7         3                  23   22
18      8–11   2         2                  23   25
19      8–10   2         1                  23   23
20      7–10   2         1                  15   15
21      9-10   1         2                  30   29
22      9–12   4         1                  30   28
23      18-19  8         3                  57   57
24      17–20  2         1                  51   52
25      16–19  4         1                  49   48
26      18–21  5         1                  57   58
27      17–19  5         3                  51   51
28      16-17  2         2                  49   49
29      15–17  2         1                  46   46
30      14–18  2         1                  42   41
31      14–17  1         2                  42   41
32      13–15  2         1                  44   44
33      10-11  4         1                  31   30
34      12-13  5         1                  39   39
35      10–13  5         3                  31   30
36      10–12  8         3                  31   30
37      17-18  6         0                  51   51
38      15-16  3         2                  46   46
39      11–15  5         1                  35   34
40      14-15  3         2                  42   41
41      11–17  5         1                  35   35
42      12–14  3         2                  39   38
43      11–13  6         2                  35   34
44      12–15  2         0                  39   38
45      21-22  2         2                  70   69
46      22–24  2         1                  75   88
47      20–22  2         1                  73   85
48      20–24  1         2                  73   85
49      20–26  2         1                  73   84


Table 2: Continued.

Number  Work   Duration  Resource Quantity  ES   Optimized ES
50      19–21  5         1                  65   64
51      19–22  5         3                  65   65
52      19-20  8         3                  65   76
53      24-25  3         2                  77   90
54      24–26  5         1                  77   96
55      26-27  3         2                  85   101
56      25-26  5         1                  80   93
57      23–27  3         2                  72   99
58      23–26  6         2                  72   70
59      23-24  1         3                  72   71
60      21–23  2         0                  70   69
61      21–24  7         3                  70   70
62      3–28   1         2                  2    2
63      3–30   3         2                  2    2
64      3–31   1         3                  2    3
65      28–30  2         1                  3    3
66      28-29  2         2                  3    3
67      29-30  2         1                  5    5
68      29–31  3         2                  5    5
69      29–34  3         2                  5    5
70      30-31  4         1                  7    8
71      5–31   6         2                  7    7
72      5–34   5         3                  7    7
73      31-32  2         0                  13   14
74      31–34  4         1                  13   12
75      32-33  2         1                  15   16
76      32–34  8         3                  15   16
77      9–34   7         3                  30   29
78      9–33   2         1                  30   29
79      34-35  2         2                  44   44
80      33-34  1         2                  43   43
81      33–35  8         3                  43   44
82      33–36  5         3                  43   42
83      12–33  4         1                  39   39
84      12–36  6         2                  39   37
85      12–37  5         1                  39   37
86      35-36  5         1                  51   53
87      35–37  2         0                  51   52
88      35–39  5         1                  51   53
89      36-37  2         1                  56   58
90      14–37  3         2                  42   41
91      14–39  2         1                  42   41
92      37-38  3         2                  58   60
93      37–39  1         2                  58   60
94      38-39  2         2                  61   63
95      18–39  6         0                  57   57
96      18–40  5         3                  57   57
97      39-40  8         3                  63   77
98      39–41  2         1                  63   66
99      40-41  5         1                  78   86


Table 2: Continued.

Number  Work   Duration  Resource Quantity  ES   Optimized ES
100     40–42  5         3                  78   84
101     21–40  8         3                  70   70
102     21–42  2         1                  70   69
103     21–43  1         2                  70   69
104     41-42  2         2                  83   92
105     41–43  7         3                  83   91
106     41–45  2         1                  83   91
107     42-43  2         1                  85   95
108     23–43  1         3                  72   72
109     23–45  5         1                  72   72
110     43-44  3         2                  90   98
111     43–45  6         2                  90   98
112     44-45  5         1                  93   101
113     27–45  3         2                  88   104
114     38–40  4         1                  61   63
115     47-48  2         2                  8    9
116     48–50  2         1                  10   11
117     47–50  2         1                  8    9
118     48-49  1         2                  10   11
119     50–53  4         1                  14   16
120     52–57  4         1                  49   49
121     52–58  5         1                  49   61
122     53–57  5         3                  55   56
123     56-57  8         3                  57   59
124     47–53  3         2                  8    9
125     49-50  3         2                  11   12
126     47–52  5         1                  8    8
127     49–57  3         2                  11   13
128     52-53  6         2                  49   50
129     49–53  1         3                  11   12
130     53–56  2         0                  55   56
131     57-58  7         3                  65   68
132     57–61  2         2                  65   68
133     57–59  2         1                  65   67
134     56–59  2         1                  57   58
135     58-59  1         2                  72   75
136     58–60  4         1                  72   76
137     71-72  8         3                  99   102
138     69–75  2         1                  93   98
139     68–72  4         1                  91   94
140     71–73  5         1                  99   103
141     69–72  5         3                  93   98
142     68-69  2         2                  91   95
143     67–69  2         1                  88   92
144     65–71  2         1                  84   88
145     65–69  1         2                  84   87
146     64–67  2         1                  86   90
147     59–61  4         1                  73   76
148     60–64  5         1                  81   85


Table 2: Continued.

Number  Work   Duration  Resource Quantity  ES    Optimized ES
149     59–64  5         3                  73    77
150     59-60  8         3                  73    77
151     69–71  6         0                  93    97
152     67-68  3         2                  88    92
153     61–67  5         1                  77    80
154     65–67  3         2                  84    87
155     61–69  5         1                  77    80
156     60–65  3         2                  81    85
157     61–64  6         2                  77    81
158     60–67  2         0                  81    86
159     73–76  2         2                  112   115
160     76–79  2         1                  117   121
161     75-76  2         1                  115   119
162     75–79  1         2                  115   118
163     75–83  2         1                  115   118
164     72-73  5         1                  107   110
165     72–76  5         3                  107   110
166     72–75  8         3                  107   110
167     79–82  3         2                  119   123
168     79–83  5         1                  119   123
169     83-84  3         2                  127   132
170     82-83  5         1                  122   126
171     78–84  3         2                  114   118
172     78–83  6         2                  114   119
173     78-79  1         3                  114   119
174     73–78  2         0                  112   116
175     73–79  7         3                  112   116
176     46-47  1         2                  7     8
177     47–51  3         2                  8     9
178     47–54  1         3                  8     8
179     46–51  2         1                  7     8
180     29–46  2         2                  5     6
181     29–51  2         1                  5     5
182     29–54  3         2                  5     5
183     51–54  4         1                  11    12
184     52–54  6         2                  49    51
185     34–52  5         3                  44    45
186     54-55  2         0                  55    57
187     34–54  4         1                  44    44
188     55–62  2         1                  57    59
189     34–55  8         3                  44    43
190     34–58  7         3                  44    44
191     58–62  2         1                  72    75
192     34–62  1         2                  44    45
193     35–62  8         3                  51    52
194     62-63  5         3                  85    90
195     60–62  4         1                  81    86
196     60–63  6         2                  81    85
197     60–66  2         1                  81    86


Table 2: Continued.

Number  Work   Duration  Resource Quantity  ES    Optimized ES
198     35–63  5         1                  51    53
199     35–66  2         0                  51    52
200     63–66  2         1                  90    95
201     65-66  3         2                  84    88
202     39–65  2         1                  63    64
203     66–70  3         2                  92    97
204     39–66  1         2                  63    65
205     39–70  2         2                  63    66
206     39–71  6         0                  63    66
207     71–74  5         3                  99    103
208     39–74  8         3                  63    64
209     41–74  5         1                  83    92
210     74–77  5         3                  120   120
211     73-74  8         3                  112   112
212     73–77  2         1                  112   115
213     73–80  1         2                  112   115
214     41–77  2         2                  83    92
215     41–80  7         3                  83    91
216     77–80  2         1                  125   125
217     78–80  1         3                  114   118
218     45–78  5         1                  98    107
219     80-81  3         2                  127   127
220     45–80  6         2                  98    108
221     81–84  5         1                  130   130
222     45–84  3         2                  98    107
223     70–74  4         1                  95    100
Note. "ES" is the early start time of each work. "Optimized ES" is the optimized start time.

Table 3: The resource requirements of each unit time of the large-scale network plan optimization of resource-leveling with a fixed duration using the accelerated particle swarm algorithm (duration is 135).

Unit time   Resource requirements
1–17        5 10 10 17 9 12 17 16 17 19 18 14 12 9 9 11 12
18–34       11 7 7 6 6 9 7 4 5 5 3 4 7 12 11 10 10
35–51       12 10 7 10 9 10 7 13 13 13 16 21 19 16 15 16 15
52–68       12 12 14 13 13 11 15 15 19 18 13 10 10 15 15 14 11
69–85       13 16 17 17 15 12 12 12 13 17 14 14 14 16 13 13 14
86–102      14 14 15 15 12 12 13 20 18 14 13 14 11 11 13 11 11
103–119     12 13 10 10 9 10 8 8 10 10 12 12 10 11 12 9 14
120–135     15 11 9 9 8 8 4 3 4 3 3 2 1 3 3 3


The absolute value of the eigenvalues of $L$ is as follows: $|\lambda_{1,2}| = \left| \left( 1 + w - (c_1+c_2)/2 \pm \sqrt{(-1 - w + (c_1+c_2)/2)^2 - 4w} \right) / 2 \right|$, where $(-1 - w + (c_1+c_2)/2)^2 - 4w \geq 0$. This reasoning was problematic, and the analysis is as follows.

The evolution equation of PSO can be written in the matrix form:

\[
\begin{bmatrix} v(t+1) \\ x(t+1) \end{bmatrix} = A_t \begin{bmatrix} v_t \\ x_t \end{bmatrix} + \begin{bmatrix} \varphi_1 & \varphi_2 \\ \varphi_1 & \varphi_2 \end{bmatrix} \begin{bmatrix} p_{i,t} \\ p_{g,t} \end{bmatrix}, \tag{5}
\]

where $A_t = \begin{bmatrix} w & -\varphi \\ w & 1-\varphi \end{bmatrix}$, $\varphi = \varphi_1 + \varphi_2 = c_1 r_1 + c_2 r_2$, $p_{i,t}$ is the best position found by the particle so far, and $p_{g,t}$ is the best position found by the whole particle swarm to date. The other symbols are the same as earlier.

Set
\[
y(t) = \begin{bmatrix} v(t) \\ x(t) \end{bmatrix}, \qquad \phi_t = \begin{bmatrix} \varphi_1 p_{i,t} + \varphi_2 p_{g,t} \\ \varphi_1 p_{i,t} + \varphi_2 p_{g,t} \end{bmatrix}. \tag{6}
\]

Then, (5) can be changed to

\[
y(t+1) = A_t y(t) + \phi_t. \tag{7}
\]

Let $E$ denote the mathematical expectation. Then
\[
E(y(t+1)) = \begin{bmatrix} E(v(t+1)) \\ E(x(t+1)) \end{bmatrix}, \qquad
E(A_t) = \begin{bmatrix} w & -\dfrac{c_1+c_2}{2} \\[2mm] w & 1 - \dfrac{c_1+c_2}{2} \end{bmatrix}, \qquad
E(\phi_t) = \begin{bmatrix} \dfrac{c_1 E(p_{i,t}) + c_2 E(p_{g,t})}{2} \\[2mm] \dfrac{c_1 E(p_{i,t}) + c_2 E(p_{g,t})}{2} \end{bmatrix}. \tag{8}
\]

Set $Q_{t+1} = E(y(t+1))$, $M = E(A_t)$, and $Z = E(\phi_t)$. The eigenvalues of $M$ are
\[
\lambda_{1,2} = \frac{1 + w - (c_1+c_2)/2 \pm \sqrt{(-1 - w + (c_1+c_2)/2)^2 - 4w}}{2}. \tag{9}
\]

As long as $1 - (c_1+c_2)/2 \neq -w \pm 2\sqrt{w}$, there is a matrix $A$ such that
\[
A M A^{-1} = L = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}. \tag{10}
\]

Set $H_t = A Q_t$. It can then be deduced that the PSO algorithm is an iteration:
\[
H_{t+1} = L H_t + A Z, \tag{11}
\]

where $L$ is the iteration matrix.
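Before turning to the inference from [30], the step from (8) to (9) and (10) is easy to check symbolically; the sketch below uses SymPy (an illustrative verification, not part of the original derivation) and also evaluates the eigenvalue moduli for the setting $w = 1$, $c_1 = c_2 = 2.05$ used in Table 1, for which the two eigenvalues form a complex-conjugate pair of modulus 1.

import sympy as sp

w, c1, c2 = sp.symbols('w c1 c2', positive=True)
phi = (c1 + c2) / 2                        # E(phi), since r1 and r2 are uniform on [0, 1]
M = sp.Matrix([[w, -phi], [w, 1 - phi]])   # M = E(A_t), see equation (8)

eigs = list(M.eigenvals().keys())          # symbolic eigenvalues; they reproduce equation (9)
print([sp.simplify(e) for e in eigs])

# Numeric illustration for w = 1, c1 = c2 = 2.05 (a setting used in Table 1):
subs = {w: 1, c1: sp.Rational(41, 20), c2: sp.Rational(41, 20)}
print([sp.Abs(e.subs(subs)).evalf() for e in eigs])   # both moduli evaluate to 1.0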

The inference in [30] concerns the following pair of equivalent equations:
\[
A x = b, \qquad x = B x + g, \tag{12}
\]
where $A$ is the coefficient matrix, $x$ is the unknown column vector, $b$ is a constant column vector, and $B$ and $g$ are a constant matrix and a constant vector determined by $A$ and $b$.

From the second form, the following iteration with iteration matrix $B$ is obtained:
\[
x_{k+1} = B x_k + g. \tag{13}
\]

Set $x^*$ as the solution of the system. Then
\[
x^* = B x^* + g. \tag{14}
\]

Subtracting (14) from (13) yields
\[
x_{k+1} - x^* = B (x_k - x^*) = B \left( B (x_{k-1} - x^*) \right) = B^2 (x_{k-1} - x^*) = \cdots = B^{k+1} (x_0 - x^*). \tag{15}
\]

Because $x_0 - x^*$ does not depend on $k$, $\lim_{k\to\infty}(x_{k+1} - x^*) = 0$ is equivalent to $\lim_{k\to\infty} B^{k+1} = 0$. The theorem quoted in [30] shows that $\lim_{k\to\infty} B^{k+1} = 0$ is equivalent to $\rho(B) < 1$, where $\rho(B)$ is the spectral radius of matrix $B$.

Thus, an iterative matrix does not necessarily converge. Because the particle swarm algorithm has no system of equations whose solution is $Q_{t+1}$, the aforementioned reasoning cannot be carried out via the convergence of the iteration matrix $L$.

In Table 1, for $w = 1$, $c_1 = c_2 = 2.05$ (or $c_1 = 3.5$, $c_2 = 0.4$), 50 particles, and $G = 100$, the acceleration coefficient $a = (\sin\alpha)^{\beta} = \sin(\pi/10)^3 \approx 0.03$ reflects the fact that the optimization of ACPSO in [28] was poor. This was the experimental verification of the problems of the ACPSO quoted in [28].
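The implied acceleration value can be checked directly (an illustrative snippet, not from either paper); with such a small factor the velocity update in (4) is almost suppressed, which is consistent with the variances of about 31.4 observed for $a = 0.03$ in Table 1:

import math

alpha, beta = math.pi / 10, 3          # alpha lies inside the recommended range [0, pi/8]; beta = 3 as in [28]
a = math.sin(alpha) ** beta
print(round(a, 4))                     # 0.0295, i.e. the a = 0.03 rows of Table 1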

5. Conclusions

This study proposed a method for the large-scale network plan optimization of resource-leveling with a fixed duration through adjusting the acceleration coefficient of APSO, based on the algorithm quoted in [27], to obtain a better solution than previously reported. In other words, for the same large-scale network plan, the proposed algorithm improved the leveling criterion by 24% compared with previous solutions. Thus, the resource variance of 17.58 for the 223-work large-scale network plan is the best result for the large-scale network plan optimization of resource-leveling with a fixed duration reported in the literature to date.

Section 3 discusses the difference between the APSO proposed in this study and the PSOCF quoted in [29]. The proposed APSO was similar in form to PSOCF, but, essentially, PSOCF did not have as good adaptability as APSO for network plan optimization.

Section 4 describes the difference between the APSO proposed in this study and the ACPSO quoted in [28]. Through analyzing the iterative matrix convergence of equations, it was pointed out that the derivation of the iterative matrix convergence of the ACPSO algorithm proposed in [28] was problematic, even though APSO is similar in form to ACPSO, and this was also verified experimentally.

The effect of the APSO proposed in this study was verified experimentally to be obvious. However, the internal working mechanism of APSO is still a core issue worth investigating.

Data Availability

Data generated by the authors or analyzed during the study are available from the following options: (1) data generated or analyzed during the study are available from the corresponding author by request; (2) all data generated or analyzed during the study are included in the published paper.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] E. L. Demeulemeester and W. S. Herroelen, Project Scheduling, Kluwer Academic Publishers, Boston, 2002.

[2] S. F. Li, K. J. Zhu, and D. Y. Wang, "Complexity study of the application of network plan technique to large project," Journal of China University of Geosciences (Social Science Edition), no. 9, pp. 90–94, 2010 (Chinese).

[3] X. F. Liu, Application Research of Network Plan Technique Optimization Methods to Building Construction Management, Tianjin University, Tianjin, 2013.

[4] J.-L. Kim and J. R. D. Ellis, "Permutation-based elitist genetic algorithm for optimization of large-sized resource-constrained project scheduling," Journal of Construction Engineering and Management, vol. 134, no. 11, pp. 904–913, 2008.

[5] J.-W. Huang, X.-X. Wang, and R. Chen, "Genetic algorithms for optimization of resource allocation in large scale construction project management," Journal of Computers, vol. 5, no. 12, pp. 1916–1924, 2010.

[6] H. X. Zhang, "Resource-leveling optimization with fixed duration for a large network plan based on the Monte Carlo method," Construction Technology, vol. 18, pp. 81–85, 2015.

[7] H. X. Zhang and Z. L. Yang, "Resource optimization for a large network plan on particle swarm optimization," Mathematics in Practice and Theory, vol. 12, pp. 125–132, 2015.

[8] H. X. Zhang and Z. L. Yang, "Cost optimization for a large network plan based on particle swarm optimization," Mathematics in Practice and Theory, vol. 11, pp. 142–148, 2015.

[9] M. Wang and Q. Tian, "Dynamic heat supply prediction using support vector regression optimized by particle swarm optimization algorithm," Mathematical Problems in Engineering, vol. 2016, Article ID 3968324, 10 pages, 2016.

[10] F. Pan, W. X. Li, and Q. Gao, Particle Swarm Optimization and Multi-objective Optimization, Beijing Institute of Technology Press, 2013.

[11] A. Meng, Z. Li, H. Yin, S. Chen, and Z. Guo, "Accelerating particle swarm optimization using crisscross search," Information Sciences, vol. 329, pp. 52–72, 2016.

[12] Y. Fu, Z. L. Xu, and J. L. Cao, "Application of heuristic particle swarm optimization method in power network planning," Power System Technology, vol. 15, pp. 31–35, 2008.

[13] J. Sun, X. Wu, V. Palade, W. Fang, and Y. Shi, "Random drift particle swarm optimization algorithm: convergence analysis and parameter selection," Machine Learning, vol. 101, no. 1-3, pp. 345–376, 2015.

[14] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing, vol. 11, no. 4, pp. 3658–3670, 2011.

[15] T. O. Ting, Y. Shi, S. Cheng, and S. Lee, "Exponential inertia weight for particle swarm optimization," Lecture Notes in Computer Science, vol. 7331, no. 1, pp. 83–90, 2012.

[16] Y.-T. Juang, S.-L. Tung, and H.-C. Chiu, "Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions," Information Sciences, vol. 181, no. 20, pp. 4539–4549, 2011.

[17] A. Ismail and A. P. Engelbrecht, "The self-adaptive comprehensive learning particle swarm optimizer," Lecture Notes in Computer Science, vol. 7461, pp. 156–167, 2012.

[18] B. Y. Qu, J. J. Liang, and P. N. Suganthan, "Niching particle swarm optimization with local search for multi-modal optimization," Information Sciences, vol. 197, pp. 131–143, 2012.

[19] Y. Chen, D. Zhang, M. Zhou, and H. Zou, "Multi-satellite observation scheduling algorithm based on hybrid genetic particle swarm optimization," in Advances in Information Technology and Industry Applications, vol. 136 of Lecture Notes in Electrical Engineering, pp. 441–448, Springer, Berlin, Germany, 2012.

[20] S. Gholizadeh and F. Fattahi, "Serial integration of particle swarm and ant colony algorithms for structural optimization," Asian Journal of Civil Engineering, vol. 13, no. 1, pp. 127–146, 2012.

[21] A. Kaveh and S. Talatahari, "Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures," Computers & Structures, vol. 87, no. 5-6, pp. 267–283, 2009.

[22] M. Khajehzadeh, M. R. Taha, A. El-Shafie, and M. Eslami, "Modified particle swarm optimization for optimum design of spread footing and retaining wall," Journal of Zhejiang University SCIENCE A, vol. 12, no. 6, pp. 415–427, 2011.

[23] Y. Yang, X. Fan, Z. Zhuo, S. Wang, J. Nan, and W. Chu, "Improved particle swarm optimization based on particles' explorative capability enhancement," Journal of Systems Engineering and Electronics, vol. 27, no. 4, pp. 900–911, 2016.

[24] X. Qi, K. Li, and W. D. Potter, "Estimation of distribution algorithm enhanced particle swarm optimization for water distribution network optimization," Frontiers of Environmental Science & Engineering, vol. 10, no. 2, pp. 341–351, 2016.

[25] Z. Zhang, L. Jia, and Y. Qin, "Modified constriction particle swarm optimization algorithm," Journal of Systems Engineering and Electronics, vol. 26, no. 5, Article ID 07347871, pp. 1107–1113, 2015.

[26] Ch. H. Yang, W. H. Gui, and T. X. Dong, "A particle swarm optimization algorithm with variable random functions and mutation," Acta Automatica Sinica, vol. 7, pp. 1339–1347, 2014.

[27] H. Zhang and Z. Yang, "Large-scale network plan optimization using improved particle swarm optimization algorithm," Mathematical Problems in Engineering, vol. 2017, Article ID 3271969, 2017.

[28] Z. H. Ren and J. Wang, "Accelerate convergence particle swarm optimization algorithm," Control and Decision, vol. 2, pp. 201–206, 2011.

[29] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.

[30] D. S. H. Ma, N. Dong et al., Numerical Calculation Method, China Machine Press, 2015.
