Research Article

Accelerated Particle Swarm Optimization to Solve Large-Scale Network Plan Optimization of Resource-Leveling with a Fixed Duration
Houxian Zhang 1 and Zhaolan Yang2
1 School of Architecture and Civil Engineering, Nanjing Institute of Technology, Nanjing 211167, China
2 Industrial Center, Nanjing Institute of Technology, Nanjing 211167, China
Correspondence should be addressed to Houxian Zhang; [email protected]
Received 28 December 2017; Revised 18 March 2018; Accepted 20 March 2018; Published 16 May 2018
Academic Editor: Anna M. Gil-Lafuente
Copyright © 2018 Houxian Zhang and Zhaolan Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Large-scale network plan optimization of resource-leveling with a fixed duration is challenging in project management. Particle swarm optimization (PSO) has provided an effective way to solve this problem in recent years. Although previous algorithms have accelerated the optimization of large-scale network plans by optimizing the initial particle swarm, how to accelerate this optimization more effectively with PSO is still an issue worth exploring. The main aim of this study was to develop an accelerated particle swarm optimization (APSO) for the large-scale network plan optimization of resource-leveling with a fixed duration. By adjusting the acceleration coefficient, the optimization yielded a better result in this study than previously reported: for the same large-scale network plan, the proposed algorithm improved the leveling criterion by 24% compared with previous solutions. The APSO proposed in this study is similar in form to, but different from, particle swarm optimization with a contraction factor (PSOCF); PSOCF did not have as good adaptability as APSO for network plan optimization. Accelerated convergence particle swarm optimization (ACPSO) is also similar in form to the proposed APSO, but its irrationality was pointed out in this study by analyzing the convergence of the iterative matrix.
1. Introduction
The network plan is considered by the engineering community as a promising management method. A large-scale network plan with many works (e.g., more than 50) is an effective tool for solving large project management problems [1, 2]. However, the number of possible solutions in large-scale network plan optimization sharply increases with the number of works, and the computation time grows exponentially, far beyond the processing capacity of available computing resources; such problems are known as NP problems and cannot be solved exactly by mathematics and computer science [2, 3]. In recent years, genetic algorithms [4, 5], Monte Carlo partition optimization [6], and particle swarm optimization (PSO) [7, 8] have provided effective means to solve this problem.
PSO was proposed in 1995. Although the convergence of PSO is still controversial, its applied research has shown good results [9–13]. Experimental research includes optimization, biomedicine, communication, control, and so forth. Theoretical research includes PSO improvement, parameter selection, stability, convergence, and so forth. Improvements in the performance of PSO reported in the literature include adjusting the parameters of PSO (inertia weight) [14–17], adopting neighborhood topologies [18], and combining PSO with other algorithms (genetic algorithm, simulated annealing algorithm, and differential evolution algorithm) [19–22]. They do not include solutions to large-scale network plan optimization problems.
Accelerated optimization can be marked by better-optimized solutions within the same number of iterations of an iterative optimization. Yang et al. introduced virtual particles in random directions with random amplitudes to enhance the explorative capability of particles in PSO [23]; Qi et al. hybridized an improved estimation of distribution
Hindawi, Mathematical Problems in Engineering, Volume 2018, Article ID 9235346, 11 pages, https://doi.org/10.1155/2018/9235346
algorithm (EDA) using historic best positions to construct a sample space with PSO, both sequentially and in parallel, to improve population diversity control and avoid premature convergence in the optimization of a water distribution network [24]; Zhang et al. added a random velocity operator from local optima to global optima into the velocity update formula of constriction particle swarm optimization (CPSO) to accelerate the convergence of the particles to the global optima and reduce the likelihood of being trapped in local optima [25]; Zhou et al. adjusted random functions with the density of the population so as to manipulate the weights of the cognition part and the social part and executed mutation on both the personal best particle and the group best particle to explore new areas [26]. Zhang and Yang accelerated the optimization of large-scale network plan resources and analyzed the acceleration mechanism via stochastic processes by optimizing the initial particle swarm using the Monte Carlo method under limiting conditions [7, 8, 27]; Ren and Wang proposed a PSO algorithm with accelerated convergence, theoretically proved its fast convergence, and optimized its parameters [28].
Inspired by previous efforts [28] to accelerate the convergence of the PSO algorithm, this study proposed a method for the large-scale network plan optimization of resource-leveling with a fixed duration through debugging the acceleration coefficient (described as accelerated PSO, or APSO for short) and yielded a better solution than reported in the previous literature.
This paper is organized as follows. Section 2 describes the experimental research on the large-scale network plan optimization of resource-leveling with a fixed duration using APSO. Section 3 analyzes the difference between APSO and PSO with a contraction factor (PSOCF) [29]. Section 4 analyzes the irrationality of accelerated convergence PSO (ACPSO) reported in [28].
2. APSO to Solve the Large-Scale Network Plan Optimization of Resource-Leveling with a Fixed Duration
Large-scale network plan optimization of resource-leveling with a fixed duration achieves the balance of resource demand in each period during the entire duration of the project. Equilibrium can be marked by the variance of resources. The formula used to calculate the variance is σ² = (Σ_{j=1}^{J} (x_j − μ)²)/J, where J is the total number of samples x_j and μ is the arithmetic mean of the x_j. The smaller the variance, the more balanced the resource.
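For concreteness, the leveling criterion above can be computed directly from the per-unit-time resource demand as a plain population variance; the function name below is illustrative, not from the original paper.

```python
def resource_variance(demand):
    """Leveling criterion: sigma^2 = sum((x_j - mu)^2) / J, where
    `demand` lists the resource requirement of each unit time."""
    J = len(demand)
    mu = sum(demand) / J
    return sum((x - mu) ** 2 for x in demand) / J

# A perfectly flat profile has zero variance; an uneven one does not.
print(resource_variance([3, 3, 3, 3]))  # 0.0
print(resource_variance([2, 4, 2, 4]))  # 1.0
```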
The evolutionary equation of basic PSO was as follows:
    v_{ij}(t+1) = w v_{ij}(t) + c_1 rand_1(t) (p_{ij}(t) - x_{ij}(t)) + c_2 rand_2(t) (p_g(t) - x_{ij}(t)),
    x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1),    (1)
where t is the number of iterations; x_{ij}(t+1) is the j-dimension space coordinate of particle i in iteration t+1; x_{ij}(t) is the j-dimension space coordinate of particle i in iteration t; w is the inertia weight, usually taking the value of 1 according to experience; v_{ij}(t) is the j-dimension flight speed of particle i; c_1 and c_2 are acceleration constants, usually valued between 0 and 2 by experience; rand_1 and rand_2 are random functions with values in the range [0, 1]; p_{ij}(t) is the best place experienced by particle i; and p_g(t) is the best place experienced by all particles. The convergence condition was adopted by setting a maximum number of iterations G.
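A minimal sketch of one iteration of update (1); the helper name `pso_step` and its argument layout are illustrative assumptions, not taken from the paper.

```python
import random

def pso_step(x, v, p_best, g_best, w=1.0, c1=2.05, c2=2.05):
    """One basic PSO iteration, equation (1): x and v are
    per-particle coordinate lists, p_best the personal bests,
    g_best the global best position."""
    for i in range(len(x)):
        for j in range(len(x[i])):
            r1, r2 = random.random(), random.random()
            v[i][j] = (w * v[i][j]
                       + c1 * r1 * (p_best[i][j] - x[i][j])
                       + c2 * r2 * (g_best[j] - x[i][j]))
            x[i][j] += v[i][j]
    return x, v
```

When a particle sits at rest on the swarm's best-known position, the update leaves it there, as expected from (1).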
The evolutionary equation of accelerated PSO (APSO) was as follows:
    v_{ij}(t+1) = a (w v_{ij}(t) + c_1 rand_1(t) (p_{ij}(t) - x_{ij}(t)) + c_2 rand_2(t) (p_g(t) - x_{ij}(t))),
    x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1),    (2)
where a is the acceleration coefficient, and the other signs are the same as earlier. The evolution equation of the accelerated particle swarm algorithm has one more a than that of the basic PSO algorithm and one more w than that of the particle swarm algorithm with a contraction factor. However, it has produced significant results for solving large-scale network plan optimization of resource-leveling with a fixed duration, as follows.
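The only change relative to update (1) is that the whole velocity expression is scaled by a; a minimal single-value sketch (the function name is illustrative):

```python
import random

def apso_velocity(v, x, p, g, a, w=1.0, c1=2.05, c2=2.05):
    """APSO velocity update, equation (2): the entire basic PSO
    velocity expression is multiplied by the acceleration
    coefficient a. With a = 1 this reduces to equation (1)."""
    r1, r2 = random.random(), random.random()
    return a * (w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x))
```

Setting a = 1 recovers the basic PSO velocity update exactly.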
For example, a large network plan with 223 works is the same as Figure 1 in [27]. The debugging results for varying the acceleration coefficient a are shown in Table 1, where the variance of the corresponding optimization result is 17.58 (better than the variance of 22.99 quoted in [27]). The start time of each work is shown in Table 2, and the resource requirements of each unit time are shown in Table 3.
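The debugging procedure behind Table 1 amounts to rerunning the optimizer for each candidate a and keeping the smallest variance; a sketch, where `run_apso` stands in for the full APSO network-plan optimizer and is assumed rather than given here:

```python
def tune_acceleration(run_apso, candidates=(1, 0.33, 0.31, 0.3, 0.29)):
    """Rerun the (assumed) APSO optimizer for each candidate
    acceleration coefficient and return the best one together with
    its leveling variance."""
    results = {a: run_apso(a) for a in candidates}  # a -> variance
    best_a = min(results, key=results.get)
    return best_a, results[best_a]
```

Replaying the w = 1, c1 = c2 = 2.05 rows of Table 1 through this loop picks out a = 0.3 with variance 17.58.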
As shown in Table 1, for w = 1, c_1 = c_2 = 2.05, 50 particles, and G = 100, the minimum variance of 17.58 could be obtained by adjusting the acceleration coefficient a, significantly better than the variance of 22.99 quoted in [27] without the acceleration coefficient. For w = 1, c_1 = 3.5, c_2 = 0.4, 50 particles, and G = 100, the minimum variance of 18.4 could be obtained by adjusting the acceleration coefficient a, again significantly better than the variance quoted in [27]. For w = 0.8, c_1 = c_2 = 2.05, 50 particles, and G = 100, the minimum variance of 18.93 could be obtained by adjusting the acceleration coefficient a, also significantly better than the variance quoted in [27]. For w = 0.729, c_1 = c_2 = 1.454, 50 particles, and G = 100, a variance smaller than 17.83 (acceleration coefficient 1) could not be obtained by adjusting the acceleration coefficient a.

3. Difference between APSO and PSOCF [29]
The APSO proposed in this study was similar in form to PSOCF. The evolution equation of PSOCF was as follows [29]:
    v_{ij}(t+1) = χ (v_{ij}(t) + c_1 rand_1(t) (p_{ij}(t) - x_{ij}(t)) + c_2 rand_2(t) (p_g(t) - x_{ij}(t))),
    x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1),    (3)
Table 1: Optimization parameter debugging results of the large-scale network plan optimization of resource-leveling with a fixed duration using the accelerated particle swarm algorithm (particle number is 50; G = 100).
Sequence number  w      c_1    c_2    a     σ²
1                1      2.05   2.05   1     23.97
2                1      2.05   2.05   0.33  19.31
3                1      2.05   2.05   0.31  18.97
4                1      2.05   2.05   0.3   17.58
5                1      2.05   2.05   0.29  18.61
6                1      2.05   2.05   3     31.43
7                1      2.05   2.05   0.1   31.39
8                1      2.05   2.05   0.03  31.43
9                1      3.5    0.4    1     22.99
10               1      3.5    0.4    0.03  31.43
11               1      3.5    0.4    0.5   21.4
12               1      3.5    0.4    0.8   31.43
13               1      3.5    0.4    0.3   23.02
14               1      3.5    0.4    0.6   16.35 error
15               1      3.5    0.4    0.4   20.8
16               1      3.5    0.4    0.35  19.9
17               1      3.5    0.4    0.33  18.4
18               1      3.5    0.4    0.31  25.5
19               0.8    2.05   2.05   1     24.53
20               0.8    2.05   2.05   0.3   31.43
21               0.8    2.05   2.05   1.2   31.43
22               0.8    2.05   2.05   0.4   31.43
23               0.8    2.05   2.05   0.5   31.00
24               0.8    2.05   2.05   0.6   25.85
25               0.8    2.05   2.05   0.7   18.93
26               0.8    2.05   2.05   0.8   22.38
27               0.729  1.454  1.454  1     17.83
28               0.729  1.454  1.454  0.8   25.00
29               0.729  1.454  1.454  1.05  20.54
30               0.729  1.454  1.454  0.9   18.33 error
Note. The value in italics is the optimal value under certain parameter conditions.
where the contraction factor χ = 2κ/|2 - φ - sqrt(φ(φ - 4))|, κ ∈ [0, 1], and φ = c_1 + c_2. The other signs are the same as earlier. For c_1 = 3.5, c_2 = 0.4, χ = 2κ/|2 - 3.9 - sqrt(3.9(3.9 - 4))| does not exist, because the expression under the square root is negative. PSOCF could not be used, but the APSO of this study was used for optimization of the network plan and the results were good, as shown in Table 1.
For c_1 = 2.05, c_2 = 2.05, χ = 2κ/|2 - φ - sqrt(φ(φ - 4))| = 2(0∼1)/|2 - 4.1 - sqrt(4.1(4.1 - 4))| = 0∼0.73. The acceleration coefficient a can lie outside the range of the contraction factor χ, and the optimization of APSO in this study was performed as usual, as shown in Table 1.
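Both cases can be checked numerically; `constriction_factor` is an illustrative helper implementing the PSOCF formula above:

```python
import cmath

def constriction_factor(c1, c2, kappa=1.0):
    """PSOCF constriction factor chi = 2*kappa / |2 - phi - sqrt(phi*(phi-4))|
    with phi = c1 + c2. For phi < 4 the square root is imaginary, so the
    real-valued factor is not defined and None is returned."""
    phi = c1 + c2
    root = cmath.sqrt(phi * (phi - 4))
    if root.imag != 0:
        return None  # chi does not exist as a real number
    return 2 * kappa / abs(2 - phi - root.real)

print(constriction_factor(3.5, 0.4))               # None: phi = 3.9 < 4
print(round(constriction_factor(2.05, 2.05), 2))   # 0.73 for phi = 4.1
```

So the upper end of the PSOCF range for c1 = c2 = 2.05 is about 0.73, while Table 1 includes values of a such as 1.05, 1.2, and 3 beyond it.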
Thus, although the APSO of this study was similar in form to PSOCF, essentially, for network plan optimization, PSOCF did not have as good adaptability as APSO.
4. Irrationality of ACPSO Reported in [28]
The APSO proposed in this study was inspired by the ACPSO algorithm quoted in [28] and was similar to it in form. The evolution equation of the ACPSO algorithm proposed in [28] was as follows:
    v_{ij}(t+1) = (sin α)^β (w v_{ij}(t) + c_1 rand_1(t) (p_{ij}(t) - x_{ij}(t)) + c_2 rand_2(t) (p_g(t) - x_{ij}(t))),
    x_{ij}(t+1) = (sin α)^β x_{ij}(t) + v_{ij}(t+1),    (4)
where α is an angle value with a distinct optimization effect when its value is within [0, π/8]; β is a constant greater than zero, and the optimization effect is good when its value is 3. The other signs are the same as earlier.
The ACPSO algorithm quoted in [28] was based on one inference: PSO is an iteration; the iteration converges when the spectral radius of the iterative matrix L (that is, the maximum absolute value of the eigenvalues of the matrix) is less than 1; and the smaller the spectral radius of L, the faster the iteration
Table 2: The parameters and their optimization solution for the optimization example of resource-leveling with a fixed duration.
Number  Work   Duration  Resource quantity  ES  Optimized ES
1       1–3    2   2   0    0
2       1–4    2   1   0    0
3       3–4    2   1   2    2
4       1–2    1   2   0    0
5       4–6    4   1   4    3
6       5–8    4   1   7    7
7       5–9    5   1   7    8
8       6–8    5   3   13   13
9       7–8    8   3   15   15
10      3–6    3   2   2    3
11      2–4    3   2   1    1
12      3–5    5   1   2    2
13      2–8    3   2   1    1
14      5–6    6   2   7    6
15      2–6    1   3   1    1
16      6–7    2   0   13   12
17      8–9    7   3   23   22
18      8–11   2   2   23   25
19      8–10   2   1   23   23
20      7–10   2   1   15   15
21      9–10   1   2   30   29
22      9–12   4   1   30   28
23      18–19  8   3   57   57
24      17–20  2   1   51   52
25      16–19  4   1   49   48
26      18–21  5   1   57   58
27      17–19  5   3   51   51
28      16–17  2   2   49   49
29      15–17  2   1   46   46
30      14–18  2   1   42   41
31      14–17  1   2   42   41
32      13–15  2   1   44   44
33      10–11  4   1   31   30
34      12–13  5   1   39   39
35      10–13  5   3   31   30
36      10–12  8   3   31   30
37      17–18  6   0   51   51
38      15–16  3   2   46   46
39      11–15  5   1   35   34
40      14–15  3   2   42   41
41      11–17  5   1   35   35
42      12–14  3   2   39   38
43      11–13  6   2   35   34
44      12–15  2   0   39   38
45      21–22  2   2   70   69
46      22–24  2   1   75   88
47      20–22  2   1   73   85
48      20–24  1   2   73   85
49      20–26  2   1   73   84
Table 2: Continued.
Number  Work   Duration  Resource quantity  ES  Optimized ES
50      19–21  5   1   65   64
51      19–22  5   3   65   65
52      19–20  8   3   65   76
53      24–25  3   2   77   90
54      24–26  5   1   77   96
55      26–27  3   2   85   101
56      25–26  5   1   80   93
57      23–27  3   2   72   99
58      23–26  6   2   72   70
59      23–24  1   3   72   71
60      21–23  2   0   70   69
61      21–24  7   3   70   70
62      3–28   1   2   2    2
63      3–30   3   2   2    2
64      3–31   1   3   2    3
65      28–30  2   1   3    3
66      28–29  2   2   3    3
67      29–30  2   1   5    5
68      29–31  3   2   5    5
69      29–34  3   2   5    5
70      30–31  4   1   7    8
71      5–31   6   2   7    7
72      5–34   5   3   7    7
73      31–32  2   0   13   14
74      31–34  4   1   13   12
75      32–33  2   1   15   16
76      32–34  8   3   15   16
77      9–34   7   3   30   29
78      9–33   2   1   30   29
79      34–35  2   2   44   44
80      33–34  1   2   43   43
81      33–35  8   3   43   44
82      33–36  5   3   43   42
83      12–33  4   1   39   39
84      12–36  6   2   39   37
85      12–37  5   1   39   37
86      35–36  5   1   51   53
87      35–37  2   0   51   52
88      35–39  5   1   51   53
89      36–37  2   1   56   58
90      14–37  3   2   42   41
91      14–39  2   1   42   41
92      37–38  3   2   58   60
93      37–39  1   2   58   60
94      38–39  2   2   61   63
95      18–39  6   0   57   57
96      18–40  5   3   57   57
97      39–40  8   3   63   77
98      39–41  2   1   63   66
99      40–41  5   1   78   86
Table 2: Continued.
Number  Work   Duration  Resource quantity  ES   Optimized ES
100     40–42  5   3   78   84
101     21–40  8   3   70   70
102     21–42  2   1   70   69
103     21–43  1   2   70   69
104     41–42  2   2   83   92
105     41–43  7   3   83   91
106     41–45  2   1   83   91
107     42–43  2   1   85   95
108     23–43  1   3   72   72
109     23–45  5   1   72   72
110     43–44  3   2   90   98
111     43–45  6   2   90   98
112     44–45  5   1   93   101
113     27–45  3   2   88   104
114     38–40  4   1   61   63
115     47–48  2   2   8    9
116     48–50  2   1   10   11
117     47–50  2   1   8    9
118     48–49  1   2   10   11
119     50–53  4   1   14   16
120     52–57  4   1   49   49
121     52–58  5   1   49   61
122     53–57  5   3   55   56
123     56–57  8   3   57   59
124     47–53  3   2   8    9
125     49–50  3   2   11   12
126     47–52  5   1   8    8
127     49–57  3   2   11   13
128     52–53  6   2   49   50
129     49–53  1   3   11   12
130     53–56  2   0   55   56
131     57–58  7   3   65   68
132     57–61  2   2   65   68
133     57–59  2   1   65   67
134     56–59  2   1   57   58
135     58–59  1   2   72   75
136     58–60  4   1   72   76
137     71–72  8   3   99   102
138     69–75  2   1   93   98
139     68–72  4   1   91   94
140     71–73  5   1   99   103
141     69–72  5   3   93   98
142     68–69  2   2   91   95
143     67–69  2   1   88   92
144     65–71  2   1   84   88
145     65–69  1   2   84   87
146     64–67  2   1   86   90
147     59–61  4   1   73   76
148     60–64  5   1   81   85
Table 2: Continued.
Number  Work   Duration  Resource quantity  ES    Optimized ES
149     59–64  5   3   73    77
150     59–60  8   3   73    77
151     69–71  6   0   93    97
152     67–68  3   2   88    92
153     61–67  5   1   77    80
154     65–67  3   2   84    87
155     61–69  5   1   77    80
156     60–65  3   2   81    85
157     61–64  6   2   77    81
158     60–67  2   0   81    86
159     73–76  2   2   112   115
160     76–79  2   1   117   121
161     75–76  2   1   115   119
162     75–79  1   2   115   118
163     75–83  2   1   115   118
164     72–73  5   1   107   110
165     72–76  5   3   107   110
166     72–75  8   3   107   110
167     79–82  3   2   119   123
168     79–83  5   1   119   123
169     83–84  3   2   127   132
170     82–83  5   1   122   126
171     78–84  3   2   114   118
172     78–83  6   2   114   119
173     78–79  1   3   114   119
174     73–78  2   0   112   116
175     73–79  7   3   112   116
176     46–47  1   2   7     8
177     47–51  3   2   8     9
178     47–54  1   3   8     8
179     46–51  2   1   7     8
180     29–46  2   2   5     6
181     29–51  2   1   5     5
182     29–54  3   2   5     5
183     51–54  4   1   11    12
184     52–54  6   2   49    51
185     34–52  5   3   44    45
186     54–55  2   0   55    57
187     34–54  4   1   44    44
188     55–62  2   1   57    59
189     34–55  8   3   44    43
190     34–58  7   3   44    44
191     58–62  2   1   72    75
192     34–62  1   2   44    45
193     35–62  8   3   51    52
194     62–63  5   3   85    90
195     60–62  4   1   81    86
196     60–63  6   2   81    85
197     60–66  2   1   81    86
Table 2: Continued.
Number  Work   Duration  Resource quantity  ES    Optimized ES
198     35–63  5   1   51    53
199     35–66  2   0   51    52
200     63–66  2   1   90    95
201     65–66  3   2   84    88
202     39–65  2   1   63    64
203     66–70  3   2   92    97
204     39–66  1   2   63    65
205     39–70  2   2   63    66
206     39–71  6   0   63    66
207     71–74  5   3   99    103
208     39–74  8   3   63    64
209     41–74  5   1   83    92
210     74–77  5   3   120   120
211     73–74  8   3   112   112
212     73–77  2   1   112   115
213     73–80  1   2   112   115
214     41–77  2   2   83    92
215     41–80  7   3   83    91
216     77–80  2   1   125   125
217     78–80  1   3   114   118
218     45–78  5   1   98    107
219     80–81  3   2   127   127
220     45–80  6   2   98    108
221     81–84  5   1   130   130
222     45–84  3   2   98    107
223     70–74  4   1   95    100
Note. "ES" is the early start time of each work. "Optimized ES" is the optimized start time.
Table 3: The resource requirements of each unit time of the large-scale network plan optimization of resource-leveling with a fixed duration using the accelerated particle swarm algorithm (duration is 135).
Unit times 1–17:    5 10 10 17 9 12 17 16 17 19 18 14 12 9 9 11 12
Unit times 18–34:   11 7 7 6 6 9 7 4 5 5 3 4 7 12 11 10 10
Unit times 35–51:   12 10 7 10 9 10 7 13 13 13 16 21 19 16 15 16 15
Unit times 52–68:   12 12 14 13 13 11 15 15 19 18 13 10 10 15 15 14 11
Unit times 69–85:   13 16 17 17 15 12 12 12 13 17 14 14 14 16 13 13 14
Unit times 86–102:  14 14 15 15 12 12 13 20 18 14 13 14 11 11 13 11 11
Unit times 103–119: 12 13 10 10 9 10 8 8 10 10 12 12 10 11 12 9 14
Unit times 120–135: 15 11 9 9 8 8 4 3 4 3 3 2 1 3 3 3
converges. The absolute values of the eigenvalues of L are |λ_{1,2}| = |(1 + w - (c_1 + c_2)/2 ± sqrt((-1 - w + (c_1 + c_2)/2)² - 4w)) / 2|, where (-1 - w + (c_1 + c_2)/2)² - 4w ≥ 0. This reasoning was problematic, and the analysis was as follows.
The evolution equation of PSO can be written in matrix form:
    [v(t+1); x(t+1)] = A_t [v(t); x(t)] + [b_1, b_2; b_1, b_2] [p_{i,t}; p_{g,t}],    (5)

where matrix rows are separated by semicolons, A_t = [w, -b; w, 1 - b], b = b_1 + b_2 with b_1 = c_1 rand_1 and b_2 = c_2 rand_2, p_{i,t} is the best place ever found by particle i, and p_{g,t} is the best location found by the whole particle swarm to date. The other signs are the same as earlier.

Set

    y(t) = [v(t); x(t)],
    R_t = [b_1 p_{i,t} + b_2 p_{g,t}; b_1 p_{i,t} + b_2 p_{g,t}].    (6)

Then, (5) can be changed to

    y(t+1) = A_t y(t) + R_t.    (7)
E denotes the mathematical expectation. Then

    E(y(t+1)) = [E(v(t+1)); E(x(t+1))],
    E(A_t) = [w, -(c_1 + c_2)/2; w, 1 - (c_1 + c_2)/2],
    E(R_t) = [(c_1 E(p_{i,t}) + c_2 E(p_{g,t}))/2; (c_1 E(p_{i,t}) + c_2 E(p_{g,t}))/2].    (8)

Set q_{t+1} = E(y(t+1)), M = E(A_t), and R = E(R_t). The characteristic values of M are

    λ_{1,2} = (1 + w - (c_1 + c_2)/2 ± sqrt((-1 - w + (c_1 + c_2)/2)² - 4w)) / 2.    (9)
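Equation (9) can be verified numerically against the eigenvalues of M = E(A_t); the parameter values below are sample choices from Table 1 (with w = 1, c_1 = c_2 = 2.05 the discriminant is negative, so the eigenvalues are complex with modulus sqrt(w) = 1).

```python
import numpy as np

w, c1, c2 = 1.0, 2.05, 2.05
half_phi = (c1 + c2) / 2.0
M = np.array([[w, -half_phi],
              [w, 1.0 - half_phi]])      # M = E(A_t)
eig = np.linalg.eigvals(M)

# Closed form from equation (9); np.emath.sqrt handles the negative
# discriminant by returning a complex root.
disc = (-1.0 - w + half_phi) ** 2 - 4.0 * w
lam = (1.0 + w - half_phi + np.emath.sqrt(disc)) / 2.0

print(np.isclose(eig, lam).any())            # True: (9) matches eigvals
print(np.allclose(np.abs(eig), np.sqrt(w)))  # True: |lambda| = sqrt(w)
```

Since det(M) = w and the eigenvalues here are complex conjugates, each has modulus sqrt(w), so the spectral radius for w = 1 is exactly 1.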
As long as 1 - (c_1 + c_2)/2 ≠ -w ± 2 sqrt(w), matrix M can be diagonalized by some invertible matrix A:

    A M A⁻¹ = L = [λ_1, 0; 0, λ_2].    (10)

Set H_t = A q_t. It could be deduced that the PSO algorithm was an iteration:

    H_{t+1} = L H_t + A R,    (11)

where L is the iterative matrix.
By the inference in [30], the following two equations are equivalent:

    A x = b,
    x = B x + f,    (12)

where A is the coefficient matrix, x is the unknown column vector, b is a constant column vector, and B and f are a constant matrix and vector determined by A and b. The following iteration could be constructed from this system:

    x_{k+1} = B x_k + f.    (13)
Set x* as the solution of the system. Then,

    x* = B x* + f.    (14)

Subtracting (14) from (13) yields

    x_{k+1} - x* = B (x_k - x*) = B (B (x_{k-1} - x*)) = B² (x_{k-1} - x*) = ⋯ = B^{k+1} (x_0 - x*).    (15)

Because x_0 - x* has nothing to do with k, lim_{k→∞} (x_{k+1} - x*) = 0 is equivalent to lim_{k→∞} B^{k+1} = 0. The theorem quoted in [30] shows that lim_{k→∞} B^{k+1} = 0 is equivalent to ρ(B) < 1, where ρ(B) is the spectral radius of matrix B.
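A small numerical illustration of this theorem; the matrices are arbitrary examples, not from the paper.

```python
import numpy as np

def spectral_radius(B):
    """rho(B): the largest absolute value among the eigenvalues of B."""
    return max(abs(np.linalg.eigvals(B)))

B_conv = np.array([[0.5, 0.1],
                   [0.0, 0.4]])   # triangular: rho = 0.5 < 1
B_div = np.array([[1.2, 0.0],
                  [0.3, 0.9]])    # triangular: rho = 1.2 > 1

print(spectral_radius(B_conv) < 1)                          # True
print(np.allclose(np.linalg.matrix_power(B_conv, 200), 0))  # powers vanish
print(spectral_radius(B_div) > 1)                           # True
```

High powers of the matrix with ρ < 1 are numerically zero, while those of the other matrix grow without bound, matching the equivalence lim B^{k+1} = 0 ⟺ ρ(B) < 1.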
Thus, the condition ρ(L) < 1 characterizes convergence only for an iteration of the form (13), which approaches the fixed point of a system of equations. Because the particle swarm algorithm has no such system of equations determining q_{t+1}, the aforementioned reasoning could not be executed via the convergence of the iterative matrix L.
In Table 1, for w = 1, c_1 = c_2 = 2.05 (or c_1 = 3.5, c_2 = 0.4), 50 particles, and G = 100, the acceleration coefficient a = (sin α)^β = sin(π/10)³ ≈ 0.03 gave a variance of 31.43 in both cases, reflecting the fact that the optimization of ACPSO in [28] was poor. This was the experimental verification of the problems of ACPSO quoted in [28].
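The quoted coefficient can be checked directly:

```python
import math

# ACPSO scaling factor (sin(alpha))^beta for alpha = pi/10 and beta = 3,
# the values discussed above.
a = math.sin(math.pi / 10) ** 3
print(round(a, 2))  # 0.03
```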
5. Conclusions
This study proposed a method for the large-scale network plan optimization of resource-leveling with a fixed duration through adjusting the acceleration coefficient of APSO based on the algorithm quoted in [27], obtaining a better solution than previously reported. In other words, for the same large-scale network plan, the proposed algorithm improved the leveling criterion by 24% compared with previous solutions. Thus, the resource variance of 17.58 for the 223-work large-scale network plan is the best result for the large-scale network plan optimization of resource-leveling with a fixed duration reported in the literature to date.
Section 3 discusses the difference between the APSO proposed in this study and PSOCF quoted in [29]. The proposed APSO was similar in form to PSOCF, but, essentially, PSOCF did not have as good adaptability as APSO for network plan optimization.
Section 4 describes the difference between the APSO proposed in this study and ACPSO quoted in [28]. Through
analyzing the iterative matrix convergence of the equations, it was pointed out that the derivation of the iterative matrix convergence of the ACPSO algorithm proposed in [28] was problematic, although experiments showed that APSO was similar in form to ACPSO.
The effect of the APSO proposed in this study was verified experimentally to be obvious. However, the internal working mechanism of APSO is still a core issue worth investigating.
Data Availability
Data generated by the authors or analyzed during the study are available from the following options: (1) data generated or analyzed during the study are available from the corresponding author on request; (2) all data generated or analyzed during the study are included in the published paper.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
[1] E. L. Demeulemeester and W. S. Herroelen, Project Scheduling, Kluwer Academic Publishers, Boston, 2002.
[2] S. F. Li, K. J. Zhu, and D. Y. Wang, "Complexity study of the application of network plan technique to large project," Journal of China University of Geosciences (Social Science Edition), no. 9, pp. 90–94, 2010 (Chinese).
[3] X. F. Liu, Application Research of Network Plan Technique Optimization Methods to Building Construction Management, Tianjin University, Tianjin, 2013.
[4] J.-L. Kim and J. R. D. Ellis, "Permutation-based elitist genetic algorithm for optimization of large-sized resource-constrained project scheduling," Journal of Construction Engineering and Management, vol. 134, no. 11, pp. 904–913, 2008.
[5] J.-W. Huang, X.-X. Wang, and R. Chen, "Genetic algorithms for optimization of resource allocation in large scale construction project management," Journal of Computers, vol. 5, no. 12, pp. 1916–1924, 2010.
[6] H. X. Zhang, "Resource-leveling optimization with fixed duration for a large network plan based on the Monte Carlo method," Construction Technology, vol. 18, pp. 81–85, 2015.
[7] H. X. Zhang and Z. L. Yang, "Resource optimization for a large network plan on particle swarm optimization," Mathematics in Practice and Theory, vol. 12, pp. 125–132, 2015.
[8] H. X. Zhang and Z. L. Yang, "Cost optimization for a large network plan based on particle swarm optimization," Mathematics in Practice and Theory, vol. 11, pp. 142–148, 2015.
[9] M. Wang and Q. Tian, "Dynamic heat supply prediction using support vector regression optimized by particle swarm optimization algorithm," Mathematical Problems in Engineering, vol. 2016, Article ID 3968324, 10 pages, 2016.
[10] F. Pan, W. X. Li, and Q. Gao, Particle Swarm Optimization and Multi-objective Optimization, Beijing Institute of Technology Press, 2013.
[11] A. Meng, Z. Li, H. Yin, S. Chen, and Z. Guo, "Accelerating particle swarm optimization using crisscross search," Information Sciences, vol. 329, pp. 52–72, 2016.
[12] Y. Fu, Z. L. Xu, and J. L. Cao, "Application of heuristic particle swarm optimization method in power network planning," Power System Technology, vol. 15, pp. 31–35, 2008.
[13] J. Sun, X. Wu, V. Palade, W. Fang, and Y. Shi, "Random drift particle swarm optimization algorithm: convergence analysis and parameter selection," Machine Learning, vol. 101, no. 1-3, pp. 345–376, 2015.
[14] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing, vol. 11, no. 4, pp. 3658–3670, 2011.
[15] T. O. Ting, Y. Shi, S. Cheng, and S. Lee, "Exponential inertia weight for particle swarm optimization," Lecture Notes in Computer Science, vol. 7331, no. 1, pp. 83–90, 2012.
[16] Y.-T. Juang, S.-L. Tung, and H.-C. Chiu, "Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions," Information Sciences, vol. 181, no. 20, pp. 4539–4549, 2011.
[17] A. Ismail and A. P. Engelbrecht, "The self-adaptive comprehensive learning particle swarm optimizer," Lecture Notes in Computer Science, vol. 7461, pp. 156–167, 2012.
[18] B. Y. Qu, J. J. Liang, and P. N. Suganthan, "Niching particle swarm optimization with local search for multi-modal optimization," Information Sciences, vol. 197, pp. 131–143, 2012.
[19] Y. Chen, D. Zhang, M. Zhou, and H. Zou, "Multi-satellite observation scheduling algorithm based on hybrid genetic particle swarm optimization," in Advances in Information Technology and Industry Applications, vol. 136 of Lecture Notes in Electrical Engineering, pp. 441–448, Springer, Berlin, Germany, 2012.
[20] S. Gholizadeh and F. Fattahi, "Serial integration of particle swarm and ant colony algorithms for structural optimization," Asian Journal of Civil Engineering, vol. 13, no. 1, pp. 127–146, 2012.
[21] A. Kaveh and S. Talatahari, "Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures," Computers & Structures, vol. 87, no. 5-6, pp. 267–283, 2009.
[22] M. Khajehzadeh, M. R. Taha, A. El-Shafie, and M. Eslami, "Modified particle swarm optimization for optimum design of spread footing and retaining wall," Journal of Zhejiang University SCIENCE A, vol. 12, no. 6, pp. 415–427, 2011.
[23] Y. Yang, X. Fan, Z. Zhuo, S. Wang, J. Nan, and W. Chu, "Improved particle swarm optimization based on particles' explorative capability enhancement," Journal of Systems Engineering and Electronics, vol. 27, no. 4, pp. 900–911, 2016.
[24] X. Qi, K. Li, and W. D. Potter, "Estimation of distribution algorithm enhanced particle swarm optimization for water distribution network optimization," Frontiers of Environmental Science & Engineering, vol. 10, no. 2, pp. 341–351, 2016.
[25] Z. Zhang, L. Jia, and Y. Qin, "Modified constriction particle swarm optimization algorithm," Journal of Systems Engineering and Electronics, vol. 26, no. 5, pp. 1107–1113, 2015.
[26] Ch. H. Yang, W. H. Gui, and T. X. Dong, "A particle swarm optimization algorithm with variable random functions and mutation," Acta Automatica Sinica, vol. 7, pp. 1339–1347, 2014.
[27] H. Zhang and Z. Yang, "Large-scale network plan optimization using improved particle swarm optimization algorithm," Mathematical Problems in Engineering, vol. 2017, Article ID 3271969, 2017.
[28] Z. H. Ren and J. Wang, "Accelerate convergence particle swarm optimization algorithm," Control and Decision, vol. 2, pp. 201–206, 2011.
[29] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[30] D. S. H. Ma, N. Dong et al., Numerical Calculation Method, China Machine Press, 2015.