
J Intell Manuf
DOI 10.1007/s10845-014-0874-y

A heuristic-search genetic algorithm for multi-stage hybrid flow shop scheduling with single processing machines and batch processing machines

Dongni Li · Xianwen Meng · Qiqiang Liang · Junqing Zhao

Received: 12 October 2013 / Accepted: 15 January 2014
© Springer Science+Business Media New York 2014

Abstract This paper addresses the scheduling problem for a multi-stage hybrid flow shop (HFS) with single processing machines and batch processing machines. Each stage consists of nonidentical machines in parallel, and only one of the stages is composed of batch processing machines. Such a variant of the HFS problem is derived from the actual manufacturing of complex products in the equipment manufacturing industry. Aiming at minimizing the maximum completion time and minimizing the total weighted tardiness, respectively, a heuristic-search genetic algorithm (HSGA) is developed in this paper, which selects assignment rules for parts, sequencing rules for machines (including single processing machines and batch processing machines), and batch formation rules for batch processing machines, simultaneously. Parts and machines are then scheduled using the obtained combinatorial heuristic rules. Since the search space composed of the heuristic rules is much smaller than that composed of the schedules, the HSGA results in lower complexity and higher computational efficiency. Computational results indicate that, compared with meta-heuristics that search for scheduling solutions directly, the HSGA has a significant advantage with respect to computational efficiency. Compared with combinatorial heuristic rules, other heuristic-search approaches, and CPLEX, the HSGA provides better optimization performance and is especially suitable for solving large-scale scheduling problems.

D. Li (B) · X. Meng · Q. Liang · J. Zhao
Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing 100081, China
e-mail: [email protected]

X. Meng
e-mail: [email protected]

Q. Liang
e-mail: [email protected]

J. Zhao
e-mail: [email protected]

Keywords Hybrid flow shop · Batch processing machine · Single processing machine · Genetic algorithm · Heuristic rule

Introduction

A hybrid flow shop (HFS) is a typical multi-stage flow shop (Luo et al. 2011). A standard form (Ruiz and Vazquez-Rodriguez 2010) of the HFS problem should satisfy all the conditions listed below: all parts and machines are available at time zero; machines at a given stage are identical; any machine can process only one operation at a time and any part can be processed by only one machine at a time; setup time is negligible; preemption is not allowed; the capacity of buffers between stages is unlimited; and problem data is deterministic and known in advance.

Since the above assumptions are far from actual production, variants of the HFS problem have been attempted in the literature, including allowing dynamic arrival of parts (Kim et al. 2009), allowing stage skipping for parts (Ruiz et al. 2008), or considering setup time for machines (Mirsanei et al. 2011), etc. Of these variants, the HFS with batch processing machines is of great importance (Sung et al. 2000; Wang et al. 2012).

Typical batch processing machines include heat treatment furnaces in mechanical manufacturing, thermal cycling chambers in semiconductor manufacturing, and reacting furnaces in the chemical industry. A batch processing machine is usually the bottleneck on the processing route for the following reasons (Allahverdi et al. 2008; Mathirajan et al. 2010; Potts and Kovalyov 2000): (1) batch processing machines are high in both price and setup cost, so they are very limited in quantity; (2) the processing time required by batch processing operations is usually much longer than that of other types of operations, therefore the queuing time of the parts is lengthened by the batch processing machine; and (3) sometimes the batch processing operation occurs at the end of the processing route and thus has a strong influence on the delivery dates (such as the burn-in operation of semiconductor final testing).

Nevertheless, this area has not been studied widely in the literature as compared to other HFS variants. The related research can be categorized into two branches with respect to the types of machines in the HFS.

Some studies addressed the scheduling problem in an HFS fully composed of batch processing machines. Damodaran et al. (2013) developed a particle swarm optimization (PSO) approach to schedule batch processing machines arranged in a permutation flow shop in order to minimize its makespan. Liu and Karimi (2008) presented different mixed integer linear programming (MILP) models for scheduling with identical and nonidentical batch processing machines respectively, and developed a mixed approach integrating sequence-based and slot-based approaches for mixes of stages with identical and nonidentical batch processing machines. Sung et al. (2000) developed a problem reduction procedure incorporated with two efficient heuristics to solve the scheduling problem for a multi-stage flow shop of batch processing machines.

However, in actual manufacturing environments, various machine types are often included in the same HFS. Therefore some studies addressed scheduling problems in an HFS containing both single processing machines and batch processing machines. Feng et al. (2009) assumed that the capacity of each machine is more than one part, which means a machine can process at least two parts simultaneously, and proposed a genetic algorithm (GA) to minimize the makespan. Su and Chen (2010) developed a heuristic and a branch-and-bound approach for a two-stage hybrid flow shop in which a single processing machine is followed by a batch processing machine. Yao et al. (2012) proposed several combinational heuristic algorithms for a two-stage hybrid flow shop in which a batch processing machine is followed by a single processing machine. Behnamian et al. (2012) considered a single-batch problem and a batch-single problem respectively, taking transportation capacity and transportation time into account. Each problem was formulated as a mixed integer programming (MIP) model and a three-step heuristic algorithm was developed. Luo et al. (2011) considered a two-stage HFS in which multiple identical batch processing machines are located in the first stage, while in the second stage there is only one single processing machine. A MIP model was constructed and GA was used to obtain near-optimal schedules, mainly by minimizing the makespan.

These studies take single processing machines and batch processing machines into account simultaneously; however, most of them are set in the context of a two-stage HFS. In some complex systems in the equipment manufacturing industry, the number of stages is much greater than two, which demands computational efficiency in addition to optimization performance.

In this regard, this paper considers the scheduling problem of the multi-stage HFS. One of the stages is composed of nonidentical batch processing machines, while the others consist of nonidentical single processing machines.

The problem considered in this paper is derived from the equipment manufacturing industry, in which the processing routes for parts usually involve multiple processing types such as machining and heat treatment. According to our survey, with regard to some complex products such as synthetic transmission devices, the processing routes of more than 35 % of parts involve both machining operations and heat treatment operations. The machining machines fall into the category of single processing machines, while the heat treatment furnaces belong to the batch processing machines. Therefore, the scheduling problem becomes more complex when they coexist in the processing route of the same part.

It should be noted that the scheduling problems of the machining phase and the heat treatment phase are usually considered separately. This is mainly because the processing time of a heat treatment operation is usually much longer than that of a machining operation. However, our survey indicates that for some complex products, the number of machining operations far exceeds that of heat treatment operations over the whole processing routes of parts, so that the total processing time of the machining phase is close to, or even slightly longer than, that of the heat treatment phase; on the other hand, it is observed that the processing time of some complicated machining operations reaches up to thousands of minutes. These facts make the machining phase comparable with the heat treatment phase with respect to essential optimization measures such as makespan, mean flow time, and tardiness. Therefore, it is necessary to study scheduling problems in the context of complex processing routes involving both machining operations and heat treatment operations.

In an actual scheduling context with large-scale data, heuristic rules are often preferable due to their simplicity and efficiency, and they are particularly suitable for dealing with complex, dynamic, and unpredictable environments (Hu and Li 2009; Ruiz and Vazquez-Rodriguez 2010). However, no single rule can outperform the others in all circumstances, which means the heuristic rules rely heavily on the scheduling environments and objectives (Park et al. 1997). Therefore, some studies adopted combinatorial heuristic rules (Barman 1997; Laforge and Barman 1989; Sarper and Henry 1996), in which the approaches to obtain the combinatorial rules essentially fall into the category of enumeration. However, for the variant of the HFS problem considering multiple stages and various machine types, the enumeration strategy leads to unaffordable computational costs due to the enormous expansion of the search space. Therefore, there is a need to study more effective strategies for finding appropriate combinatorial rules.

In this regard, meta-heuristics are used to search for suitable combinations of rules, which are then applied for parts scheduling (Dorndorf and Pesch 1995; Fayad and Petrovic 2005; Ponnambalam et al. 2001; Vazquez-Rodriguez and Petrovic 2010; Yang et al. 2007).

Among the above studies, some approaches are operation-oriented, which means the heuristic rules are selected for the operations of parts, rather than for machines. Dorndorf and Pesch (1995) searched for the best sequence of rules for selecting operations via GA. Considering the multi-objective job shop scheduling problem, Vazquez-Rodriguez and Petrovic (2010) developed a hybrid dispatching-rule-based GA, which searches simultaneously for the best sequence of rules and the number of operations to be handled by each rule.

Other approaches are machine-oriented, which means the heuristic rules are selected for machines instead of parts. Fayad and Petrovic (2005) adopted GA to search for the appropriate rule for each machine in a fuzzy job shop problem. Ponnambalam et al. (2001) developed a similar approach for a multi-objective job shop problem.

However, for the problem addressed in this paper, due to the existence of flexible processing routes caused by the parallel machines, the heuristic rules should be selected for both parts and machines.

Actually, Yang et al. (2007) considered a problem similar to ours, i.e., an HFS containing multiple stages with single processing machines and batch processing machines. They developed a heuristic-search approach based on GA, in which heuristic rules are selected for stages, indicating that the machines in the same stage are assigned the same heuristic rule. Nevertheless, even if machines are located in the same stage, they may vary in status such as capacity constraints, processing rates, and the number of parts in their buffers. These variances of machines have a direct impact on the scheduling results. On the other hand, in addition to the assignment subproblem and the sequencing subproblem caused by the flexible routes (Rossi and Dini 2007), the introduction of batch processing machines leads to a third subproblem, i.e., the batch formation subproblem. Yang et al. focused on the sequencing subproblem, assuming the batches are known in advance. However, for an HFS containing both single processing machines and batch processing machines, the batch formation subproblem should be paid more attention, since parts are often dynamically grouped into batches, especially in the equipment manufacturing industry, which considers small batch production as one of its most essential production modes.

Based on the analysis above, in order to obtain good scheduling performance for the HFS containing single processing machines and batch processing machines, the assignment subproblem, sequencing subproblem, and batch formation subproblem should all be considered in an approach that searches for the heuristic rules for machines and parts simultaneously.

In this regard, aiming at minimizing the maximum completion time (Cmax) and minimizing the total weighted tardiness (TWT) respectively, a heuristic-search genetic algorithm (HSGA) is developed in this paper, selecting assignment rules for parts, sequencing rules for machines (including single processing machines and batch processing machines), and batch formation rules for batch processing machines, simultaneously. Scheduling solutions are then generated using the obtained combinatorial rules. Since the search space composed of heuristic rules is much smaller than that composed of schedules, the HSGA has lower complexity and higher efficiency, as compared to a GA searching for schedules directly.

In summary, the contributions of the proposed HSGA include: (1) a variant of the HFS problem containing multiple stages with both single processing machines and batch processing machines is considered; (2) rather than considering the single-batch model and the batch-single model separately, an integrated single-batch-single model for the addressed problem is presented; and (3) the proposed approach considers assignment, sequencing, and batch formation simultaneously, selecting the appropriate heuristic rule for each machine and each part, respectively.

The rest of this paper is organized as follows. Section 2 presents the mathematical model of the addressed problem. In Sect. 3 the HSGA is developed in detail. A series of experimental tests and the computational results are presented in Sect. 4. Section 5 concludes the paper.

Problem description

There are N parts that have to be processed in a k-stage (k ≥ 3) HFS. Each stage consists of nonidentical machines in parallel that are either single processing machines or batch processing machines, and only one stage contains batch processing machines. Each part has its own release time and due date. A part has exactly one operation at each stage. Since a part cannot skip any stage, the processing of a part in stage k can be referred to as the k-th operation of that part. Parts are processed by the single processing machines before reaching the stage for the batch processing machines, where parts are grouped into batches considering the capacity constraints of the batch processing machines. Upon completion of the batch processing operation, the parts in the batch continue to be processed on the following single processing machines respectively, until the end of their processing routes.

Assumptions and problem formulation

The addressed scheduling problem is based on the following assumptions.

• The capacity of buffers between stages is unlimited.
• Stage skipping is not allowed for all parts.
• The volumes of all parts are equal, so that the batch size is determined only by the number of parts constituting that batch.
• The processing time of a part at each stage is known and constant.
• Parts are compatible to construct batches.
• Sequence independent setup time is considered, and the setup time is known and constant.
• Transportation time between machines is neglected.
• Preemptive scheduling is not allowed.

The problem formulation can be described using a triplet α|β|γ notation, which is derived from Ruiz and Vazquez-Rodriguez (2010). In the triplet notation, α defines the shop configuration or scheduling environment, β lists the constraints and assumptions, and γ indicates the objective function. With this notation, two different scheduling problems are considered in this paper to verify the adaptability of the HSGA, which are listed as follows.

HF_K, \left(PM^{(k)}\right)_{k=1}^{K} \;\Big|\; r_j,\, S_{snd},\, batch(k') \;\Big|\; C_{max} \quad (1)

HF_K, \left(PM^{(k)}\right)_{k=1}^{K} \;\Big|\; r_j,\, S_{snd},\, batch(k') \;\Big|\; TWT \quad (2)

As indicated above, problems (1) and (2) have the same α and β, where HF_K represents an HFS containing K stages; (PM^{(k)})_{k=1}^{K} indicates that each stage consists of nonidentical parallel machines; r_j represents unequal release times of parts; S_snd represents sequence independent setup time; and batch(k') indicates that the batch processing machines are located at stage k'. As shown by parameter γ, problems (1) and (2) have different objective functions, minimizing Cmax for problem (1) and minimizing TWT for problem (2), respectively.

Mathematical model

The following notations are adopted to describe the above problems.

Indexes:

j: index for parts (j = 1, ..., N)
m: index for machines (m = 1, ..., M)
k: index for stages (k = 1, ..., K)
b: index for batches (b = 1, ..., B)

Parameters:

t_c: current time
t: time period (t = 1, 2, ..., T)
CM_m: capacity of machine m,
CM_m = \begin{cases} 1, & \text{if } m \text{ is a single processing machine} \\ N^{+}\,(\geq 2), & \text{if } m \text{ is a batch processing machine} \end{cases}
p_{jkm}: processing time for part j on machine m at stage k
p_{jk}: processing time for part j at stage k
st_{jkm}: setup time for part j on machine m at stage k
C_{jk}: completion time for part j at stage k
d_j: due date for part j
r_j: release time for part j
w_j: weight for part j
O_{jk}: the operation of part j processed at stage k, i.e., the k-th operation of part j
TO_{jk}: type of operation O_{jk},
TO_{jk} = \begin{cases} 1, & \text{if } O_{jk} \text{ is a batch processing operation} \\ 0, & \text{if } O_{jk} \text{ is a single processing operation} \end{cases}
TM_m: type of machine m,
TM_m = \begin{cases} 1, & \text{if } m \text{ is a batch processing machine} \\ 0, & \text{if } m \text{ is a single processing machine} \end{cases}

Decision variables:

X_{jkb} = \begin{cases} 1, & \text{if part } j \text{ is assigned to batch } b \text{ at stage } k \\ 0, & \text{otherwise} \end{cases}

Y_{jkm} = \begin{cases} 1, & \text{if part } j \text{ is processed on machine } m \text{ at stage } k \\ 0, & \text{otherwise} \end{cases}

S_{jkt} = \begin{cases} 1, & \text{if the processing of part } j \text{ at stage } k \text{ starts at time } t \\ 0, & \text{otherwise} \end{cases}

Based on the assumptions and notations given above, the mathematical model of this problem is presented in this subsection.

The objectives of the addressed scheduling problem are to minimize the maximum completion time and to minimize the total weighted tardiness, respectively, which can be expressed mathematically by objective functions (3) and (4) as follows.

\min \max_{j} \{ C_{jK} \} \quad (3)

\min \sum_{j=1}^{N} w_j \max\{ C_{jK} - d_j,\, 0 \} \quad (4)
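To make objectives (3) and (4) concrete, the following is an illustrative sketch (the part data are hypothetical, not from the paper) computing both objective values from the stage-K completion times C_jK.

```python
def makespan(completion):
    """Objective (3): maximum completion time over all parts."""
    return max(completion)

def total_weighted_tardiness(completion, due, weight):
    """Objective (4): sum of w_j * max(C_jK - d_j, 0) over all parts."""
    return sum(w * max(c - d, 0) for c, d, w in zip(completion, due, weight))

C = [42, 55, 38]   # C_jK: completion time of each part at the last stage
d = [40, 60, 30]   # d_j: due dates
w = [2, 1, 3]      # w_j: weights

print(makespan(C))                        # 55
print(total_weighted_tardiness(C, d, w))  # 2*2 + 0 + 3*8 = 28
```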


The actual production has many peculiar characteristics and must be subject to some constraints, which we describe mathematically below.

\sum_{m=1}^{M} Y_{jkm} = 1, \quad \forall j, k \quad (5)

(1 - TO_{jk}) + TO_{jk} \sum_{b=1}^{B} X_{jkb} = 1, \quad \forall j, k \quad (6)

\sum_{t=1}^{T} S_{jkt} = 1, \quad \forall j, k \quad (7)

\sum_{t=1}^{T} t\, S_{j1t} \geq r_j, \quad \forall j \quad (8)

(1 - TM_m)\, S_{jkt} Y_{jkm} \sum_{j'=1}^{N} \sum_{t'=t}^{t+p_{jk}} S_{j'kt'} Y_{j'km} = 0, \quad \forall j, k, m, \text{ and } j \neq j' \quad (9)

\sum_{j=1}^{N} Y_{jkm} X_{jkb} \leq CM_m, \quad \forall b \quad (10)

p_{jk} \geq (1 - TO_{jk}) \sum_{m=1}^{M} Y_{jkm} p_{jkm} + TO_{jk} \max \sum_{j=1}^{N} \sum_{m=1}^{M} X_{jkb} Y_{jkm} p_{jkm} \quad (11)

\sum_{t=0}^{T} t\, S_{j(k+1)t} \geq \sum_{t=0}^{T} t\, S_{jkt} + p_{jk} + \sum_{m=1}^{M} Y_{jkm} st_{jkm}, \quad \forall j, \text{ and } k = 1, \ldots, K-1 \quad (12)

X_{jkb} \sum_{t=0}^{T} t\, S_{jkt} = X_{j'kb} \sum_{t=0}^{T} t\, S_{j'kt}, \quad j \neq j',\; TO_{jk} = 1,\; \forall j, k, b \quad (13)

C_{jk} \geq \sum_{t=0}^{T} t\, S_{jkt} + p_{jk} + \sum_{m=1}^{M} Y_{jkm} st_{jkm}, \quad \forall j, k \quad (14)

Constraint (5) ensures that a part is assigned to only one machine at a time, while constraint (6) ensures that a part is assigned to only one batch. With constraint (7), a part must be processed once at each stage. Constraint (8) indicates that a part cannot be processed before its release time. Constraint (9) ensures that a single processing machine can process no more than one part at a time. Constraint (10) states that the size of a batch cannot exceed the capacity of the batch processing machine. Constraint (11) represents the processing time of a part at stage k, which depends on the type of the k-th operation of that part: it remains unchanged for a single processing operation, while for a batch processing operation, the processing time is the maximum processing time among the parts constituting the batch. Constraint (12) ensures that a part cannot be processed at a stage before its preceding stage is completed. Constraint (13) indicates that the parts assigned to the same batch have the same starting time at the batch processing stage. Constraint (14) defines the completion time for a part at a stage.

Heuristic-search genetic algorithm

In this section, the HSGA is developed to solve the proposed scheduling problem. The search space of the HSGA is composed of heuristic rules rather than schedules: the HSGA searches for suitable heuristic rules for machines and parts, and then schedules parts and machines using the obtained combinatorial heuristic rules. Because this search space is much smaller than the space of schedules, the HSGA has lower complexity and higher computational efficiency compared to meta-heuristics that search for scheduling solutions directly.

For simplicity of illustration, the concept of entity is adopted to comprehensively represent the item(s) processed on a machine. For a single processing machine, an entity represents a part, while for a batch processing machine, an entity represents a batch.

Encoding scheme

In the HSGA, the chromosome representation contains three segments, i.e., the part segment, the machine segment, and the batch segment, corresponding to part assignment, entity sequencing, and batch formation respectively. Each gene in a segment holds an integer representing a certain heuristic rule. Although machines are located in stages, they have different features of their own, such as machine capacities, processing rates, and the queues in their buffers. Similarly, parts may differ in release times, due dates, processing routes, etc. Therefore it is more beneficial to consider these various factors and design the encoding scheme for parts and machines, rather than for stages.

In the HSGA, five heuristic rules are listed as candidates for part assignment, eleven for entity sequencing, and three for batch formation. For an entity on the batch processing machine, the due date, arrival time, and release time of a batch are determined by the minimum due date, minimum arrival time, and minimum release time of the parts in the batch, respectively. Similarly, the maximum processing time, maximum setup time, maximum remaining processing time, and maximum weight of the parts in a batch are taken as the processing time, setup time, remaining processing time, and weight of the batch, respectively.
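The three-segment encoding can be sketched in code as follows. This is a minimal illustration with hypothetical names, not the paper's implementation; only the rule counts (five assignment, eleven sequencing, three batch formation) are taken from the text.

```python
import random

ASSIGNMENT_RULES = ["FA", "LU", "MA", "SPT", "EFT"]              # 5 candidates
SEQUENCING_RULES = ["FIFO", "TIS", "SPT", "SRPT", "LEFT", "SPTR",
                    "EDD", "MS", "CR", "WSPT", "WEDD"]            # 11 candidates
BATCH_RULES = ["FIFO", "SPT", "EDD"]                              # 3 candidates

def random_chromosome(n_parts, n_machines, batch_machines):
    """Return (part_segment, machine_segment, batch_segment): one assignment
    rule index per part, one sequencing rule index per machine, and one batch
    formation rule index per batch processing machine."""
    part_seg = [random.randrange(len(ASSIGNMENT_RULES)) for _ in range(n_parts)]
    machine_seg = [random.randrange(len(SEQUENCING_RULES)) for _ in range(n_machines)]
    batch_seg = [random.randrange(len(BATCH_RULES)) for _ in batch_machines]
    return part_seg, machine_seg, batch_seg

# Mirrors the paper's Fig. 1 setting: 5 parts, 6 machines, 2 of them batch machines.
p, m, b = random_chromosome(5, 6, batch_machines=[4, 5])
print(len(p), len(m), len(b))  # 5 6 2
```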


The candidate assignment rules are listed as follows.

• First Available (FA): The part is assigned to the first available machine among the alternatives that are defined in its processing route.

• Least Utilization (LU): The part is assigned to the machine with the least utilization rate among the alternatives that are defined in its processing route.

• Most Available (MA): The part is assigned to the machine with the least number of queuing parts in its buffer among the alternatives that are defined in its processing route.

• Shortest Processing Time (SPT): The part is assigned to the machine with the shortest processing time of the part among the alternatives that are defined in its processing route.

• Earliest Finish Time (EFT): The part is assigned to the machine with the least value of the sum of the earliest available time and processing time among the alternatives that are defined in the processing route of that part.
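As a concrete illustration of one of these rules, the following sketch (hypothetical data and names, not the paper's code) applies EFT: among the alternative machines, pick the one minimizing earliest available time plus processing time.

```python
def eft_choice(machines, avail, proc):
    """EFT rule: return the machine minimizing avail[m] + proc[m]."""
    return min(machines, key=lambda m: avail[m] + proc[m])

avail = {"m1": 10, "m2": 4}  # earliest available time of each alternative machine
proc = {"m1": 3, "m2": 8}    # processing time of the part on each machine

print(eft_choice(["m1", "m2"], avail, proc))  # m2 (4 + 8 = 12 < 10 + 3 = 13)
```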

The candidate sequencing rules are listed below. Most of them are taken from Yang et al. (2007), while two additional sequencing rules, i.e., weighted shortest processing time and weighted earliest due date, are introduced for the objective of minimizing total weighted tardiness.

• First In First Out (FIFO): The entity that arrives at the machine first is selected.

• Time In Shop (TIS): The entity with the longest time in shop is selected. Time in shop for entity e, denoted tis_e, is defined in (15).

tis_e = t_c - r_e \quad (15)

where r_e represents the ready time of entity e.

• Shortest Processing Time (SPT): The entity with the shortest processing time is selected.

• Shortest Remaining Processing Time (SRPT): The entity with the shortest remaining processing time is selected. Remaining processing time for entity e at stage k, denoted rpt_{ek}, is defined in (16).

rpt_{ek} = \sum_{k'=k+1}^{K} \bar{p}_{ek'm} \quad (16)

where p_{ekm} represents the processing time of entity e on machine m at stage k, and \bar{p}_{ekm} is the average processing time over the alternative machines at stage k.

• Largest Estimated Flow Time (LEFT): The entity with the largest estimated flow time is selected. Estimated flow time for entity e at stage k, denoted eft_{ek}, is defined in (17).

eft_{ek} = t_c - at_{ek} + rpt_{ek} \quad (17)

where at_{ek} represents the arrival time of entity e at stage k.

• Smallest Processing Time Ratio (SPTR): The entity with the smallest processing time ratio is selected. Processing time ratio for entity e at stage k on machine m, denoted ptr_{ekm}, is defined in (18).

ptr_{ekm} = p_{ekm} / (t_c - r_e) \quad (18)

• Earliest Due Date (EDD): The entity with the earliest due date is selected.

• Minimum Slack (MS): The entity with the smallest slack is selected. Slack for entity e at stage k, denoted slack_{ek}, is defined in (19).

slack_{ek} = d_e - t_c - rpt_{ek} \quad (19)

where d_e represents the due date of entity e.

• Critical Ratio (CR): The entity with the smallest critical ratio is selected. Critical ratio for entity e at stage k, denoted cr_{ek}, is defined in (20).

cr_{ek} = (d_e - t_c) / rpt_{ek} \quad (20)

• Weighted Shortest Processing Time (WSPT): The entity with the smallest weighted processing time is selected. Weighted processing time for entity e at stage k on machine m, denoted wp_{ekm}, is defined in (21).

wp_{ekm} = p_{ekm} / w_e \quad (21)

where w_e represents the weight of entity e.

• Weighted Earliest Due Date (WEDD): The entity with the smallest weighted due date is selected. Weighted due date for entity e, denoted wd_e, is defined in (22).

wd_e = d_e / w_e \quad (22)
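A few of the sequencing priorities above can be sketched directly from their definitions. The entity records below are hypothetical stand-ins for the paper's notation, used only to show how a rule picks the next entity.

```python
def time_in_shop(tc, r_e):           # (15): tis_e = t_c - r_e
    return tc - r_e

def slack(d_e, tc, rpt_ek):          # (19): slack_ek = d_e - t_c - rpt_ek
    return d_e - tc - rpt_ek

def critical_ratio(d_e, tc, rpt_ek): # (20): cr_ek = (d_e - t_c) / rpt_ek
    return (d_e - tc) / rpt_ek

def weighted_due_date(d_e, w_e):     # (22): wd_e = d_e / w_e
    return d_e / w_e

tc = 100  # current time
entities = [dict(r=80, d=140, rpt=20, w=2),   # entity 0
            dict(r=60, d=120, rpt=25, w=1)]   # entity 1

# MS selects the entity with the smallest slack:
ms_pick = min(range(len(entities)),
              key=lambda i: slack(entities[i]["d"], tc, entities[i]["rpt"]))
print(ms_pick)  # 1, since slack = 120-100-25 = -5 < 140-100-20 = 20
```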

The candidate batch formation rules are listed as follows.

• First In First Out (FIFO): Parts waiting to be grouped into batches are sorted by arrival time in nondescending order.

• Shortest Processing Time (SPT): Parts waiting to be grouped into batches are sorted by the processing time of the batch processing operation in nondescending order.

• Earliest Due Date (EDD): Parts waiting to be grouped into batches are sorted by due date in nondescending order.

To illustrate the encoding, an instance is given. Assume there are five parts to be scheduled in a HFS. The HFS contains three stages with six machines in total, two of which are batch processing machines. A possible chromosome is shown in Fig. 1. The value of the first position in the part segment represents the heuristic rule adopted by the first part for the assignment subproblem; as indicated in Fig. 1, MA is adopted as the assignment rule. Similarly, EDD is adopted by the first machine as the sequencing rule. It is observed from the batch segment in Fig. 1 that a batch processing machine is associated with both a batch formation rule and a sequencing rule.

Fig. 1 An example of a possible chromosome

Decoding scheme

The decoding scheme, together with the algorithms for part assignment, batch formation, and entity sequencing, is presented in this subsection.

Part assignment

To facilitate the description of the assignment algorithm, the assignable part set is defined in Definition 1.

Definition 1 For a part assignment decision at time t, the assignable part set, denoted APS_t, is defined as the set of parts that satisfy one of the following conditions:

(i) Let O_jk represent the next operation of part j; if k = 1, i.e., O_jk is the first operation of part j, then j ∈ APS_t;

(ii) Otherwise, if O_{j,k-1} is completed, k = 2, 3, ..., K, then j ∈ APS_t.

Let p(i) represent the i-th part in APSt and AssignmentRulej represent the assignment rule for part j. The heuristic algorithm for assignment is presented as follows.

Part Assignment Heuristic Algorithm (PAHA)

Step 1. Upon the arrival of a new part or the completion of a part at time t, let i = 1, and update APSt;
Step 2. If i > |APSt|, go to Step 6;
Step 3. For all alternative machines with the capability to process p(i), choose the best machine m* according to AssignmentRulej;
Step 4. Assign p(i) to m*;
Step 5. i = i + 1, and go to Step 2;
Step 6. APSt = ∅. End.
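A minimal sketch of the PAHA follows. It is ours, not the paper's implementation: each part's assignment rule is modeled as a function scoring candidate machines, the machine with the lowest score is chosen as m*, and the example "shortest queue" rule is an illustrative assumption.

```python
def paha(assignable_parts, machines, assignment_rules):
    """Part Assignment Heuristic Algorithm (sketch): assign each part in
    APS_t to the best machine among those capable of processing it."""
    assignments = {}
    for part in assignable_parts:                        # Steps 2 and 5
        rule = assignment_rules[part]                    # AssignmentRule_j
        candidates = [m for m, info in machines.items()
                      if part in info["can_process"]]
        best = min(candidates, key=lambda m: rule(part, m))  # Step 3: pick m*
        machines[best]["queue"].append(part)             # Step 4: assign
        assignments[part] = best
    return assignments                                   # Step 6: APS_t emptied

# Usage: two machines, and a "choose the shortest queue" assignment rule.
machines = {
    "M1": {"can_process": {"j1", "j2"}, "queue": ["j0"]},
    "M2": {"can_process": {"j1", "j2"}, "queue": []},
}
shortest_queue = lambda part, m: len(machines[m]["queue"])
result = paha(["j1", "j2"], machines,
              {"j1": shortest_queue, "j2": shortest_queue})
# j1 takes the empty M2; j2 then goes to M1 (ties broken by dict order)
```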

Batch formation

To facilitate the description of the batch formation algorithm, the candidate part set is defined in Definition 2.

Definition 2 The candidate part set for batch processing machine m, denoted CPSm, is defined as the set of parts that have been assigned to m according to the PAHA.

Let BatchFormationRulem represent the batch formation rule for machine m, and pos(j) represent the position of part j in CPSm after sorting parts according to BatchFormationRulem. The heuristic algorithm for batch formation is presented as follows.

Batch Formation Heuristic Algorithm (BFHA)

Step 1. Let batch set BU = ∅ and update CPSm;
Step 2. Sort the parts belonging to CPSm according to BatchFormationRulem;
Step 3. Select parts, the number of which is no more than CMm, to construct batch b, b = {j | j ∈ CPSm and pos(j) ≤ CMm}, BU = BU ∪ {b}, CPSm = CPSm − {j | j ∈ b};
Step 4. If CPSm ≠ ∅, go to Step 3;
Step 5. Empty the buffer of m and reassign all batches to m. End.
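Under the assumption that CMm is a simple count capacity, the BFHA reduces to sorting CPSm by the batch formation rule and cutting the sorted list into chunks of at most CMm parts. The sketch below is ours; all names are illustrative.

```python
def bfha(cps_m, batch_rule_key, cm_m):
    """Batch Formation Heuristic Algorithm (sketch): sort CPS_m by the
    batch formation rule (Step 2), then repeatedly take the first cm_m
    parts as a batch until CPS_m is empty (Steps 3-4)."""
    ordered = sorted(cps_m, key=batch_rule_key)
    batches = []
    while ordered:
        batches.append(ordered[:cm_m])   # parts with pos(j) <= CM_m
        ordered = ordered[cm_m:]
    return batches                       # Step 5: batches reassigned to m

# Example: five parts batched by EDD order with batch capacity 2.
due = {"j1": 40, "j2": 10, "j3": 30, "j4": 20, "j5": 50}
batches = bfha(list(due), due.get, 2)
# batches: [["j2", "j4"], ["j3", "j1"], ["j5"]]
```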

Entity sequencing

To facilitate the description of the sequencing algorithm, the schedulable entity set is defined in Definition 3.

Definition 3 The schedulable entity set of machine m, denoted SESm, is defined as the set of entities that have been assigned to m according to the PAHA or the BFHA.

Let SequencingRulem represent the sequencing rule assigned to machine m, and e represent a schedulable entity to be processed.

The sequencing algorithm is presented below.

Entity Sequencing Heuristic Algorithm (ESHA)

Step 1. Update SESm;
Step 2. For all entities e ∈ SESm, choose the best entity e* according to SequencingRulem;
Step 3. Entity e* starts to be processed on machine m;
Step 4. SESm = SESm − {e*} and remove e* from the buffer of machine m;
Step 5. End.

Chromosome decoding

Since the HSGA searches for rules rather than for schedules, a discrete event simulator (DES) is designed to decode the heuristics in order to construct the schedule and obtain the corresponding value of the objective function. The decoding algorithm is presented as follows.


Decoding Algorithm

Step 1. Simulation clock t = 0;
Step 2. For the different segments of a given chromosome, assign the sequencing rules to machines (including single processing machines and batch processing machines), the assignment rules to parts, and the batch formation rules to batch processing machines, respectively;
Step 3. If all parts are completed, go to Step 11;
Step 4. For each assignable part j, perform the PAHA;
Step 5. If all machines are busy, go to Step 10;
Step 6. If machine m becomes idle and m is a batch processing machine, go to Step 7; otherwise, if m is a single processing machine, go to Step 8;
Step 7. Perform the BFHA;
Step 8. Schedule one entity according to the ESHA;
Step 9. Record the starting time, completion time and machine of the corresponding operation of the entity;
Step 10. t = t + 1, go to Step 3;
Step 11. Calculate the corresponding values of the objective function according to (3) and (4) using the information recorded in Step 9. End.
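The decoding loop can be sketched as a unit-time-step simulator. The toy below is ours and deliberately collapses the PAHA/BFHA/ESHA calls into a single-machine dispatcher, purely to make the clock-advance and termination control flow concrete; it is not the paper's DES.

```python
def decode(proc_times, sequencing_key):
    """Toy decoder: one machine, unit-time clock.  Whenever the machine is
    idle, an ESHA-like step picks the waiting job with the smallest key;
    start/completion times are recorded, and Cmax is returned at the end."""
    waiting = set(proc_times)              # all jobs released at t = 0
    busy_until, record, t = 0, {}, 0       # Step 1: clock t = 0
    while waiting or t < busy_until:       # Step 3: stop when all completed
        if t >= busy_until and waiting:    # machine idle -> sequence (Step 8)
            job = min(waiting, key=sequencing_key)
            waiting.remove(job)
            busy_until = t + proc_times[job]
            record[job] = (t, busy_until)  # Step 9: record start/completion
        t += 1                             # Step 10: advance the clock
    return max(end for _, end in record.values())  # Step 11: Cmax

proc = {"j1": 3, "j2": 1, "j3": 2}
cmax = decode(proc, proc.get)  # SPT as the sequencing key
```

On a single machine Cmax equals the total processing time (here 6) for any sequencing rule; the rule only changes the job order and hence tardiness-type objectives.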

Fitness function and genetic operators

The fitness function is used to evaluate how well a chromosome performs for a certain objective; it is defined in (23).

fit(i) = 1/(obj(i) + 1) (23)

where obj(i) represents the corresponding objective function value of the i-th chromosome, which is obtained by the decoding algorithm presented in Sect. 3.2.4.

Roulette wheel selection is widely used as an effective selection operator (Zandieh et al. 2010). The probability for a chromosome to be selected, denoted prob(i), is defined in (24).

prob(i) = fit(i)/∑_{i=1}^{n} fit(i) (24)

A two-point crossover operator is adopted with the given crossover probability. Once two parents are selected, two positions in the chromosome are selected randomly, and the substrings between the two positions are exchanged between the two parents to produce two new child chromosomes. Then mutation is performed with the given mutation probability. One-point mutation is adopted in this paper, which selects a mutation position randomly and substitutes the gene with another heuristic rule presented in Sect. 3.1. The genetic operators above are applied to the three segments of a chromosome respectively, using the same crossover probability and mutation probability.
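The fitness function, roulette wheel selection, two-point crossover and one-point mutation described above can be sketched as follows. This is our illustration, with a chromosome modeled as a flat list of rule indices; segment-wise application is omitted.

```python
import random

def fitness(obj):
    """Eq. (23): fit(i) = 1 / (obj(i) + 1)."""
    return 1.0 / (obj + 1.0)

def roulette_select(population, obj_values):
    """Eq. (24): pick a chromosome with probability fit(i) / sum of fits."""
    fits = [fitness(o) for o in obj_values]
    r = random.uniform(0.0, sum(fits))
    acc = 0.0
    for chrom, f in zip(population, fits):
        acc += f
        if r <= acc:
            return chrom
    return population[-1]

def two_point_crossover(p1, p2):
    """Exchange the substring between two random cut points."""
    i, j = sorted(random.sample(range(len(p1) + 1), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def one_point_mutation(chrom, rule_pool):
    """Replace one randomly chosen gene with another candidate rule."""
    c = chrom[:]
    c[random.randrange(len(c))] = random.choice(rule_pool)
    return c
```

Note that two-point crossover conserves the multiset of genes across the two children, so every child remains a valid rule assignment as long as both parents were.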

Computational experiments and results

Experiment design

Since there are no benchmarks available for the addressed problem, 18 test problems are generated randomly. These test problems are of different sizes, with the number of parts ranging from 10 to 95, the number of machines ranging from 8 to 21, and the number of stages ranging from 3 to 7. A test problem is denoted jn1mn2sn3, which represents n1 parts in an n3-stage HFS with n2 machines in total. The number of single processing machines located at each stage is randomly generated in [2, 4], and the number of batch processing machines is randomly generated in [2, 5]. The machines allocated to each stage are shown in Table 1. The attributes of parts are generated randomly according to Table 2. For each test problem, 10 instances are generated randomly; therefore, there are 180 instances in total. Each instance is evaluated through 5 independent replications. The due date of a part is generated by (25).

dj = rj + dl ∑_{k=1}^{K} pjk (25)

In (25), dl is the due date level and reflects the tightness of the due date, and ∑_{k=1}^{K} pjk represents the sum of the processing times of all operations of the part.

All the algorithms are coded in Java and run on a PC with a 3.4 GHz Core i7-2600 CPU and 4 GB of RAM.
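For instance, the due date rule of Eq. (25) amounts to the following one-liner (a sketch; parameter names are ours):

```python
def due_date(release, proc_times, dl):
    """Eq. (25): d_j = r_j + dl * sum of the processing times of all
    operations of part j; dl controls the tightness of the due date."""
    return release + dl * sum(proc_times)

# A part released at t = 10 with operation times 5, 20, 5 and dl = 2:
d = due_date(10, [5, 20, 5], 2)  # 10 + 2 * 30 = 70
```

A smaller dl yields tighter due dates, which is exactly the lever used in the due date sensitivity analysis below.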

Parameter setting of the HSGA

A full factorial design experiment is conducted in order to identify an optimal parameter setting of the HSGA. The related parameters include the population size (ps), crossover probability (pc), mutation probability (pm) and maximum generation (gm). Each parameter is considered as a factor, and the levels of the factors are shown in Table 3.

The generated 192 experiments are analyzed by means of a multifactor analysis of variance (ANOVA) (Montgomery 2000) at the 95 % confidence level. We consider 18 stochastically independent replications for each parameter combination; therefore, there are 3,456 test instances in total. Each test instance is tested by 5 independent replications, and the average value is selected as the performance measure. The response variable of the ANOVA is defined in (26).

IncreaseOverBest = ∑_{i=1}^{I} (HSGAni − Refi)/(I × Refi) × 100 (26)

where I is the total number of test problems, HSGAni is the objective function value obtained by solving the i-th test


Table 1 Machines assigned to each stage

Test problems / Machines (listed in stage order; the batch processing machines form a single stage between the single processing machines)

j10m8s3, j15m8s3: M1 M2; B3 B4 B5 B6; M7 M8
j20m11s3, j25m11s3: M1 M2 M3 M4; B5 B6 B7 B8; M9 M10 M11
j30m7s3, j35m7s3: M1 M2; B3 B4; M5 M6 M7
j40m13s5, j45m13s5: M1–M5; B6 B7; M8–M13
j50m15s5, j55m15s5: M1–M6; B7 B8 B9; M10–M15
j60m16s5, j65m16s5: M1–M8; B9 B10 B11 B12; M13–M16
j70m20s7, j75m20s7: M1–M7; B8 B9 B10 B11; M12–M20
j80m21s7, j85m21s7: M1–M11; B12 B13; M14–M21
j90m21s7, j95m21s7: M1–M9; B10 B11 B12; M13–M21

*Mi represents single processing machines and Bi represents batch processing machines

Table 2 Attributes of parts

Attributes / Distributions

Release time ~ U[0, 50]
Weight ~ U[0, 1]
Single processing operation: setup time ~ U[5, 10]; processing time ~ U[1, 30]
Batch processing operation: setup time ~ U[10, 35]; processing time ~ U[100, 200]

problem with the n-th parameter setting, and Refi is the objective function value obtained by the proposed algorithm with the standard parameters (ps=50, pc=0.8, pm=0.1, and gm=500) after 25,000 evaluations for each test problem. Since no lower bounds are available from benchmark test sets, the method to obtain Refi in Ruiz and Maroto (2006) is adopted in this paper.
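The response variable of Eq. (26) can be computed as below (a sketch with illustrative names):

```python
def increase_over_best(setting_vals, ref_vals):
    """Eq. (26): average percentage increase of a parameter setting's
    objective values (HSGA_ni) over the reference values Ref_i."""
    n = len(ref_vals)
    return sum((h - r) / (n * r)
               for h, r in zip(setting_vals, ref_vals)) * 100

# A setting that is 10 % worse than the reference on both test problems:
iob = increase_over_best([110.0, 220.0], [100.0, 200.0])  # 10.0
```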

It is observed from Table 4 that there is a significant difference for all the main factors. Figure 2 shows that when pc=0.6, the HSGA performs better with the growth of ps, pm and gm. In addition, the two-factor interaction effects are shown in Table 4. As observed in Table 4, the interaction between ps and pc is significant, which means that the population size heavily depends on the crossover probability, and so do the interactions ps*pm, ps*gm and pm*gm (Fig. 3). In order to identify the best combinations of the four pairs of

Table 4 ANOVA table of the HSGA with respect to minimizing Cmax

Source Df Seq SS Adj SS Adj MS F P

ps 3 153.52 153.52 51.17 261.64 0.000

pc 3 2.98 2.98 0.99 5.08 0.002

pm 3 211.25 211.25 70.42 360.05 0.000

gm 2 45.78 45.78 22.89 117.05 0.000

ps*pc 9 4.87 4.87 0.54 2.77 0.005

ps*pm 9 118.40 118.40 13.16 67.27 0.000

ps*gm 6 25.30 25.30 4.21 21.56 0.000

pc*pm 9 1.56 1.56 0.17 0.88 0.542

pc*gm 6 1.22 1.23 0.21 1.05 0.398

pm*gm 6 22.95 22.95 3.83 19.56 0.000

Error 135 26.40 26.40 0.20

Total 191 614.25

S = 0.442242 R-Sq = 95.70 % R-Sq(adjusted) = 93.92 %

Table 3 Parameters of the HSGA

Factors: ps / pc / pm / gm

Levels: 6, 12, 24, 48 / 0.05, 0.30, 0.60, 0.90 / 0.00, 0.02, 0.10, 0.18 / 25, 50, 100


Fig. 2 Main effects for single factors ps, pc, pm and gm with respect to minimizing Cmax

Fig. 3 Main effects for single factors ps, pc, pm and gm with respect to minimizing TWT

factors, it should be investigated how each interaction affects the performance of the HSGA. Take ps*gm as an instance: as indicated in Fig. 4, when gm=100 and ps=48, the HSGA performs best. As for ps*pm, the HSGA obtains its best performance when ps=48 and pm=0.18. In summary, when ps=48, pc=0.6, pm=0.18, and gm=100, the HSGA provides the best performance with respect to minimizing Cmax. Similarly, as shown in Table 5 and Figs. 3 and 5, when ps=48, pc=0.9, pm=0.18, and gm=100, the HSGA provides the best performance with respect to minimizing TWT. These values are set as defaults in the following experiments.

Comparison experiments

Three groups of experiments are designed to evaluate the effectiveness and efficiency of the HSGA: (1) comparison between the HSGA and combinatorial heuristic rules; (2) comparison between the HSGA and the approach proposed by Yang et al. (2007), denoted GA_Y in this paper; and (3) comparison between the HSGA and the CPLEX.

Comparison with combinatorial heuristic rules

We design 165 heuristics of combinatorial rules for comparison with the HSGA for each of the two objectives. In these heuristics, each subproblem, i.e., assignment subprob-

Fig. 4 Two-factor interaction effects with respect to minimizing Cmax


Table 5 ANOVA table of the HSGA with respect to minimizing TWT

Source Df Seq SS Adj SS Adj MS F P

ps 3 9558.40 9558.40 3186.13 5825.42 0.000

pc 3 1179.42 1179.42 393.14 718.80 0.000

pm 3 672.25 672.25 224.08 409.71 0.000

gm 2 321.26 321.26 160.63 293.69 0.000

ps*pc 9 19.86 19.86 2.21 4.04 0.000

ps*pm 9 145.01 145.01 16.11 29.46 0.000

ps*gm 6 45.94 45.94 7.66 14.00 0.000

pc*pm 9 128.05 128.05 14.23 26.01 0.000

pc*gm 6 2.99 2.99 0.50 0.91 0.490

pm*gm 6 58.09 58.09 9.68 17.70 0.000

Error 135 73.84 73.84 0.55

Total 191 12205.11

S = 0.739551 R-Sq = 99.40 % R-Sq(adjusted) = 99.14 %

lem, sequencing subproblem and batch formation subproblem, is allocated to a heuristic rule. For convenience in illustrating the experimental results, only the five heuristics with the best performance for each objective are selected and presented in this subsection, as listed in Table 6.

Experimental results show that the HSGA provides better performance than the other approaches in all test problems. As indicated in Table 7, the HSGA outperforms the other approaches with an average gap of 23.53 % with respect to minimizing Cmax. As indicated in Table 8, the HSGA outperforms the other approaches with an average gap of 55.98 % with respect to minimizing TWT. Therefore, taking advantage of the search ability of the GA, the HSGA generates suitable combinatorial rules.

Comparison between different encoding schemes

The algorithm GA_Y (Yang et al. 2007) designed its encoding scheme for stages, rather than for machines. We implement the GA_Y in order to evaluate the performance of the HSGA, and the population size and maximum generation of the GA_Y are set the same as those of the HSGA for the sake of fairness. In the GA_Y, wTi represents the total weighted tardiness of the i-th chromosome and Fi represents the corresponding value of the fitness function, which is calculated in (27).

Fi = max{wTi } − wTi (27)

It was stated by Yang et al. that Fi distinguishes the fitness among chromosomes in the selection procedure better than the conventional operation. However, experimental results show that this scheme does not perform well and is even worse than some combinatorial rules, as indicated in Tables 9 and 10. The reason is that the fitness function proposed by Yang et al. is not suitable for the problem addressed in this paper. Therefore, the original fitness function is replaced by (23), based on which we implement an improved version of GA_Y, denoted GA_YI. It is observed in Tables 9 and 10 that

Fig. 5 Two-factor interaction effects with respect to minimizing TWT


Table 6 Combinatorial heuristic rules for comparison with the HSGA

For minimizing Cmax (sequencing rule / assignment rule / batch formation rule / notation):
SPTR / MA / SPT / SPMS
WSPT / MA / SPT / WSMS
MS / MA / EDD / MME
SPT / MA / SPT / SMS
MS / MA / FIFO / MMF

For minimizing TWT (sequencing rule / assignment rule / batch formation rule / notation):
SRPT / SPT / SPT / SRSS
SPT / SPT / SPT / SSS
SPTR / SPT / SPT / SPSS
WSPT / SPT / SPT / WSSS
SRPT / SPT / EDD / SRSE

Table 7 Comparison between the HSGA and combinatorial heuristic rules with respect to minimizing Cmax

Test problems HSGA SPMS WSMS MME SMS MMF AVG_GAP %

j10m8s3 377.32 502.20 502.10 420.50 495.20 424.00 24.24

j15m8s3 431.16 577.40 581.90 522.50 597.90 509.40 29.38

j20m11s3 430.64 630.80 649.70 529.20 629.30 528.90 37.84

j25m11s3 523.98 698.60 674.60 597.00 683.60 598.40 24.13

j30m7s3 933.54 1185.90 1191.50 1122.70 1183.90 1110.10 24.13

j35m7s3 1096.22 1370.30 1358.00 1251.60 1392.00 1238.70 20.61

j40m13s5 1075.18 1281.10 1293.80 1155.60 1276.70 1148.00 14.50

j45m13s5 1170.6 1333.00 1360.60 1273.30 1359.20 1268.60 12.67

j50m15s5 1253.86 1728.90 1786.70 1655.10 1765.70 1642.90 36.85

j55m15s5 1347.70 1752.40 1792.10 1717.10 1775.50 1745.60 30.34

j60m16s5 1136.94 1495.90 1492.30 1410.20 1463.90 1421.90 28.14

j65m16s5 1225.90 1552.20 1586.70 1394.70 1549.60 1401.10 22.10

j70m20s7 1293.78 1559.20 1622.30 1468.80 1541.10 1499.70 18.89

j75m20s7 1365.20 1706.30 1706.00 1879.00 1650.20 1935.80 30.05

j80m21s7 1902.60 2224.50 2226.30 1955.70 2250.80 1958.10 11.59

j85m21s7 2022.90 2295.40 2332.50 2190.10 2299.50 2188.60 11.78

j90m21s7 2110.26 2664.80 2584.50 2597.10 2651.20 2600.10 24.13

j95m21s7 2225.58 2747.00 2803.80 2644.60 2724.60 2668.10 22.11

Average 1217.96 1516.99 1530.30 1432.49 1516.11 1438.22 23.53

the GA_YI improves the performance by an average of 8.33 % with respect to minimizing Cmax, and by an average of 44.79 % with respect to minimizing TWT, as compared with the GA_Y. It is also indicated in Tables 9 and 10 that the HSGA outperforms the GA_YI in all 18 test problems. The gap is 2.63 and 15.70 % on average with respect to minimizing Cmax and TWT, respectively. This is because the HSGA considers various scheduling information of each machine, such as the machine capacity, processing rate, and the number of parts waiting in buffers. Experimental results indicate that this information has a great impact on the scheduling performance.

Comparison with the CPLEX

In order to evaluate the effectiveness of the proposed algorithm, a comparison between the HSGA and the CPLEX 12.4 is conducted. The running time of the CPLEX is limited to 6 h. As shown in Table 11, over all the test problems, the average gap for minimizing Cmax is −49.35 % and that for minimizing TWT is −89.71 %, which indicates that the HSGA outperforms the CPLEX significantly. In addition, no feasible solution can be obtained by the CPLEX within the time limit for the test problems larger than j50m15s5, while near-optimum solutions can be obtained by the HSGA with computational times ranging from 0.2 to 7.4 s, indicating the computational efficiency of the HSGA.

Since these test problems are too complex for the CPLEX, we redesign 6 small test problems especially for the comparison with the CPLEX, in which the number of batch processing machines is reduced to one. As shown in Table 12, for minimizing Cmax, the average gap between the HSGA and the CPLEX is 0.84 %, which indicates that the optimization performance of the HSGA is comparable with that of the CPLEX.


Table 8 Comparison between the HSGA and combinatorial heuristic rules with respect to minimizing TWT

Test problems HSGA SRSS SSS SPSS WSSS SRSE AVG_GAP %

j10m8s3 0 0 0 1.26 2.64 0 –

j15m8s3 0 0.10 2.02 2.02 0 0 –

j20m11s3 0 0.42 0.43 0.74 0 0 –

j25m11s3 0 30.17 41.99 48.74 43.02 20.15 –

j30m7s3 636.21 1413.60 1513.81 1441.69 1456.43 1256.42 122.63

j35m7s3 1328.80 2185.58 2325.66 2196.65 2470.44 2183.21 71.00

j40m13s5 847.16 1468.47 1468.65 1499.46 2238.41 1575.99 94.79

j45m13s5 1842.96 2439.3 2404.36 2573.14 2796.57 2375.15 36.61

j50m15s5 2255.43 4272.22 4355.79 4695.69 4826.45 4595.27 101.69

j55m15s5 3294.65 4682.16 4661.97 5190.46 5854.68 5355.18 56.28

j60m16s5 1820.74 2890.95 2952.33 3359.64 3333.28 3112.55 71.89

j65m16s5 2496.32 3329.60 3226.96 3537.034 3750.86 3348.79 37.75

j70m20s7 2760.78 3440.45 3475.80 4169.93 4004.39 3075.31 31.60

j75m20s7 3997.12 6535.32 6702.00 7698.29 7702.76 6885.49 77.75

j80m21s7 10801.10 12513.70 12356.01 12284.69 14866.03 14080.91 22.40

j85m21s7 14457.78 16133.40 16279.15 15995.86 19034.60 18372.18 18.71

j90m21s7 15087.08 17775.28 17680.93 19268.60 20156.03 19044.72 24.51

j95m21s7 18528.34 19734.21 20010.27 22218.46 22705.28 22887.25 16.10

Average 4453.03 5491.38 5525.45 5899.02 6402.33 6009.37 55.98

Table 9 Comparison between the HSGA, GA_Y and GA_YI with respect to minimizing Cmax

Test problems GA_Y GA_YI HSGA Gap between GA_YI and GA_Y (%) Gap between HSGA and GA_YI (%)

j10m8s3 404.37 381.03 377.32 6.12 0.98

j15m8s3 488.10 448.07 431.16 8.93 3.92

j20m11s3 483.63 456.97 430.64 5.84 6.11

j25m11s3 598.67 526.27 523.98 13.76 0.44

j30m7s3 1000.37 950.23 933.54 5.28 1.79

j35m7s3 1161.47 1120.53 1096.22 3.65 2.22

j40m13s5 1158.37 1101.10 1075.18 5.20 2.41

j45m13s5 1270.90 1195.73 1170.60 6.29 2.15

j50m15s5 1485.80 1316.00 1253.86 12.90 4.96

j55m15s5 1582.20 1409.40 1347.70 12.26 4.58

j60m16s5 1274.97 1173.60 1136.94 8.64 3.22

j65m16s5 1391.00 1275.23 1225.90 9.08 4.02

j70m20s7 1481.90 1312.23 1293.78 12.93 1.43

j75m20s7 1577.30 1393.50 1365.20 13.19 2.07

j80m21s7 2034.50 1918.23 1902.60 6.06 0.82

j85m21s7 2183.80 2048.17 2022.90 6.62 1.25

j90m21s7 2307.90 2152.10 2110.26 7.24 1.98

j95m21s7 2428.13 2291.80 2225.58 5.95 2.98

Average 1350.74 1248.34 1217.96 8.33 2.63

For minimizing TWT, though the CPLEX performs better than the HSGA, with the gap ranging from 0.18 to 16.62 %, the computational time of the HSGA is within 1 s, as compared with 6 h for the CPLEX.

It is also observed that the gaps for minimizing TWT are always larger than those for minimizing Cmax. The reason lies in the objective function itself. Cmax is related only to the last completed part, so even when the


Table 10 Comparison between the HSGA, GA_Y and GA_YI with respect to minimizing TWT

Test problems GA_Y GA_YI HSGA Gap between GA_YI and GA_Y (%) Gap between HSGA and GA_YI (%)

j10m8s3 0 0 0 – –

j15m8s3 0 0 0 – –

j20m11s3 0 0 0 – –

j25m11s3 0 0 0 – –

j30m7s3 1166.37 833.86 636.21 39.88 31.07

j35m7s3 2081.01 1422.74 1328.80 46.27 7.07

j40m13s5 1790.06 1053.63 847.16 69.89 24.37

j45m13s5 3038.28 2077.50 1842.96 46.25 12.73

j50m15s5 4156.73 2953.87 2255.43 40.72 30.97

j55m15s5 5935.80 3821.84 3294.65 55.31 16.00

j60m16s5 3752.39 2100.06 1820.74 78.68 15.34

j65m16s5 4365.95 2946.77 2496.32 48.16 18.04

j70m20s7 5233.14 3230.44 2760.78 61.99 17.01

j75m20s7 7214.17 4727.10 3997.12 52.61 18.26

j80m21s7 13636.13 11645.46 10801.10 17.09 7.82

j85m21s7 18104.96 15032.33 14457.78 20.44 3.97

j90m21s7 20968.67 16028.82 15087.08 30.82 6.24

j95m21s7 24439.44 20539.85 18528.34 18.99 10.86

Average 6437.95 4911.90 4453.03 44.79 15.70

Table 11 Comparison between the HSGA and the CPLEX with multiple batch processing machines

Test problems CPLEX HSGA Cmax Gap (%) TWT Gap (%)

Cmax Time (h:m:s) TWT Time (h:m:s) Cmax Time (h:m:s) TWT Time (h:m:s)

j10m8s3 692.00 6:0:0.0 6.28 6:0:0.0 377.32 0.20 0.00 0:0:0.20 −45.47 −100.00

j15m8s3 851.00 6:0:0.0 54.11 6:0:0.0 431.16 0.31 0.00 0:0:0.32 −49.33 −100.00

j20m11s3 1170.00 6:0:0.0 761.75 6:0:0.0 430.64 0.46 0.00 0:0:0.44 −63.19 −100.00

j25m11s3 1476.00 6:0:0.0 1289.79 6:0:0.0 523.98 0.59 0.00 0:0:0.58 −64.50 −100.00

j30m7s3 1744.00 6:0:0.0 4355.03 6:0:0.0 933.54 0.80 636.21 0:0:0.77 −46.47 −85.39

j35m7s3 1909.00 6:0:0.0 5290.22 6:0:0.0 1096.22 1.00 1328.80 0:0:0.95 −42.58 −74.88

j40m13s5 1774.00 6:0:0.0 4616.65 6:0:0.0 1075.18 1.78 847.16 0:0:1.65 −39.39 −81.65

j45m13s5 2084.00 6:0:0.0 7603.51 6:0:0.0 1170.60 2.06 1842.96 0:0:1.95 −43.83 −75.76

j50m15s5 – – – – 1253.86 2.47 2255.43 0:0:2.38 – –

j55m15s5 – – – – 1347.70 2.75 3294.65 0:0:2.79 – –

j60m16s5 – – – – 1136.94 2.71 1820.74 0:0:2.68 – –

j65m16s5 – – – – 1225.90 3.12 2496.32 0:0:3.02 – –

j70m20s7 – – – – 1293.78 4.96 2760.78 0:0:4.76 – –

j75m20s7 – – – – 1365.20 5.34 3997.12 0:0:5.24 – –

j80m21s7 – – – – 1902.60 5.39 10801.10 0:0:5.49 – –

j85m21s7 – – – – 2022.90 5.92 14457.78 0:0:5.88 – –

j90m21s7 – – – – 2110.26 6.89 15087.08 0:0:6.91 – –

j95m21s7 – – – – 2225.58 7.55 18528.34 0:0:7.40 – –

Average −49.35 −89.71


Table 12 Comparison between the HSGA and the CPLEX with one batch processing machine

Test problems CPLEX HSGA Cmax Gap (%) TWT Gap (%)

Cmax Time (h:m:s) TWT Time (h:m:s) Cmax Time (h:m:s) TWT Time (h:m:s)

j5m5s3 547.00 6:0:0.0 240.85 6:0:0.0 513.00 6:0:0.10 241.29 6:0:0.09 −6.22 0.18

j8m6s3 718.00 6:0:0.0 813.97 6:0:0.0 712.00 6:0:0.15 872.34 6:0:0.15 −0.84 7.17

j10m7s3 683.00 6:0:0.0 993.29 6:0:0.0 698.00 6:0:0.17 1087.51 6:0:0.18 2.20 9.49

j12m9s3 789.00 6:0:0.0 1736.59 6:0:0.0 829.00 6:0:0.35 1922.61 6:0:0.34 5.07 10.71

j16m10s5 1145.00 6:0:0.0 2202.12 6:0:0.0 1166.00 6:0:0.53 2568.2 6:0:0.47 1.83 16.62

j18m12s5 1269.00 6:0:0.0 4026.40 6:0:0.0 1307.00 6:0:0.62 4499.56 6:0:0.62 2.99 11.75

Average 0.84 9.32

completion time of other parts is improved, Cmax may remain unchanged. However, TWT is related to all the parts, so an improvement in the completion time of any part makes a positive contribution to the TWT. Therefore, the improvement for minimizing TWT is more obvious.

In summary, the experimental results show that the HSGA achieves optimization performance and computational efficiency simultaneously (Table 12).

Sensitivity analysis

In the comparison experiments presented above, among all the heuristics of combinatorial rules, two of them, i.e., SPMS and SRSS, outperform the others on the whole. Therefore, SPMS is adopted as the baseline for the robustness comparison with respect to minimizing Cmax, and SRSS with respect to minimizing TWT. The sensitivity analysis is conducted in this subsection under different release time and due date settings, respectively.

Patterns for release time generation

Four different patterns for release time generation are considered to evaluate the robustness of the HSGA, denoted uni50, uni100, exp25 and exp50, respectively. The first two patterns are sampled from discrete uniform distributions on the intervals [0, 50] and [0, 100] respectively, and the other two follow exponential distributions with mean values 25 and 50 respectively. Each run of the simulation includes 5 replications.

It is clearly observed from Table 13 that the HSGA significantly outperforms SPMS in all test problems. As shown in Table 13, the average gaps between the maximum completion times obtained by the HSGA and SPMS for uni50, uni100, exp25 and exp50 are 26.71, 25.57, 24.34, and 26.10 %, respectively, which indicates that for the objective of minimizing Cmax, the HSGA is robust to the changing release time patterns.

Table 13 The gaps between the Cmax obtained by the HSGA and SPMS with different release time patterns

Test problems uni50 (Gap %) uni100 (Gap %) exp25 (Gap %) exp50 (Gap %)

j10m8s3 29.95 47.11 24.11 34.60

j15m8s3 39.92 29.57 36.44 29.71

j20m11s3 42.15 34.21 16.05 43.10

j25m11s3 35.68 33.02 31.13 37.36

j30m7s3 30.18 30.13 27.37 21.87

j35m7s3 11.71 8.36 7.25 23.15

j40m13s5 24.56 21.00 16.70 16.23

j45m13s5 10.83 12.30 27.88 8.11

j50m15s5 45.63 39.26 40.47 28.69

j55m15s5 18.94 29.80 35.61 44.88

j60m16s5 29.19 24.48 37.39 24.98

j65m16s5 22.10 33.70 15.10 29.69

j70m20s7 23.25 29.90 13.98 22.12

j75m20s7 16.97 16.03 15.82 21.38

j80m21s7 19.53 22.14 20.36 21.07

j85m21s7 30.45 13.71 23.10 15.74

j90m21s7 18.80 19.02 29.36 26.75

j95m21s7 31.01 16.61 19.98 20.30

Average 26.71 25.57 24.34 26.10

It is shown in Table 14 that the average gaps between the total weighted tardiness obtained by the HSGA and SRSS for uni50, uni100, exp25 and exp50 are 71.06, 55.44, 67.43 and 56.33 % respectively, which indicates that for the objective of minimizing TWT, the HSGA is robust to the changing release time patterns.

Due date settings

This experiment is conducted only with respect to minimizing TWT, since the due date settings are not related to minimizing Cmax. The due dates are generated according to (25), and the due date level, denoted dl, is selected from {1, 2, 3, 4},


Table 14 The gaps between the TWT obtained by the HSGA and SRSS with different release time patterns

Test problems uni50 (Gap %) uni100 (Gap %) exp25 (Gap %) exp50 (Gap %)

j10m8s3 0.00 0.00 0.00 0.00

j15m8s3 0.00 0.00 0.00 0.00

j20m11s3 0.00 0.00 0.00 0.00

j25m11s3 0.00 0.00 0.00 0.00

j30m7s3 115.97 65.70 74.11 48.83

j35m7s3 23.59 9.42 28.33 34.09

j40m13s5 53.46 75.51 72.82 70.80

j45m13s5 34.15 8.41 35.56 19.59

j50m15s5 223.84 221.74 285.06 234.33

j55m15s5 121.45 100.26 80.45 131.23

j60m16s5 276.78 197.99 222.83 106.02

j65m16s5 134.89 71.58 107.56 107.35

j70m20s7 68.24 64.00 67.93 96.10

j75m20s7 141.15 133.41 146.41 92.12

j80m21s7 12.18 7.44 7.65 8.37

j85m21s7 7.62 7.35 4.02 2.87

j90m21s7 0.28 4.39 13.57 2.03

j95m21s7 65.45 47.64 67.48 64.29

Average 71.06 55.44 67.43 56.33

representing tight, medium, moderate and loose due date settings, respectively.

It is clear from Table 15 that the HSGA outperforms SRSS under different due date settings. It is also observed that the gap between the total weighted tardiness obtained by the HSGA and SRSS increases as dl increases, which indicates that the HSGA tends to achieve better performance as the due dates are loosened.

Computational efficiency

In order to evaluate the computational efficiency of the HSGA, we develop a classical meta-heuristic GA (MHGA), which operates directly on the search space of scheduling solutions, as a comparison with the HSGA. The MHGA adopts a 3-segment chromosome and performs a two-point crossover operator and a one-point mutation operator, the same as the HSGA.

For the sake of fairness, we record the optimal values of the objective function obtained by the MHGA after 100 generations and the corresponding CPU time, which is compared with the CPU time taken by the HSGA to reach the same or lower values of the objective functions. The computational results are presented in Tables 16 and 17 with respect to minimizing Cmax and TWT, respectively.

As shown in Table 16, with respect to minimizing Cmax , toobtain the same optimal values of the objective function, the

Table 15 The gaps between the TWT obtained by the HSGA and SRSS with different due date settings

Test problems dl=1 (Gap %) dl=2 (Gap %) dl=3 (Gap %) dl=4 (Gap %)

j10m8s3 25.69 0.00 0.00 0.00

j15m8s3 21.95 5924.74 0.00 0.00

j20m11s3 4.53 587.65 0.00 0.00

j25m11s3 32.81 888.77 0.00 0.00

j30m7s3 8.04 10.47 16.87 317.12

j35m7s3 0.90 9.83 14.06 67.04

j40m13s5 13.45 31.53 48.69 262.82

j45m13s5 7.72 16.20 38.83 419.63

j50m15s5 68.72 112.83 263.64 938.17

j55m15s5 35.88 61.90 109.92 355.86

j60m16s5 26.88 66.22 174.19 2252.28

j65m16s5 32.50 60.19 132.64 473.89

j70m20s7 25.57 41.13 52.72 512.34

j75m20s7 41.31 70.28 170.90 488.80

j80m21s7 1.75 1.69 1.02 0.56

j85m21s7 4.31 5.73 4.14 5.15

j90m21s7 7.87 7.54 15.95 24.33

j95m21s7 22.98 37.25 62.99 90.38

Average 21.27 440.78 61.48 344.91

Table 16 Comparison of CPU times between the HSGA and the meta-heuristic GA with respect to minimizing Cmax

Test problems Cmax MHGA CPU time (ms) HSGA CPU time (ms) CPU time gap %

j10m8s3 380.97 490.67 350.01 40.19

j15m8s3 453.73 781.97 430.34 81.71

j20m11s3 471.37 1026.57 515.44 99.16

j25m11s3 549.83 1353.97 900.67 50.33

j30m7s3 999.97 1917.37 925.61 107.15

j35m7s3 1165.70 2363.60 1563.00 51.22

j40m13s5 1146.43 4067.63 2416.33 68.34

j45m13s5 1268.20 4665.03 2886.65 61.61

j50m15s5 1353.63 5752.00 3523.43 63.25

j55m15s5 1446.63 6526.90 4034.02 61.80

j60m16s5 1272.93 6468.33 3020.96 114.12

j65m16s5 1340.40 6983.07 4055.40 72.19

j70m20s7 1416.67 11609.90 6504.06 78.50

j75m20s7 1471.40 13247.07 7213.08 83.65

j80m21s7 2100.97 12393.27 7002.32 76.99

j85m21s7 2225.33 14039.43 7853.87 78.76

j90m21s7 2288.93 16377.17 9758.39 67.83

j95m21s7 2371.27 18021.47 10941.91 64.70

Average 73.42


Table 17 Comparison of CPU times between the HSGA and the meta-heuristic GA with respect to minimizing TWT

Test problems TWT MHGA CPU time (ms) HSGA CPU time (ms) CPU time gap %

j10m8s3 0.00 – – –

j15m8s3 0.00 – – –

j20m11s3 0.00 – – –

j25m11s3 0.00 – – –

j30m7s3 921.41 1966.77 696.66 182.31

j35m7s3 1487.39 2481.70 1597.87 55.31

j40m13s5 1431.19 4259.53 1123.60 279.10

j45m13s5 2579.27 4992.77 1636.34 205.12

j50m15s5 3649.72 6146.83 1859.20 230.62

j55m15s5 4683.52 7031.60 2311.76 204.17

j60m16s5 3079.28 6931.90 1732.67 300.07

j65m16s5 4199.07 7800.07 2042.87 281.82

j70m20s7 4828.14 12902.43 2577.20 400.64

j75m20s7 6700.87 12670.13 2814.05 350.25

j80m21s7 15783.97 12315.83 6800.31 81.11

j85m21s7 19807.11 12896.50 3312.23 289.36

j90m21s7 20409.07 15608.57 9225.10 69.20

j95m21s7 23982.50 16811.37 9293.40 80.90

Average 215.00

MHGA spends much more time than the HSGA, with the gap ranging from 50.33 to 114.12 % and averaging 73.42 %, which indicates that the HSGA has significantly higher computational efficiency than the MHGA.

As shown in Table 17, with respect to minimizing TWT, the optimal objective value for the first four test problems is 0, and both the HSGA and the MHGA reach this optimum with their initial solutions. The MHGA therefore does not actually run for 100 generations, and these four test problems are omitted. For all the other test problems, the gap between the CPU times of the HSGA and the MHGA averages 215.00 %, with a maximum of 400.64 % and a minimum of 55.31 %. Evidently, when the scheduling objective is minimizing TWT, the gaps are larger than for minimizing Cmax. The reason is that it is easier for the HSGA to obtain the same objective function value as the MHGA with respect to minimizing TWT, since TWT is more sensitive to the improvement of solutions, as analyzed in Sect. 4.3.3.
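The 215.00 % average is consistent with the fourteen per-problem gaps listed in Table 17 (the rows with a nonzero TWT); a quick check:

```python
# Per-problem CPU-time gaps (%) taken from Table 17, j30m7s3 through j95m21s7.
table17_gaps = [182.31, 55.31, 279.10, 205.12, 230.62, 204.17,
                300.07, 281.82, 400.64, 350.25, 81.11, 289.36,
                69.20, 80.90]

avg = sum(table17_gaps) / len(table17_gaps)
print(f"{avg:.2f}")  # 215.00, matching the reported average
```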

In summary, as compared with meta-heuristics that operate directly on the search space of scheduling solutions, the HSGA has a significant advantage in computational efficiency, especially when the scheduling objective is minimizing TWT.

Conclusion

The problem of scheduling a multi-stage HFS with single processing machines and batch processing machines is considered in this paper. In the proposed HSGA, a GA is adopted to search for appropriate heuristic rules for machines and parts simultaneously, and parts and machines are then scheduled using the obtained combinatorial heuristic rules. According to the comparison experiments, the HSGA outperforms the combinatorial heuristic rules and other heuristic-search approaches using different encoding schemes. Moreover, compared with the CPLEX, the HSGA provides significantly better solutions for the test problems with multiple batch processing machines, and for the test problems with only one batch processing machine the average gaps between the HSGA and the CPLEX are only 0.84 and 9.32 % with respect to minimizing Cmax and minimizing TWT, respectively. According to the sensitivity analysis, the HSGA is robust to changing release time patterns and due date settings. In addition, the experiments on computational efficiency indicate that the HSGA has a significant advantage over meta-heuristics operating directly on the search space of scheduling solutions, and it is therefore suitable for solving large-scale scheduling problems.
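The core idea summarized above (the GA evolves short vectors of rule choices rather than explicit schedules) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; the rule names and the per-stage gene layout are hypothetical placeholders:

```python
import random

# Hypothetical rule pools (placeholders, not the paper's actual rule sets):
ASSIGNMENT_RULES = ["earliest_available", "least_loaded"]   # assign parts to machines
SEQUENCING_RULES = ["SPT", "EDD", "FIFO"]                   # order queued parts
BATCHING_RULES   = ["full_batch", "greedy_fill"]            # form batches on batch machines

def random_chromosome(n_stages):
    """One (assignment, sequencing, batching) rule index per stage.

    The GA evolves such vectors and evaluates each by simulating the shop;
    the search space is the product of the rule counts per stage, far
    smaller than the space of explicit schedules.
    """
    return [(random.randrange(len(ASSIGNMENT_RULES)),
             random.randrange(len(SEQUENCING_RULES)),
             random.randrange(len(BATCHING_RULES)))
            for _ in range(n_stages)]

print(random_chromosome(3))
```

With 2 × 3 × 2 = 12 rule combinations per stage, a three-stage instance has only 12³ = 1728 chromosomes, which illustrates why searching over rules is cheaper than searching over schedules.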

Acknowledgments This work was supported by the Natural Science Foundation of Beijing (4122069).
