
European Journal of Operational Research 188 (2008) 140–152

Production, Manufacturing and Logistics

Machine-part cell formation in group technology using a modified ART1 method

Miin-Shen Yang *, Jenn-Hwai Yang
Department of Applied Mathematics, Chung Yuan Christian University, Chung-Li 32023, Taiwan

Received 17 August 2004; accepted 9 March 2007; available online 29 April 2007

* Corresponding author. Tel.: +886 3 265 3119; fax: +886 3 265 3199. E-mail address: [email protected] (M.-S. Yang).

    Abstract

Group Technology (GT) is a useful way of increasing productivity for manufacturing high quality products and improving the flexibility of manufacturing systems. Cell formation (CF) is a key step in GT. It is used in designing good cellular manufacturing systems using the similarities between parts in relation to the machines in their manufacture, and it can identify part families and machine groups. Recently, neural networks (NNs) have been widely applied in GT due to their robust and adaptive nature. NNs are very suitable in CF with a wide variety of real applications. Although Dagli and Huggahalli adopted the ART1 network with an application in machine-part CF, there are still several drawbacks to this approach. To address these concerns, we propose a modified ART1 neural learning algorithm. In our modified ART1, the vigilance parameter can be simply estimated from the data, so that it is more efficient and reliable than Dagli and Huggahalli's method for selecting a vigilance value. We then apply the proposed algorithm to machine-part CF in GT. Several examples are presented to illustrate its efficiency. In comparison with Dagli and Huggahalli's method, based on the performance measure proposed by Chandrasekaran and Rajagopalan, our modified ART1 neural learning algorithm provides better results. Overall, the proposed algorithm is vigilance parameter-free and very efficient to use in CF with a wide variety of machine/part matrices.

© 2007 Elsevier B.V. All rights reserved.

    Keywords: Group technology; Cell formation; ART1 neural network; Learning algorithm; Group efficiency

    1. Introduction

Profit in manufacturing can be achieved by lowering costs and improving product quality. There are some general guidelines for reducing the cost of products without any decrease in quality. These include improving production methods, minimizing flaws, increasing machine utilization and reducing transit and setup time. Research and development (R&D) engineering is the first line of defense in addressing these issues through the design of a unique product and competitive production techniques. Keeping a close watch over the production process is also important in the pursuit of profit. Although the traditional statistical process control (SPC) technique has several merits, control chart pattern recognition has become a popular tool for monitoring abnormalities in the manufacturing process.


Recognizing unnatural control chart patterns in this way not only decreases waste but also prevents defects more efficiently. Many researchers have applied neural network models to the manufacturing process with generally good results (see Chang and Aw, 1996; Cheng, 1997; Guh and Tannock, 1999; Yang and Yang, 2002).

The production process requires a variety of machines and often some complex procedures. Frequently, parts have to be moved from one place to another. This results not only in machine idle time but also wastes the manpower required for the physical movement of the parts. On the other hand, an increasing number of companies are encountering small to medium size production orders. In this situation, more setup changes and more frequent part or machine movements occur. Group technology (GT) has proven to be a useful way of addressing these problems by creating a more flexible manufacturing process. It can be used to exploit similarities between components to achieve lower costs and increase productivity without losing product quality. Cell formation (CF) is a key step in GT. It is a tool for designing cellular manufacturing systems that uses the similarities between parts and machines to obtain part families and machine groups. The parts in the same machine group have similar requirements, reducing travel and setup time.

In CF, a binary machine/part matrix of dimension m × p is usually provided (see Fig. 1(a)). The m rows indicate m machines and the p columns represent p parts. Each binary element in the m × p matrix indicates a relationship between parts and machines, where a 1 (0) represents that the pth part should (should not) be worked on the mth machine. The matrix also displays all similarities in parts and machines. Our objective is to group parts and machines in a cell based on their similarities. If we consider the machine/part matrix shown in Fig. 1(a), the result shown in Fig. 1(b) is obtained by a CF clustering method based on the similarities in parts and machines in the machine/part matrix of Fig. 1(a). Fig. 1(b) demonstrates that parts 1 and 4, and machines 1 and 3, are in one cell, while parts 3, 5 and 2, and machines 2 and 4, are in another cell. In this case there are no 1s outside the diagonal blocks and no 0s inside the diagonal blocks, so we call it a perfect result. That is, the two cells are completely independent, where each part family will be processed only within a machine group. Unfortunately, this perfect result for a machine/part matrix is rarely seen in real situations. On the other hand, another machine/part matrix is shown in Fig. 1(c) with its result in Fig. 1(d). We see that there is a 1 outside the diagonal blocks. In this case, part 3 is called an exceptional part because it is worked on two or more machine groups, and machine 1 is called a bottleneck machine as it processes two or more part families. There is also a 0 inside the diagonal blocks in Fig. 1(d). In this case, it is called a void.

In general, an optimal result for a machine/part matrix by a CF clustering method is desired to satisfy the following two conditions:

(a) to minimize the number of 0s inside the diagonal blocks (i.e., voids);

(b) to minimize the number of 1s outside the diagonal blocks (i.e., exceptional elements).
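As a concrete illustration of conditions (a) and (b), the following sketch counts voids and exceptional elements for a binary machine/part matrix under a given grouping. It is a minimal sketch: the function name, the small 3 × 5 matrix and the cell labels are hypothetical, not the matrices of Fig. 1.

```python
import numpy as np

def count_voids_and_exceptions(A, machine_cell, part_cell):
    """Count voids (0s inside diagonal blocks) and exceptional elements
    (1s outside diagonal blocks) for binary matrix A, given a cell label
    for each machine (row) and each part (column)."""
    voids = exceptions = 0
    m, p = A.shape
    for i in range(m):
        for j in range(p):
            inside = machine_cell[i] == part_cell[j]
            if A[i, j] == 1 and not inside:
                exceptions += 1   # condition (b): a 1 outside its block
            elif A[i, j] == 0 and inside:
                voids += 1        # condition (a): a 0 inside its block
    return voids, exceptions

# Hypothetical example: machines 0-1 and parts 0-2 form cell 0,
# machine 2 and parts 3-4 form cell 1.
A = np.array([[1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1],
              [0, 0, 0, 1, 1]])
print(count_voids_and_exceptions(A, [0, 0, 1], [0, 0, 0, 1, 1]))  # (1, 1)
```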

Based on these optimal conditions, Fig. 1(b) is an optimal result of Fig. 1(a) and Fig. 1(d) is an optimal result of Fig. 1(c).

Fig. 1(a). Machine/part matrix.

    Fig. 1(b). Optimal result of Fig. 1(a).

    Fig. 1(c). Machine/part matrix.

    Fig. 1(d). Optimal result of Fig. 1(c).


There are many CF methods in the literature (see Singh, 1993; Singh and Rajamani, 1996). Some of them use algorithms with certain energy functions or codes to sort the machine/part matrix. Examples include the bond energy algorithm (BEA) (McCormick et al., 1972), rank order clustering (ROC) (King, 1980), modified rank order clustering (MODROC) (Chandrasekaran and Rajagopalan, 1986a) and the direct clustering algorithm (DCA) (Chan and Milner, 1982). Others use similarity-based hierarchical clustering (Mosier, 1989; Wei and Kern, 1989; Gupta and Seifoddini, 1990; Shafer and Rogers, 1993) or a simulated annealing approach (see Xambre and Vilarinho, 2003). Examples of these methods include single linkage clustering (SLC), complete linkage clustering (CLC), average linkage clustering (ALC) and linear cell clustering (LCC). However, these CF methods all assume well-defined boundaries between machine/part cells. These crisp boundary assumptions may fail to fully describe cases where machine/part cell boundaries are fuzzy. This is why fuzzy clustering and fuzzy logic methods are applied in CF (see Xu and Wang, 1989; Chu and Hayya, 1991; Gindy et al., 1995; Narayanaswamy et al., 1996).

Neural networks have been studied for many years and widely applied in various areas. A neural network is a learning scheme that uses mathematical models to simulate biological nervous system operations in parallel. Lippmann (1987) gave a tutorial review on neural computing and surveyed six important neural network models that can be used in pattern classification. In general, neural network models are of three types: feedforward networks (e.g., the multilayer perceptron, see Rumelhart et al., 1986), feedback networks (e.g., the Hopfield network, see Hopfield, 1982) and competitive learning networks (e.g., the self-organizing map (SOM), see Kohonen, 1981; adaptive resonance theory (ART1), see Carpenter and Grossberg, 1988). Both feedforward and feedback networks are supervised. The competitive learning network, on the other hand, is unsupervised. By applying neural network learning, GT becomes more adaptive in a variety of situations. Recently, more research is being conducted in applying neural networks to GT by using backpropagation learning (Kao and Moon, 1991), competitive learning (Malave and Ramachandran, 1991), ART1 (Kaparthi and Suresh, 1992; Dagli and Huggahalli, 1995) and SOM (Venugopal and Narendran, 1994; Guerrero et al., 2002).

Since the competitive learning network is an unsupervised approach, it is very suitable for use in GT. SOM is best used in GT when the neural node number is known a priori, but this number is not usually known in most real cases. It is generally known that ART1 is a competitive learning network with a flexible number of neural nodes, making it better suited to GT than SOM. However, some problems may be encountered in directly applying ART1 to GT. Thus, Dagli and Huggahalli (1995) revised ART1 and then applied it to machine-part CF. Although Dagli and Huggahalli (1995) presented a good application of ART1 to machine-part CF, we find that their method still has several drawbacks. In this paper, we first propose a modified ART1 to overcome these drawbacks and then apply our modified ART1 to machine-part CF in GT. The remainder of the paper is organized as follows. Section 2 reviews the original ART1 together with Dagli and Huggahalli's application to machine-part CF in GT. We describe the drawbacks that arise when ART1 is applied in CF by Dagli and Huggahalli (1995). We then propose a modified ART1 to correct these problems. Several examples with some machine/part matrices are presented and compared in Section 3. Conclusions are made in Section 4.

2. A modified ART1 algorithm for cell formation

Although GT has been studied and used for more than three decades, neural network applications in GT began only during the last 10 years. Neural network learning is beneficial for use in GT in a variety of real cases because it is robust and adaptive. In most neural network models, competitive learning is unsupervised, making it valuable for application in GT. For examples, see the applications from Malave and Ramachandran (1991), Kaparthi and Suresh (1992), Venugopal and Narendran (1994) and Guerrero et al. (2002). In competitive learning, Kohonen's SOM has been widely studied and applied (see Kohonen, 1998, 2001; Guerrero et al., 2002; Lin et al., 2003), but the SOM neural node number needs to be known a priori and this number is, in most real cases, unknown. An appropriate learning algorithm should have the ability to function without being provided the node number. Moreover, the SOM learning system often encounters a stability and plasticity dilemma (Grossberg, 1976). Learning is essential, but a stability mechanism to resist random noise is also important. ART neural networks were proposed to solve this stability and plasticity dilemma (Carpenter and Grossberg, 1987, 1988).


On the other hand, the machine/part matrix data type in GT is binary, making ART1 a good choice. Thus, Kaparthi and Suresh (1992) first applied the following ART1 algorithm to machine-part CF in GT.

ART1 algorithm

Step 1. Give a vigilance parameter ρ and set the initial weights with

t_{ij}(0) = 1, \quad b_{ij}(0) = \frac{1}{1+n}, \quad i = 1, \ldots, n, \; j = 1, \ldots, c,

where t_{ij}(t) is the top-down weight, which represents the centers, and b_{ij}(t) is the bottom-up connective weight between input node i and output node j at time t, used to evaluate matching scores in the training stage.

Step 2. Input a training vector X.

Step 3. Use the bottom-up weights to evaluate matching scores and determine the winner as follows: j* is the winner node when it satisfies node_{j^*} = \max_j \{node_j\}, where node_j = \sum_{i=1}^{n} b_{ij}(t)\, x_i.

Step 4. Set \|X\| = \sum_{i=1}^{n} x_i and \|T_{j^*} \wedge X\| = \sum_{i=1}^{n} t_{ij^*}\, x_i. Test whether the similarity measure satisfies \|T_{j^*} \wedge X\| / \|X\| > \rho. IF the similarity measure is larger than ρ, THEN go to Step 6; ELSE go to Step 5.

Step 5. Disable the node j* so that it will not become a candidate in the next iteration and go to Step 3. If there is no winner node, then activate a new node and go to Step 2.

Step 6. Update the winner as follows:

t_{ij^*}(t+1) = t_{ij^*}(t)\, x_i, \qquad b_{ij^*}(t+1) = \frac{t_{ij^*}(t)\, x_i}{0.5 + \sum_{i=1}^{n} t_{ij^*}(t)\, x_i}.

Step 7. Go to Step 2 until all the training data are inputted.
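A minimal Python sketch of these seven steps may help fix ideas. It assumes binary NumPy row vectors, each containing at least one 1, and for simplicity it creates output nodes on demand instead of pre-initializing a fixed pool of c nodes with t_{ij}(0) = 1 and b_{ij}(0) = 1/(1+n); this yields the same stored pattern once a node is first committed. All names are illustrative.

```python
import numpy as np

def art1(X, rho):
    """Sketch of the ART1 steps above for binary rows of X and vigilance
    rho. Returns one cluster label per input and the stored patterns."""
    T, B, labels = [], [], []        # top-down and bottom-up weights
    for x in X:                      # Step 2: input a training vector
        disabled = set()
        while True:
            # Step 3: winner = node with the largest bottom-up score
            scores = [B[j] @ x if j not in disabled else -np.inf
                      for j in range(len(B))]
            if not scores or np.all(np.isneginf(scores)):
                # Step 5 (no winner left): activate a new node
                T.append(x.copy())
                B.append(x / (0.5 + x.sum()))
                labels.append(len(T) - 1)
                break
            j = int(np.argmax(scores))
            # Step 4: vigilance test ||T_j AND X|| / ||X|| > rho
            if (T[j] * x).sum() / x.sum() > rho:
                T[j] = T[j] * x              # Step 6: logical AND update
                B[j] = T[j] / (0.5 + T[j].sum())
                labels.append(j)
                break
            disabled.add(j)                  # Step 5: disable j, retry
    return labels, T
```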

However, Dagli and Huggahalli (1995) pointed out that directly applying the above ART1 algorithm might present the following drawbacks:

(a) A vector with few 1 elements is called a sparse vector, in contrast to a dense vector. The stored patterns grow sparser as more input data are applied.

(b) The input vector order influences the results. That is, if a sparse vector is input first, the final stored patterns easily grow sparse.

(c) Determination of the vigilance parameter ρ in ART1 is important but always difficult. If the similarity between the winner and the input X is larger than the vigilance parameter, as shown in Step 4 of the ART1 algorithm, the winner is updated; otherwise a new node is activated as a new group center. Obviously, a larger vigilance has more plasticity and generates more groups, whereas a smaller vigilance has greater stability and may result in only one group. Thus a suitable vigilance parameter is very important.

To solve the first two drawbacks, Dagli and Huggahalli (1995) re-ordered the input vectors according to the number of 1s in each vector and applied them to the network in descending order of the number of 1s. Then, when a comparison between two vectors is successful, instead of storing Y, the result of ANDing vectors X and T_j, the vector having the higher number of 1s among Y and T_j must be stored. This can ensure that the stored patterns become denser as the algorithm progresses. To solve the third drawback, Dagli and Huggahalli (1995) first ran a pre-process with the machine/part matrix to determine appropriate vigilance values ρ1 and ρ2 relative to part families and machine groups, and then obtained group numbers N and M, respectively. The values are increased to get different part family and machine group numbers, so that the final vigilance value can be chosen as the one that satisfies N = M.
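The two reordering/storing modifications can be written down directly; the sketch below is ours, with the surrounding ART1 search and vigilance steps omitted. Note that, since Y = X AND T_j can never contain more 1s than T_j itself, the literal rule always keeps T_j, which anticipates the drawback discussed next.

```python
import numpy as np

def dh_present_order(X):
    """First fix: feed the input vectors in descending order of their
    number of 1s (densest first)."""
    return X[np.argsort(-X.sum(axis=1), kind="stable")]

def dh_store(t_j, x):
    """Second fix: after a successful comparison, store whichever of
    Y = X AND T_j and T_j has the higher number of 1s. Because ANDing
    can only remove 1s, Y never outnumbers T_j, so the stored pattern
    stays as dense as the first (densest) committed input: the
    'no learning behavior' criticized in the text below."""
    y = x * t_j                  # logical AND for 0/1 vectors
    return y if y.sum() > t_j.sum() else t_j
```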

However, by further considering the solutions of Dagli and Huggahalli, we find that there are still several drawbacks. The first two modifications from Dagli and Huggahalli (1995) often affect the following input vectors, giving them no opportunity to update the output layer. That is, there is no learning behavior after that. This can be demonstrated by an example with the machine/part matrix shown in Fig. 2, where there are 9 machines and 9 parts. The objective is to identify machine groups. We use {x_1, ..., x_9} to represent the 9 machine data vectors for the machine/part matrix shown in Fig. 2.

Chu and Hayya (1991) pointed out that a better and more reasonable result for the machine groups in the machine/part matrix shown in Fig. 2 should be as follows:


Machine Group 1: 3, 4, 7, and 8.
Machine Group 2: 1 and 5.
Machine Group 3: 2, 6, and 9.

Suppose all competitions satisfy our expectations; then the center of Group 1 is updated in the order of x_8, x_4, x_3, and x_7. As far as the final center results are concerned, Fig. 3 shows that x_4, x_3 and x_7 cannot update the node during the learning process because they are sparser than x_8. This shows that Dagli and Huggahalli's (1995) revision of ART1 is not good enough.

The third modification of ART1 by Dagli and Huggahalli (1995) is similar to a validity index for clustering algorithms, which needs to be run several times to find an optimum cluster number. Thus, the redundant evaluation destroys the original idea of ART1. On the other hand, there may be more than one set of ρ1 and ρ2 for which N = M. We will explain this with an example later. After making a more detailed examination of all steps in the ART1 algorithm, we find that the main problem with an application of ART1 to the machine/part matrix is caused by Step 6. This is because the update process is executed using a logical AND. We show this phenomenon by using the machine/part matrix shown in Fig. 2 in the order of x_3, x_4, x_7, and x_8. The center change results are shown in Fig. 4. We find that the 7th component of the four input vectors is 1 except in x_4, but the final center change result in the 7th component becomes 0. Obviously, this final center vector is unreasonable because there are three 1s among the four input vectors. This is because the center change results in ART1 become sparse after updating.

To prevent ART1 from developing sparse reference vectors after the learning stages, we propose an annealing mechanism that gives a component at 0 an opportunity to approach 1 by replacing the logical AND with the weighted average of the reference vector W_{ij} and the input vector x_i as follows:

W_{ij}(t+1) = \beta\, W_{ij}(t) + (1-\beta)\, x_i. \quad (1)

Here we adopt β = 0.5. Using the update formula (1) with the same example in Fig. 2 and the same order of x_3, x_4, x_7, and x_8, we obtain the center change results shown in Fig. 5. We find that the final center change value of the 7th component has been upgraded to 0.875. The value 0.875 lies between 0 and 1, but it is much closer to 1. The final result seems much more acceptable in the case of three 1s among four input vectors.
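The effect on the 7th component of the Fig. 2 example can be checked in a few lines; the values below follow the text (components 1, 0, 1, 1 for x_3, x_4, x_7, x_8, with β = 0.5):

```python
inputs = [1, 0, 1, 1]        # 7th components of x3, x4, x7, x8
beta = 0.5
w_and, w_avg = inputs[0], float(inputs[0])   # first vector seeds the center
for x in inputs[1:]:
    w_and = w_and and x                      # original ART1: logical AND
    w_avg = beta * w_avg + (1 - beta) * x    # update formula (1)
print(w_and, w_avg)   # 0 0.875: the AND collapses to 0, Eq. (1) keeps 0.875
```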

We already mentioned that the Dagli and Huggahalli (1995) selection method for the vigilance parameter is not reliable. In fact, to enable ART1 to be applied to most real cases of cell formation, the vigilance value should be auto-adjusted from the data structure and information. The distances (similarities) between sample vectors play an important role in deciding the vigilance parameter because data sets have differing degrees of dispersion. If the data are more dispersed, a larger vigilance value is needed to avoid generating too many groups. If the data have less dispersion, a smaller vigilance value should be taken for effective classification. According to this analysis, we may take the data dispersion value as an index in estimating the vigilance parameter.

    Fig. 2. Machine/part matrix.

Fig. 3. Variation of center.

Fig. 4. Variation of center.


Suppose that there are n vectors for training. We adopt the similarity measure with absolute error as an estimator for the vigilance parameter as follows:

\rho = \frac{\sum_{i=1}^{n} \sum_{j=i+1}^{n} |x_i - x_j|}{f(n)}, \qquad f(n) = \sum_{k=1}^{n-1} k. \quad (2)

In general, more data results in more groups, whereas a smaller ρ also generates more groups. Thus, we divide the total absolute error by a monotone increasing function f(n), as shown in Eq. (2), to adjust the similarity measure and make the estimator more flexible.
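Eq. (2) amounts to the mean pairwise absolute distance of the training vectors, since f(n) = 1 + 2 + ... + (n-1) = n(n-1)/2 is exactly the number of vector pairs. A sketch, assuming |x_i - x_j| in Eq. (2) denotes the summed absolute component differences used in Steps 3 and 4 below:

```python
import numpy as np

def estimate_vigilance(X):
    """Estimate rho by Eq. (2) for an (n, d) binary array X, n >= 2:
    total pairwise absolute error divided by f(n) = n(n-1)/2."""
    n = len(X)
    total = sum(np.abs(X[i] - X[j]).sum()
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) // 2)
```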

Thus, a modified ART1 algorithm for CF can be described as follows:

Modified ART1 algorithm

Step 1. Determine the vigilance parameter ρ by (2); set β = 0.5 and assign the first training vector to W_1.

Step 2. Input the training vector X.

Step 3. Calculate the matching score to find the winner node j* by the following equation: node_{j^*} = \min_j \sum_{i=1}^{n} |W_{ij} - x_i|.

Step 4. Test the degree of similarity. IF \sum_{i=1}^{n} |W_{ij^*} - x_i| < \rho, THEN go to Step 6; ELSE go to Step 5.

Step 5. Activate a new node and go to Step 2.

Step 6. Update the winner as follows:

W_{ij^*}(t+1) = \beta\, W_{ij^*}(t) + (1-\beta)\, x_i.

Step 7. Go to Step 2 until all the training data are inputted.
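Putting the pieces together, a minimal sketch of the modified algorithm, reusing estimate_vigilance from the sketch above (all names are ours):

```python
import numpy as np

def modified_art1(X, beta=0.5):
    """Sketch of the modified ART1 steps above for binary rows of X."""
    rho = estimate_vigilance(X)                   # Step 1: rho from Eq. (2)
    W = [X[0].astype(float)]                      # first vector becomes W_1
    labels = [0]
    for x in X[1:]:                               # Step 2
        dists = [np.abs(w - x).sum() for w in W]  # Step 3: matching scores
        j = int(np.argmin(dists))
        if dists[j] < rho:                        # Step 4: similarity test
            W[j] = beta * W[j] + (1 - beta) * x   # Step 6: weighted average
            labels.append(j)
        else:                                     # Step 5: new node
            W.append(x.astype(float))
            labels.append(len(W) - 1)
    return labels, W
```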

We know that Dagli and Huggahalli's method and the proposed modified ART1 algorithm can group data into machine-part cells. To accomplish diagonal blocking for the machine/part matrix, both methods have to group parts into part families and machines into machine groups. However, they may generate different numbers of groups when the algorithms are run separately on parts and on machines. Therefore, we can group the row vectors (machines) and then assign parts to machine groups, or group the column vectors (parts) and then assign machines to part families. Suppose we have already grouped the m machines into k groups; part i is then assigned to family k when the proportion of its operations on machine group k is higher than that on any other machine group.
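The "proportionately higher" rule is stated loosely; one plausible reading, sketched below with illustrative names, assigns each part to the machine group in which the largest share of the group's machines process it:

```python
import numpy as np

def assign_parts(A, machine_group):
    """Assign each part (column of binary A) to the machine group that
    processes it in the proportionately highest degree."""
    k = max(machine_group) + 1
    groups = [[i for i, g in enumerate(machine_group) if g == gid]
              for gid in range(k)]
    family = []
    for j in range(A.shape[1]):
        # share of each group's machines that work on part j
        shares = [A[rows, j].sum() / len(rows) for rows in groups]
        family.append(int(np.argmax(shares)))
    return family
```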

    3. Numerical examples

In order to measure the grouping efficiency of an algorithm for machine-part CF, a performance measure is needed. Due to its simplicity of calculation, the grouping efficiency measure proposed by Chandrasekaran and Rajagopalan (1986b) is the most widely used method. They define the grouping efficiency η as a weighted mean of η1 and η2 as follows:

\eta = \omega\, \eta_1 + (1-\omega)\, \eta_2,

where

\eta_1 = \frac{o - e}{o - e + v}, \qquad \eta_2 = \frac{mp - o - v}{mp - o - v + e}, \qquad 0 \le \omega \le 1,

and

m = number of machines,
p = number of parts,
o = number of 1s in the machine/part matrix,
e = number of 1s outside the diagonal blocks,
v = number of 0s inside the diagonal blocks.

An optimal result should have two features: a higher proportion of 1s inside the diagonal blocks as well as a higher proportion of 0s outside the diagonal blocks. The values of η1 and η2 measure these two features, respectively. Of course, ω allows the designer to modify the emphasis between the two features. Since ω is a weight between η1 and η2, ω = 0.5 is generally suggested and will be used in all of the examples presented next.
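A direct transcription of this measure (the function name is ours), checked against the Fig. 6b counts used in Example 1 below:

```python
def grouping_efficiency(m, p, o, e, v, w=0.5):
    """Grouping efficiency eta = w*eta1 + (1 - w)*eta2 of
    Chandrasekaran and Rajagopalan (1986b), as defined above."""
    eta1 = (o - e) / (o - e + v)
    eta2 = (m * p - o - v) / (m * p - o - v + e)
    return w * eta1 + (1 - w) * eta2

# Fig. 6b in Example 1: m = p = 15, o = 54, e = 8, v = 0.
print(grouping_efficiency(15, 15, 54, 8, 0))   # 0.9776..., i.e. ~97.7%
```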

    Fig. 5. New approach of center variation.


Example 1. In this example, we first use several different machine/part matrices to demonstrate the behavior of the defined grouping efficiency η. We obtain the optimal clustering results for these machine/part matrices using the proposed modified ART1 algorithm, in which some 1s appear outside the diagonal blocks and some 1s are missing from the diagonal blocks. These machine/part matrices and optimal clustering results with grouping efficiencies are shown in Figs. 6(a)-6(d). Fig. 6a illustrates a machine/part matrix with a clustering result without any exceptional element or void and a grouping efficiency of η = 100%. Fig. 6b illustrates another machine/part matrix with a clustering result having 8 exceptional elements and a grouping efficiency of η = 97.7%. Fig. 6c demonstrates a machine/part matrix with a clustering result having 9 voids and a grouping efficiency of η = 91.7%, and finally, Fig. 6d has both exceptional elements and voids, with a grouping efficiency of η = 89.3%. Of course, Fig. 6a has a perfect result without any exceptional element or void, so that a grouping efficiency of η = 100% is obtained. For Fig. 6b, we have m = 15, p = 15, o = 54, e = 8, and v = 0. We find

\eta_1 = \frac{54 - 8}{54 - 8 + 0} = 1 \quad \text{and} \quad \eta_2 = \frac{15 \times 15 - 54 - 0}{15 \times 15 - 54 - 0 + 8} = 0.9553.

Thus, we have the grouping efficiency η = 0.5η1 + 0.5η2 ≈ 97.7%. Similarly, the grouping efficiencies for Figs. 6c and 6d are 91.7% and 89.3%, respectively. Our proposed modified ART1 method obtains the optimal clustering results for these different machine/part matrices, with the advantage that the number of groups need not be given and is automatically generated from the data.

Fig. 6a. Machine/part matrix and final clustering result with grouping efficiency = 100%.

Fig. 6b. Machine/part matrix and final clustering result with grouping efficiency = 97.7%.


Moreover, the vigilance parameter is estimated automatically in our modified ART1 algorithm. Comparing the grouping efficiencies of Figs. 6a-6d, we find that the more exceptional elements or voids there are in the final clustering results, the lower the grouping efficiency.

Example 2. This example uses a machine/part matrix with 35 parts and 28 machines, as shown in Fig. 7. We use this data set to compare the results from our method with those from Dagli and Huggahalli (1995). The pre-process results for determining a suitable vigilance based on the Dagli and Huggahalli (1995) method are shown in Fig. 8. Table 1 shows the efficiency for each combination of vigilances, where ρ1 and ρ2 are the vigilances used to group parts and machines, respectively. We see that the group number can be c = 5, 6 or 7. In fact, it is difficult to pick a suitable group number c in the Dagli and Huggahalli (1995) method. If c = 5 is picked, the efficiency is η = 75.08%. If c = 6 is picked, the efficiency is η = 87.81%. Even if c = 7 is chosen, with a best efficiency of η = 89.11% in Dagli and Huggahalli's method, our approach gives a final result of c = 6 with an efficiency of η = 90.68%, as shown in Fig. 9. We can see that our proposed method presents a simple and efficient way of working, by using an auto-adjusted estimation method according to the structure of the data set itself.

Example 3. In this example, the machine/part matrix shown in Fig. 2, with 9 parts and 9 machines, is used. The pre-process results for determining a suitable vigilance based on the Dagli and Huggahalli (1995) method are shown in Fig. 10.

Fig. 6c. Machine/part matrix and final clustering result with grouping efficiency = 91.7%.

Fig. 6d. Machine/part matrix and final clustering result with grouping efficiency = 89.3%.


Table 2 shows the efficiency for each combination of vigilances. The Dagli and Huggahalli (1995) method gives a final result of c = 2 for this machine/part matrix, as shown in Fig. 11(a). The grouping efficiency is η = 81.62%. Our proposed modified ART1 gives a final result of c = 3 for this machine/part matrix, as shown in Fig. 11(b). The grouping efficiency is η = 89.06%. The final results from our proposed modified ART1 algorithm present better machine-part cells and also a higher grouping efficiency than Dagli and Huggahalli's method.

Example 4. The last example uses a larger machine/part matrix, with 105 parts and 46 machines, as shown in Fig. 12. The final clustering matrix from our proposed modified ART1 is shown in Fig. 13.

    Fig. 7. Machine/part matrix with 35 parts and 28 machines.

Table 1
Grouping efficiency for different vigilances by part (ρ1) and machine (ρ2)

c = 5:
ρ2 \ ρ1    0.2      0.3      0.35
0.25       71.65    75.08    75.08
0.3        71.65    75.08    75.08

c = 6:
ρ2 \ ρ1    0.25     0.4      0.45
0.2        87.81    87.81    87.81
0.35       87.81    87.81    87.81
0.4        87.81    87.81    87.81
0.45       88.43    88.43    88.43

c = 7:
ρ2 \ ρ1    0.5      0.55
0.5        89.11    89.11
0.55       89.11    89.11

Fig. 8. Variation of group number with vigilance (vigilance (%) on the x-axis against the number of groups on the y-axis, with separate curves for part groups and machine groups).


Fig. 9. Final results for the machine/part matrix of Fig. 7 using the modified ART1 algorithm, with grouping efficiency = 90.68%.

    Fig. 10. Variation of group number with vigilance.

Table 2
Efficiency for different vigilances by part (ρ1) and machine (ρ2)

c = 2:
ρ2 \ ρ1    0.2      0.25     0.3      0.35     0.4      0.45
0.2        81.62    81.62    81.62    81.62    81.62    81.62
0.25       81.62    81.62    81.62    81.62    81.62    81.62
0.3        81.62    81.62    81.62    81.62    81.62    81.62
0.35       81.62    81.62    81.62    81.62    81.62    81.62
0.4        81.62    81.62    81.62    81.62    81.62    81.62
0.45       81.62    81.62    81.62    81.62    81.62    81.62

Fig. 11. Final result for the machine/part matrix of Fig. 2. (a) Using Dagli and Huggahalli's method, with grouping efficiency = 81.62%. (b) Using the modified ART1 algorithm, with grouping efficiency = 89.06%.


    Fig. 12. Machine/part matrix with 105 parts and 46 machines.

Fig. 13. Final results for the machine/part matrix of Fig. 12 using the modified ART1 algorithm, with grouping efficiency = 87.54%.


The results show that the estimated optimal group number c is 7, and the grouping efficiency η is 87.54%. These results are good for a machine-part CF algorithm. The example also follows the expectation that more bottleneck machines and exceptional parts decrease the final grouping efficiency, as is the case here.

    4. Conclusions

Our main objective in this paper is to provide a neural network application in GT cell formation, with a special focus on the ART1 algorithm. Although ART1 has been applied to GT by Kaparthi and Suresh (1992) and Dagli and Huggahalli (1995), it encountered problems when directly applied to GT. In this paper, we analyze these drawbacks and propose a modified ART1 to fit the application to GT. Some examples are given and comparisons are made. Based on the performance measure proposed by Chandrasekaran and Rajagopalan (1986b), we find that our proposed method is vigilance parameter-free and also more efficient in CF with different machine/part matrices than the previous methods.

    Acknowledgements

The authors are grateful to the anonymous referees for their critical and constructive comments and suggestions. This work was supported in part by the National Science Council of Taiwan, R.O.C., under Grant NSC-92-2118-M-033-001.

    References

Carpenter, G.A., Grossberg, S., 1987. A massively parallel architecture for a self-organizing neural pattern recognition machine. Computer Vision, Graphics, and Image Processing 37, 54–115.

Carpenter, G.A., Grossberg, S., 1988. The ART of adaptive pattern recognition by a self-organizing neural network. Computer 21, 77–88.

Chan, H.M., Milner, D.A., 1982. Direct clustering algorithm for group formation in cellular manufacture. Journal of Manufacturing Systems 1 (1), 64–76.

Chandrasekaran, M.P., Rajagopalan, R., 1986a. MODROC: An extension of rank order clustering for group technology. International Journal of Production Research 24 (5), 1221–1233.

Chandrasekaran, M.P., Rajagopalan, R., 1986b. An ideal seed non-hierarchical clustering algorithm for cellular manufacturing. International Journal of Production Research 24, 451–464.

Chang, S.I., Aw, C.A., 1996. A neural fuzzy control chart for detecting and classifying process mean shifts. International Journal of Production Research 34, 2265–2278.

Cheng, C.S., 1997. A neural network approach for the analysis of control chart patterns. International Journal of Production Research 35, 667–697.

Chu, C.H., Hayya, J.C., 1991. A fuzzy clustering approach to manufacturing cell formation. International Journal of Production Research 29 (7), 1475–1487.

Dagli, C., Huggahalli, R., 1995. Machine-part family formation with the adaptive resonance theory paradigm. International Journal of Production Research 33, 893–913.

Gindy, N.N.G., Ratchev, T.M., Case, K., 1995. Component grouping for GT applications – a fuzzy clustering approach with validity measure. International Journal of Production Research 33 (9), 2493–2509.

Grossberg, S., 1976. Adaptive pattern classification and universal recoding I: Parallel development and coding of neural feature detectors. Biological Cybernetics 23, 121–134.

Guerrero, F., Lozano, S., Smith, K.A., Canca, D., Kwok, T., 2002. Manufacturing cell formation using a new self-organizing neural network. Computers and Industrial Engineering 42, 377–382.

Guh, R.S., Tannock, J.D.T., 1999. Recognition of control chart concurrent patterns using a neural network approach. International Journal of Production Research 37, 1743–1765.

Gupta, T., Seifoddini, H., 1990. Production data based similarity coefficient for machine-component grouping decisions in the design of a cellular manufacturing system. International Journal of Production Research 28 (7), 1247–1269.

Hopfield, J.J., 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA 79, 2554–2558.

Kao, Y., Moon, Y.B., 1991. A unified group technology implementation using the backpropagation learning rule of neural networks. Computers and Industrial Engineering 20 (4), 425–437.

Kaparthi, S., Suresh, N.C., 1992. Machine-component cell formation in group technology: A neural network approach. International Journal of Production Research 30 (6), 1353–1367.

King, J.R., 1980. Machine-component grouping in production flow analysis: An approach using a rank order clustering algorithm. International Journal of Production Research 18 (2), 213–232.

Kohonen, T., 1998. The self-organizing map. Neurocomputing 21, 1–6.

Kohonen, T., 2001. Self-Organizing Maps, 3rd ed. Springer-Verlag, Berlin.

Lin, K.C.R., Yang, M.S., Liu, H.C., Lirng, J.F., Wang, P.N., 2003. Generalized Kohonen's competitive learning algorithms for ophthalmological MR image segmentation. Magnetic Resonance Imaging 21, 863–870.

Lippmann, R.P., 1987. An introduction to computing with neural nets. IEEE ASSP Magazine 4, 4–22.

Malave, C.O., Ramachandran, S., 1991. Neural network-based design of cellular manufacturing systems. Journal of Intelligent Manufacturing 2, 305–314.

McCormick, W.T., Schweitzer, P.J., White, T.W., 1972. Problem decomposition and data reorganization by a clustering technique. Operations Research 20 (5), 993–1009.

Mosier, C.T., 1989. An experiment investigating the application of clustering procedures and similarity coefficients to the GT machine cell formation problem. International Journal of Production Research 27 (10), 1811–1835.

Narayanaswamy, P., Bector, C.R., Rajamani, D., 1996. Fuzzy logic concepts applied to machine-component matrix formation in cellular manufacturing. European Journal of Operational Research 93, 88–97.

Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning representations by back-propagating errors. Nature 323, 533–536.

Shafer, S.M., Rogers, D.F., 1993. Similarity and distance measures for cellular manufacturing, Part I: A survey. International Journal of Production Research 31 (5), 1133–1142.

Singh, N., 1993. Design of cellular manufacturing systems: An invited review. European Journal of Operational Research 69 (3), 284–291.

Singh, N., Rajamani, D., 1996. Cellular Manufacturing Systems. Chapman & Hall, New York.

Venugopal, V., Narendran, T.T., 1994. Machine-cell formation through neural network models. International Journal of Production Research 32, 2105–2116.

Wei, J.C., Kern, G.M., 1989. Commonality analysis: A linear cell clustering algorithm for group technology. International Journal of Production Research 27 (12), 2053–2062.

Xambre, A.R., Vilarinho, P.M., 2003. A simulated annealing approach for manufacturing cell formation with multiple identical machines. European Journal of Operational Research 151, 434–446.

Xu, H., Wang, H.P., 1989. Part family formation for GT applications based on fuzzy mathematics. International Journal of Production Research 27 (9), 1637–1651.

Yang, M.S., Yang, J.H., 2002. A fuzzy soft learning vector quantization for control chart pattern recognition. International Journal of Production Research 40, 2721–2731.
