
3 Design of Experiments

Jack B. ReVelle, Ph.D.

3.1 OVERVIEW

Design of experiments (DOE) does not sound like a production tool. Most people who are not familiar with the subject might think that DOE sounds more like something from research and development. The fact is that DOE is at the very heart of a process improvement flow that will help a manufacturing manager obtain what he or she most wants in production, a smooth and efficient operation. DOE can appear complicated at first, but many researchers, writers, and software engineers have turned this concept into a useful tool for application in every manufacturing operation. Don't let the concept of an experiment turn you away from the application of this most useful tool. DOEs can be structured to obtain useful information in the most efficient way possible.

3.2 BACKGROUND

DOEs grew out of the need to plan efficient experiments in agriculture in England during the early part of the 20th century. Agriculture poses unique problems for experimentation. The farmer has little control over the quality of soil and no control whatsoever over the weather. This means that a promising new hybrid seed in a field with poor soil could show a reduced yield when compared with a less effective hybrid planted in a better soil. Alternatively, weather or soil could cause a new seed to appear better, prompting a costly change for farmers when the results actually stemmed from more favorable growing conditions during the experiment. Although these considerations are more exaggerated for farmers, the same factors affect manufacturing. We strive to make our operations consistent, but there are slight differences from machine to machine, operator to operator, shift to shift, supplier to supplier, lot to lot, and plant to plant. These differences can affect results during experimentation with the introduction of a new material or even a small change in a process, thus leading to incorrect conclusions.

In addition, the long lead time needed to obtain results in agriculture (the growing season), and the still longer time needed to repeat an experiment if necessary, requires that experiments be efficient and well planned. After the experiment starts, it is too late to include another factor; it must wait until the next season. This same discipline is useful in manufacturing. We want an experiment to give us the most useful information in the shortest time so our resources (personnel and equipment) can return to production.

One of the early pioneers in this field was Sir Ronald Fisher. He determined the initial methodology for separating the experimental variance between the factors and the underlying process and began his experimentation in biology and agriculture.


The method he proposed is known today as ANalysis Of VAriance (ANOVA). There is more discussion of ANOVA later in this chapter. Other important researchers have been Box, Hunter, and Behnken. Each contributed to what are now known as classical DOE methods. Dr. Genichi Taguchi developed methods for experimentation that were adopted by many engineers. These methods and other related tools are now known as robust design, robust engineering, and Taguchi Methods™.

3.3 GLOSSARY OF TERMS AND ACRONYMS

TABLE 3.1
Glossary of Terms and Acronyms

Confounding: When a design is used that does not explore all the factor level setting combinations, some interactions may be mixed with each other or with experimental factors such that the analysis cannot tell which factor contributes to or influences the magnitude of the response effect. When responses from interactions or factors are mixed, they are said to be confounded.

DOE: Design of experiments, also known as industrial experiments, experimental design, and design of industrial experiments.

Factor: A process setting or input to a process. For example, the temperature setting of an oven is a factor, as is the type of raw material used.

Factor level settings: The combinations of factors and their settings for one or more runs of the experiment. For example, consider an experiment with three factors, each with two levels (H and L = high and low). The possible factor level settings are H-H-H, H-L-L, etc.

Factor space: The hypothetical space determined by the extremes of all the factors considered in the experiment. If there are k factors in the experiment, the factor space is k-dimensional.

Interaction: Factors are said to have an interaction when changes in one factor cause an increased or reduced response to changes in another factor or factors.

Randomization: After an experiment is planned, the order of the runs is randomized. This reduces the effect of uncontrolled changes in the environment such as tool wear, chemical depletion, warm-up, etc.

Replication: When each factor level setting combination is run more than one time, the experiment is replicated. Each run beyond the first one for a factor level setting combination is a replicate.

Response: The result to be measured and improved by the experiment. In most experiments there is one response, but it is certainly possible to be concerned about more than one response.

Statistically significant: A factor or interaction is said to be statistically significant if its contribution to the variance of the experiment appears to be larger than would be expected from the normal variance of the process.


3.4 THEORY

This section approaches theory in two parts. The first part is a verbal, nontechnical discussion. The second part of the theory section covers a more technical, algebraic presentation that may be skipped if the reader desires to do so.

Here is the question facing a manager considering an experiment for a manufacturing line: What are my optimal process factors for the most efficient operation possible? There may be many factors to be considered in the typical process. One approach may be to choose a single factor and change it to observe the result. Another approach might change two or three factors at the same time. It is possible that an experimenter will be lucky with either of these approaches and find an improvement. It is also possible that the real improvement is not discovered, is masked by other changes, or that a cheaper alternative is never found. In a true DOE, the most critical two, three, or four factors (more are certainly possible, but most experiments fall in this range) are identified and an experiment is designed to modify these factors in a planned, systematic way. The result can be not only knowledge of how the factors affect the process, but also knowledge of how the factors interact with each other.

The following is a simple but more technical look at the theory in algebraic terms. Let's consider the situation of a process with three factors: A, B, and C. For now we'll ignore interactions. The response of the system in algebraic form is given by

Y = β0 + β1XA + β2XB + β3XC + ε          (3.1)

where β0 is the intercept; β1, β2, and β3 are the coefficients for the factor levels represented by XA, XB, and XC; and ε represents the inherent process variability. Setting aside ε for a while, we remember from basic algebra that we need four distinct experimental runs to obtain an estimate for β0, β1, β2, and β3 (note that ε and β0 are both constants and cannot be separated in this example). This is based on the need for at least four different equations to solve for four unknowns.

The algebraic explanation in the previous paragraph is close to the underlying principles of experimentation but, like many explanations constructed for simplicity, it is incomplete. The point is that we need at least four pieces of information (four equations) to solve for four unknowns. However, an experiment is constructed to provide sufficient information to solve for the unknowns and to help the experimenter determine if the results are statistically significant. In most cases this requires that an experiment consist of more runs than would be required from the algebraic perspective.
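To make the algebra concrete, the short Python sketch below (not part of the original text) codes the three factors at -1/+1, generates the eight runs of a 2^3 design, and solves for β0 through β3 by least squares. The response values are invented for illustration; the point is simply that eight runs more than satisfy the four-equation minimum.

```python
import numpy as np

# Coded settings (-1 = low, +1 = high) for a 2^3 full factorial in XA, XB, XC.
levels = np.array([[xa, xb, xc]
                   for xa in (-1, 1)
                   for xb in (-1, 1)
                   for xc in (-1, 1)], dtype=float)

# Hypothetical measured responses, one per run (illustration only).
y = np.array([10.1, 12.3, 9.8, 12.0, 14.2, 16.4, 13.9, 16.1])

# Model matrix: a column of ones for beta0, then the XA, XB, XC columns.
X = np.column_stack([np.ones(len(levels)), levels])

# Least-squares estimates of beta0, beta1, beta2, beta3 from Equation 3.1 (no interactions).
betas, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
for name, value in zip(["b0", "b1", "b2", "b3"], betas):
    print(f"{name} = {value:.3f}")
```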



3.5 EXAMPLE APPLICATIONS AND PRACTICAL TIPS

3.5.1 USING STRUCTURED DOEs TO OPTIMIZE PROCESS-SETTING TARGETS

The most useful application for DOEs is to optimize a process. This is achieved by determining which factors in a process may have the greatest effect on the response. The target factors are placed in a DOE so the factors are adjusted in a planned way, and the output is analyzed with respect to the factor level setting combination.

An example that the author was involved in dealt with a UV-curing process for a medical product. This process used intense ultraviolet (UV) light to cure an adhesive applied to two plastic components. The process flow was for an operator to assemble the parts, apply the adhesive, and place the assembly on a conveyor belt that passed the assembly under a bank of UV lights. The responses of concern were the degree of cure as well as bond strength. An additional response involved color of the assembly, since the UV light had a tendency to change the color of some components if the light was too intense. The team involved with developing this process determined that the critical factors were most likely conveyor speed, strength of the UV source (the bulb output diminishes over time), and the height of the UV source. Additionally, some thought that placement of the assembly on the belt (orientation with respect to the UV source bulbs) could have an effect, so this factor was added.
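The chapter does not list the actual settings used in this study, so the sketch below only illustrates the planning step: with the four candidate factors at two levels each, the full set of factor level setting combinations is 2^4 = 16 runs. The level labels shown are hypothetical placeholders.

```python
from itertools import product

# Hypothetical low/high settings for the four UV-curing factors (illustration only).
factors = {
    "conveyor_speed": ("slow", "fast"),
    "uv_source_strength": ("aged bulbs", "new bulbs"),
    "source_height": ("low", "high"),
    "orientation": ("lengthwise", "crosswise"),
}

# Full factorial plan: one run per factor level setting combination (2**4 = 16 runs).
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

for number, run in enumerate(runs, start=1):
    print(number, run)
```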

An experiment was planned and the results analyzed for this UV-curing process. The team learned that the orientation of the assemblies on the belt was significant and that one particular orientation led to a more consistent adhesive cure. This type of finding is especially important in manufacturing because there is essentially no additional cost to this benefit. Occasionally, an experiment result indicates that the desired process improvement can be achieved, but only at a cost that must be balanced against the gain from improvement. Additional information acquired by the team: the assembly color was affected least when the UV source was farther from the assemblies (not surprising), and sufficient cure and bond strength were attainable when the assemblies were either quickly passed close to the source or dwelt longer at a greater distance from the source. What surprised the team was the penalty they would pay for process speed. When the assembly was passed close to the light, they could speed the conveyor up and obtain sufficient cure, but there were always a small number of discolored assemblies. In addition, the shorter time made the process more sensitive to degradation of the UV light, requiring more preventive maintenance to change the source bulbs. The team chose to set the process up with a slower conveyor speed and the light source farther from the belt. This created an optimal balance between assembly throughput, reduction in defective assemblies, and preventive line maintenance.

Another DOE with which the author was involved was aimed at improving a laser welding process. This process was an aerospace application wherein a laser welder was used to assemble a microwave wave guide and antenna assembly. The process was plagued with a significant amount of rework, ranging from 20 to 50% of the assemblies. The reworked assemblies required hand filing of nubs created on the back of the assembly if the weld beam had burned through the parts. The welder had gone through numerous adjustments and refurbishment over the years. Support engineering believed that the variation they were experiencing was due to attempted piecemeal improvements and that they must develop an optimum setting; it would still probably result in rework, but the result would be steady performance. The experiment was conducted using focus depth, power level, and laser pulse width (the laser was not continuous; rather, it fired at a given power level for a controlled time period or pulse). The team found that the power level and pulse width ranges they had been using over the years had an essentially negligible impact on the weld. The key parameter was the beam focus depth. What's more, upon further investigation, the team found that the method of setting the focus depth was imprecise and, thus, dependent on operator experience and visual acuity. To fix this process, the team had a small tool fabricated and installed in the process to help the operator consistently set the proper laser beam focus. This resulted in a reduction of rework to nearly zero!

3.5.2 USING STRUCTURED DOEs TO ESTABLISH PROCESS LIMITS

Manufacturers know it is difficult to maintain a process when the factor settings are not permitted any variation and the limits on the settings are quite small. Such a process, often called a "point" process, may be indicative of high sensitivity to input parameters. Alternatively, it may indicate a lack of knowledge of the effect of process settings and a desire to control the process tightly just in case.

To determine allowable process settings for key parameters, place these factors in a DOE and monitor the key process outputs. If the process outputs remain in specification, and especially if the process outputs exhibit significant margin within the factor space, the process settings are certainly acceptable for manufacturing. To determine the output margin, an experimenter can run sufficient experimental replicates to assess process capability (Cpk) or process performance (Ppk). If the output is not acceptable in parts of the factor space, the experimenter can determine which portion of the factor space would yield acceptable results.
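As a rough sketch of the margin check described above, the snippet below computes Cpk from replicate measurements taken at one factor level setting combination. The specification limits and data are hypothetical, and a real capability study would also confirm process stability and an approximately normal distribution before relying on the index.

```python
import numpy as np

# Hypothetical replicate measurements of the response at one factor level setting combination.
measurements = np.array([10.02, 9.97, 10.05, 10.01, 9.99, 10.03, 10.00, 9.98])

# Hypothetical specification limits for the response.
lsl, usl = 9.90, 10.10

mean = measurements.mean()
sigma = measurements.std(ddof=1)   # sample standard deviation of the replicates

# Cpk: distance from the mean to the nearer specification limit, in units of 3 sigma.
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"mean = {mean:.3f}, sigma = {sigma:.4f}, Cpk = {cpk:.2f}")
```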

3.5.3 USING STRUCTURED DOEs TO GUIDE NEW DESIGN FEATURES AND TOLERANCES

As stated previously, DOE is often used in development work to assess the differences between two potential designs, materials, etc. This sounds like development work only, not manufacturing. Properly done, DOE can serve both purposes.

3.5.4 PLANNING FOR A DOE

Planning for a DOE is not particularly challenging, but there are some approaches to use that help to avoid pitfalls. The first and most important concept is to include many process stakeholders in the planning effort. Ideally, the planning group should include at least one representative each from design, production technical support, and production operators. It is not necessary to assemble a big group, but these functions should all be represented.


The rationale for their inclusion is to obtain their input in both the planning and the execution of the experiment. As you can imagine, experiments are not done every day, and communication is necessary to understand the objective, the plan, and the order of execution.

When the planning team is assembled, start by brainstorming the factors that may be included in the experiment. These may be tabulated (listed) and then prioritized. One tool that is frequently used for brainstorming factors is a cause-and-effect diagram, also known as a fishbone or Ishikawa diagram. This tool helps prompt the planning team on some elements to be considered as experimental factors.

Newcomers to DOE may be overly enthusiastic and want to include too many factors in the experiment. Although it is desirable to include as many factors as are considered significant, it must be remembered that each factor brings a cost. For example, consider an experiment with five factors, each at two levels. When all possible combinations are included in the experiment (this is called a full factorial design), the experiment will take 2^5 = 32 runs to complete each factor level setting combination just once! As will be discussed later, replicating an experiment at least once is very desirable. For this experiment, one replication will take 64 runs. In general, if an experiment has k factors at two levels, l factors at three levels, and m factors at four levels, the number of runs to complete every experimental factor level setting is given by 2^k · 3^l · 4^m. As you can see, the size of the experiment can grow quickly. It is important to prioritize the possible factors for the experiment and include what are thought to be the most significant ones with respect to the time and material that can be devoted to the DOE on the given process.
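The run-count rule is easy to mechanize; the small helper below (an illustration, not from the original text) returns 2^k · 3^l · 4^m for a candidate factor list, which is handy when prioritizing factors during planning.

```python
def full_factorial_runs(two_level=0, three_level=0, four_level=0):
    """Runs needed to cover every factor level setting combination once: 2^k * 3^l * 4^m."""
    return (2 ** two_level) * (3 ** three_level) * (4 ** four_level)

# Five two-level factors: 32 runs for a single pass, 64 with one full replication.
print(full_factorial_runs(two_level=5))        # 32
print(2 * full_factorial_runs(two_level=5))    # 64

# Mixed case: three two-level, one three-level, and one four-level factor.
print(full_factorial_runs(two_level=3, three_level=1, four_level=1))  # 96
```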

If it is desirable to experiment with a large number of factors, there are ways to reduce the size of the experiment. Some methods involve reducing the number of levels for the factors. It is not usually necessary to run factors at levels higher than three, and often three levels is unnecessary. In most cases, responses are linear over the range of experimental values and two levels are sufficient. As a rule of thumb, it is not necessary to experiment with factors at more than two levels unless the factors are qualitative (material types, suppliers, etc.) or the response is expected to be nonlinear (quadratic, exponential, or logarithmic) due to known physical phenomena.

Another method to reduce the size of the experiment is somewhat beyond the scope of this chapter, but it is discussed here in sufficient detail to provide some additional guidance. A full factorial design is generally desirable because it allows the experimenter to assess not only the significance of each factor, but all the interactions between the factors. For example, given factors T (temperature), P (pressure), and M (material) in an experiment, a full factorial design can detect the significance of T, P, and M as well as interactions TP, TM, PM, and TPM. There is a class of experiments wherein the experimenter deliberately reduces the size of the experiment and gives up some of the resulting potential information by a strategic reduction in factor level setting combinations. This class is generally called "fractional factorial" experiments because the result is a fraction of the full factorial design. For example, a half-fractional experiment would consist of 2^(n-1) factor level setting combinations. Many fractional factorial designs have been developed such that the design gives up information on some or all of the potential interactions (the formal term for this loss of information is confounding — the interaction is not lost, it is confounded or mixed with another interaction's or factor's result). To use one of these designs, the experimenter should consult one or more of the reference books listed at the end of this chapter or employ one of the software applications that support DOE. These will have guidance tables or selection options to guide you to a design. In general, employ designs that confound higher-level interactions (three-way, four-way, etc.). Avoid designs that confound individual factors with each other or with two-way interactions (AB, AC, etc.) and, if possible, use a design that preserves two-way interactions. Most experimental practitioners will tell you that three-way or higher interactions are not detected often and are not usually of engineering significance even if noted.
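One common way to build a half fraction, shown as a sketch below rather than a prescription, is to alias an added factor with a high-order interaction of a smaller full factorial. Here factor D is set equal to the ABC interaction column, so eight runs cover four two-level factors at the cost of confounding D with ABC and pairing the two-way interactions with each other.

```python
import numpy as np
from itertools import product

# Full 2^3 factorial in coded units for factors A, B, C.
base = np.array(list(product((-1, 1), repeat=3)))
A, B, C = base[:, 0], base[:, 1], base[:, 2]

# Defining relation D = ABC turns the eight runs into a 2^(4-1) half fraction.
D = A * B * C
design = np.column_stack([A, B, C, D])

print("run   A   B   C   D")
for number, row in enumerate(design, start=1):
    print(f"{number:>3} " + " ".join(f"{value:>3}" for value in row))
```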

The next part of planning the experiment is to determine the factor levels. Factor levels fall into two general categories. Some factors are quantitative and cover a range of possible settings; temperature is one example. Often these factors are continuous. A subset of this type of factor is one with an ordered set of levels; an example is high-medium-low fan settings. Some experimental factors are known as attribute or qualitative factors. These include material types, suppliers, operators, etc. The distinction between these two types of factors really drives the experimental analysis and sometimes the experimental planning. For example, while experimenting with the temperatures 100, 125, and 150°C, a regression could be performed, and it could identify the optimum temperature as something between the three experimental settings, say 133°C. While experimenting with three materials, A, B, and C, one does not often have the option of selecting a material part way between A and B if such a material is not on the market!

Continuing our discussion of factor levels, the attribute factors are generally given. Quantitative factors pose the problem of selecting the levels for the experiment. Generally, the levels should be set wide enough apart to allow identification of differences, but not so wide as to ruin the experiment or cause misleading settings. Consider curing a material at ~100°C. If your oven maintains temperature ±5°C, then an experiment of 95, 100, and 105°C may be a waste of time. At the same time, an experiment of 50, 100, and 150°C may be so broad that the lower temperature material doesn't cure and the higher temperature material burns. Experimental levels of 90, 100, and 110°C are likely to be more appropriate.

After the experiment is planned, it is important to randomize the order of the runs. Randomization is the key to preventing some environmental factor that changes over time from confounding with an experimental factor. For example, let's suppose you are experimenting with reducing chatter on a milling machine. You are experimenting with cutting speed and material from two suppliers, A and B. If you run all of A's samples first, would you expect tool wear to affect the output when B is run? Using randomization, the order would be mixed so that each material sample has an equal probability of the application of either a fresh or a dulled cutting edge.

Randomization can be accomplished by sorting on random numbers added to the rows in a spreadsheet. Another method is to add telephone numbers taken sequentially from the phone book to each run and sort the runs by these numbers. You can also draw run numbers from a hat or use any other method that removes human bias.
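The spreadsheet trick has a direct software equivalent. The sketch below shuffles the planned runs into a random order; the fixed seed is only there to make the illustration reproducible.

```python
import random

# Planned runs, e.g., the factor level setting combinations in standard order.
planned_runs = [f"combination {i}" for i in range(1, 9)]

rng = random.Random(2023)      # seeded only so the illustration is reproducible
run_order = planned_runs[:]    # copy, so the planned order is kept for the records
rng.shuffle(run_order)         # equivalent to sorting the rows on random numbers

for position, run in enumerate(run_order, start=1):
    print(position, run)
```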


When you conduct an experiment that includes replicates, you may be tempted to randomize the factor level setting combinations and run the replicates back-to-back while at the combination setting. This is less desirable than full randomization for the reasons given previously. Sometimes, an experiment is difficult to fully randomize due to the nature of experimental elements. For example, an experiment on a heat-treat oven or furnace for ceramics may be difficult to fully randomize because of the time involved with changing the oven temperature. In this case, one can relax the randomization somewhat and randomize factor level combinations while allowing the replicates at each factor level setting combination to go back-to-back. Randomization can also be achieved by randomizing how material is assigned to the individual runs.
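The relaxed scheme just described, randomizing the factor level setting combinations while letting the replicates at each combination run back-to-back, can be sketched as follows; the combination labels and replicate count are placeholders.

```python
import random

# Hard-to-change factor level setting combinations (placeholder names).
combinations = ["combo 1", "combo 2", "combo 3", "combo 4"]
runs_per_combination = 3    # first run plus replicates at each combination

rng = random.Random(7)
order = combinations[:]
rng.shuffle(order)          # randomize only the order of the combinations

# Replicates stay back-to-back at each setting, trading some protection for practicality.
run_plan = [(combo, rep) for combo in order
            for rep in range(1, runs_per_combination + 1)]
for combo, rep in run_plan:
    print(combo, "run", rep)
```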

3.5.5 EXECUTING THE DOE EFFICIENTLY

The experimenter will find it important to bring all the personnel who may handle experimental material into the planning at some point for training. Every experimenter has had one or more experiments ruined by someone who didn't understand the objective or significance of the experimental steps. Errors of this sort include mixing the material (not maintaining traceability to the experimental runs), running all the material at the same setting (not changing process settings according to plan), and other instances of Murphy's Law that may enter the experiment. It is also advisable to train everyone involved with the experiment to write down times, settings, and variances that may be observed. The latter might include maintenance performed on a process during the experiment, erratic gauge readings, shift changes, power losses, etc. The astute experimenter must also recognize that when an operator makes errors, you can't berate the operator and expect cooperation on the next trial of the experiment. Everyone involved will know what happened, and the next time there is a problem with your experiment, you'll be the last to know exactly what went wrong!

3.5.6 INTERPRETING THE DOE RESULTS

In the year 2000, DOEs were most often analyzed using a statistical software package that provided analysis capabilities such as ANalysis Of VAriance (ANOVA) and regression. ANOVA is a statistical analysis technique that decomposes the variation of experimental results into the variance from experimental factors (and their interactions, if the experiment supported such analysis) and the underlying variation of the process. Using statistical tests, ANOVA designates which factors (and interactions) are statistically significant and which are not. In this context, if a factor is statistically significant, it means that the observed data are not likely to have arisen from the normal variation of the process. Stated another way, the factor had a discernible effect on the process. If a factor or interaction is not determined to be statistically significant, the effect is not discernible from the background process variation under the experimental conditions. The way that most statistical software packages implementing ANOVA identify significance is by estimating a p-value for factors and interactions. A p-value indicates the probability that the resulting variance from the given factor or interaction would normally occur, given the underlying process. When the p-value is low, the variance shown by the factor or interaction is less likely to have occurred by chance. Generally, experimenters use a p-value of 0.05 as a cut-off point. When a p-value is less than 0.05, that factor or interaction is said to be statistically significant.

Regression is an analysis technique that attempts to fit an equation to the data. For example, if the experiment involves two factors, A and B, the experimenter would be interested in fitting the following equation:

Y = β0 + βAXA + βBXB + βABXAXB + ε          (3.2)

Regression software packages develop estimates for the constant (β0) as well as the coefficients (βA, βB, and βAB) of the variable terms. If there are sufficient experimental runs, regression packages also provide an estimate for the process standard deviation (ε). As with ANOVA, regression identifies which factors and interactions are significant. The way regression packages do this is to identify a p-value for each coefficient. As with ANOVA, experimenters generally tend to use a p-value of 0.05 as a cut-off point. Any coefficient p-value that is less than 0.05 indicates that the corresponding factor or interaction is statistically significant.
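As a sketch of the regression analysis described above (and not a recommendation of any particular package), the code below fits the Equation 3.2 model to an invented, replicated 2^2 data set by ordinary least squares and prints a p-value for each coefficient. The statsmodels library is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

# Coded settings for a replicated 2^2 experiment (each combination run four times).
xa = np.tile([-1.0, -1.0, 1.0, 1.0], 4)
xb = np.tile([-1.0, 1.0, -1.0, 1.0], 4)

# Hypothetical responses for the 16 runs.
y = np.array([8.2, 9.1, 11.8, 14.9,
              8.0, 9.4, 12.1, 15.2,
              8.3, 8.9, 11.9, 15.0,
              8.1, 9.2, 12.0, 14.8])

# Model matrix for Equation 3.2: constant, XA, XB, and the XA*XB interaction.
X = sm.add_constant(np.column_stack([xa, xb, xa * xb]))
fit = sm.OLS(y, X).fit()

for name, coefficient, p_value in zip(["b0", "bA", "bB", "bAB"], fit.params, fit.pvalues):
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"{name}: coefficient = {coefficient:.3f}, p-value = {p_value:.4f} ({verdict})")
```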

These are powerful tools and are quite useful, but are a little beyond further detailed discussion in this chapter. See some of the references provided for a more detailed explanation of these tools. If you do not have a statistical package to support ANOVA or regression, there are two options available for your analysis. The first option is to use the built-in ANOVA and regression packages in an office spreadsheet such as Microsoft Excel. The regression package in Excel is quite good; however, the ANOVA package is somewhat limited. Another option is to analyze the data graphically. For example, suppose you conduct an experiment with two factors (A and B) at two levels (2^2) and you do three replicates (a total of 16 runs). Use a bar chart or a scatter plot of factor A at both of its levels (each of the two levels will have eight data points). Then use a bar chart or scatter plot of factor B at both of its levels (each of the two levels will have eight data points). Finally, to show interactions, create a line chart with one line representing factor A and one line for factor B. Each line will show the average at the corresponding factor's level. Although this approach will not have statistical support, it may give you a path to pursue.
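For the graphical route, the sketch below draws the charts just described from an invented 2^2 data set with three replicates: one panel plots the two main-effect lines, and a second panel shows the conventional interaction plot with factor A on the x-axis and one line per level of factor B.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented 2^2 data with three replicates (16 runs): columns are A level, B level, response.
data = np.array([
    [-1, -1, 8.2], [-1, -1, 8.0], [-1, -1, 8.3], [-1, -1, 8.1],
    [-1,  1, 9.1], [-1,  1, 9.4], [-1,  1, 8.9], [-1,  1, 9.2],
    [ 1, -1, 11.8], [ 1, -1, 12.1], [ 1, -1, 11.9], [ 1, -1, 12.0],
    [ 1,  1, 14.9], [ 1,  1, 15.2], [ 1,  1, 15.0], [ 1,  1, 14.8],
])
A, B, y = data[:, 0], data[:, 1], data[:, 2]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Main-effect lines: the average response of each factor at its low and high levels.
ax1.plot(["low", "high"], [y[A == -1].mean(), y[A == 1].mean()], marker="o", label="Factor A")
ax1.plot(["low", "high"], [y[B == -1].mean(), y[B == 1].mean()], marker="s", label="Factor B")
ax1.set_ylabel("Average response")
ax1.set_title("Main effects")
ax1.legend()

# Conventional interaction plot: factor A on the x-axis, one line per level of factor B.
for b_level, marker, label in ((-1, "o", "B low"), (1, "s", "B high")):
    means = [y[(A == a) & (B == b_level)].mean() for a in (-1, 1)]
    ax2.plot(["A low", "A high"], means, marker=marker, label=label)
ax2.set_title("A x B interaction")
ax2.legend()

plt.tight_layout()
plt.show()
```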

3.5.7 TYPES OF EXPERIMENTS

As stated in previous paragraphs, there are two main types of experiments found in the existing literature. These are full factorial experiments and fractional factorial experiments. The pros and cons of these experiments have already been discussed and will not be covered again. However, there are other types of DOEs that are frequently mentioned in other writings.

Before discussing the details of these other types, let's look at Figure 3.1a. We see a Venn diagram with three overlapping circles. Each circle represents a specific school or approach to designed experiments: classical methods (one thinks of Drs. George Box and Douglas Montgomery), Taguchi Methods (referring to Dr. Genichi Taguchi), and statistical engineering (established and taught by Dorian Shainin). In Figure 3.1b we see that all three approaches share a common focus, i.e., the factorial principle referred to earlier in this chapter. Figure 3.1c demonstrates that each pairing of approaches shares a common focus or orientation, one approach with another. Finally, in Figure 3.1d, it is clear that each individual approach possesses its own unique focus or orientation.

The predominant type of nonclassical experiment that is most often discussed is named after Dr. Genichi Taguchi and is usually referred to as Taguchi Methods or robust design, and occasionally as quality engineering. Taguchi experiments are fractional factorial experiments. In that regard, the experimental structures are not as distinctive as Dr. Taguchi's presentation of the experimental arrays and his approach to the analysis of results. Some practicing statisticians do not promote Dr. Taguchi's experimental arrays because they believe other experimental approaches are superior. Despite this, many knowledgeable DOE professionals have noted that practicing engineers seem to grasp experimental methods as presented by Dr. Taguchi more readily than methods advocated by classical statisticians and quality engineers. It may be that Dr. Taguchi's use of graphical analysis is a help. Although ANOVA and regression have strong grounding in statistics and are very powerful, telling an engineer which factors and interactions are important is less effective than showing him or her the direction of effects using graphical analysis.

Despite the relatively small controversy regarding Taguchi Methods, Dr. Taguchi's contributions to DOE thinking remain. This influence ranges from the promotion of his experimental tools, such as the signal-to-noise ratio and the orthogonal array, to, perhaps more importantly, his promotion of experiments designed to reduce the influence of process variation and uncontrollable factors. Dr. Taguchi would describe uncontrollable factors, often called noise factors, as elements in a process that are too costly, or difficult — if not impossible — to control. A classic example of an uncontrollable factor is copier paper. Despite our instructions and specifications, a copier customer will use whatever paper is available, especially as a deadline is near. If the wrong paper is used and a jam is created, the service personnel will be correct to point out the error of not following instructions. Unfortunately, the customer will still be dissatisfied. Dr. Taguchi recommends making the copier's internal processes more robust against paper variation, the uncontrollable factor.

FIGURE 3.1a Design of experiments — I. (Venn diagram of three overlapping circles: Taguchi Methods, Classical Methods, Shainin Methods.)

FIGURE 3.1b Design of experiments — II. (Venn diagram: the region common to all three circles is labeled "Factorial Principle.")

FIGURE 3.1c Design of experiments — III. (Venn diagram: the pairwise overlaps carry the labels "Fractional Factorials" and "Interactions.")


Other types of experimental designs are specialized for instances where the results may be nonlinear, i.e., the response may be a polynomial or exponential form. Several of these designs attempt to implement the requirement for more factor levels in the most efficient way. One of these types is the Box-Behnken design. There are also classes of designs called central composite designs (CCDs).
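To give a flavor of these designs, the sketch below assembles a small face-centered central composite design for two factors using plain NumPy: the 2^2 factorial corners, one axial point on each face, and a few repeated center points. It illustrates the structure only; a Box-Behnken layout similarly combines points at mid-level settings with center runs.

```python
import numpy as np
from itertools import product

def face_centered_ccd(n_factors=2, n_center=3):
    """Face-centered central composite design (axial distance alpha = 1) in coded units."""
    corners = np.array(list(product((-1.0, 1.0), repeat=n_factors)))  # 2^k factorial points
    axial = []
    for i in range(n_factors):                                        # one +/- point per axis
        for direction in (-1.0, 1.0):
            point = np.zeros(n_factors)
            point[i] = direction
            axial.append(point)
    center = np.zeros((n_center, n_factors))                          # replicated center runs
    return np.vstack([corners, np.array(axial), center])

design = face_centered_ccd()
print(design)   # 4 corner + 4 axial + 3 center points = 11 runs for two factors
```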

Two specialized forms of experimentation are EVolutionary OPerations (EVOP) and mixture experiments. EVOP is especially useful in situations requiring complete optimization of a process. An EVOP approach would consist of two or more experiments. The first would be a specially constructed screening experiment around some starting point to identify how much to increase or decrease each factor to provide the desired improvement in the response(s). After determining the direction of movement, the process factors are adjusted and another experiment is conducted around the new point. These experiments are repeated until subsequent experiments show that a local maximum (or minimum, if the response is to be minimized) has been achieved. Mixture experiments are specialized to chemical processes where changes to a factor (for example, the addition of a constituent chemical) require a change in the overall process to maintain a fixed volume.
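The EVOP cycle described above can be sketched as a loop: run a small two-level factorial around the current operating point, estimate which direction improves the response, move the center that way, and repeat until no further gain appears. The response function, starting point, and step sizes below are all invented stand-ins for a real process.

```python
import numpy as np
from itertools import product

def process_yield(temperature, pressure):
    """Stand-in for the real process response (invented; peaks near 180 degC and 30 psi)."""
    return 95.0 - 0.01 * (temperature - 180.0) ** 2 - 0.05 * (pressure - 30.0) ** 2

center = np.array([160.0, 24.0])   # current operating point (hypothetical)
deltas = np.array([5.0, 2.0])      # half-width of the small factorial around the center

for cycle in range(1, 21):
    # Small 2^2 factorial around the current center, in coded units.
    coded = np.array(list(product((-1.0, 1.0), repeat=2)))
    runs = center + coded * deltas
    responses = np.array([process_yield(t, p) for t, p in runs])

    # Half-effects estimate the local slope in each factor.
    effects = coded.T @ responses / len(responses)
    if np.all(np.abs(effects) < 0.05):   # no worthwhile direction left: stop
        break
    center = center + np.sign(effects) * deltas

print(f"stopped after cycle {cycle} near temperature {center[0]:.0f} and pressure {center[1]:.0f}")
print(f"estimated yield there: {process_yield(*center):.2f}")
```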

This discussion of designed experiments would not be complete without at least some mention of Dorian Shainin and his unique perspective on this topic. Although there may be some room for debate regarding Shainin's primary contributions to the field, most knowledgeable persons would probably agree that he is best known for his work with multi-vari charts (variable identification), significance testing (using rank order, pink x shuffle, and b[etter] vs. c[urrent]), and techniques for large experiments (variable search and component search).

FIGURE 3.1d Design of experiments — IV. (Venn diagram: in addition to the shared factorial principle, fractional factorials, and interactions, the unique regions carry labels including signal-to-noise ratios, robustness, response surface methodology, rigorous, empirical, and nonparametric problem-solving.)


Some important terms that are considered to be unique to Shainin's work are the Red X variable, contrast, capping run, and endcount.

3.6 BEFORE THE STATISTICIAN ARRIVES

Most organizations that have not yet instituted the use of Six Sigma have few, if any, persons with much knowledge of applied statistics. To support this type of organization, it is suggested that process improvement teams make use of the following process to help them to define, measure, analyze, improve, and control (DMAIC).

CREATE ORGANIZATION

• Designator, for example:

  Column 1    Column 2       Column 3
  Process     Improvement    Team
  Product     Action         Group
  Project     Enhancement    Task Force
  Problem     Solution       Pack

• Appoint cross-functional representation
• Appoint leader/facilitator
• Agree on team logistics
  - Identify meeting place and time
  - Extent of resource availability
  - Scope of responsibility and authority
• Identify who the team reports to and when a report is expected

DEFINITIONS AND DESCRIPTIONS

• Fully describe problem
  - Source
  - Duration (frequency and length)
  - Impact (who and how much)
• Completely define performance or quality characteristic to be used to measure problem
  - Prioritize if more than one metric is available
  - State objective (bigger is better, smaller is better, nominal is best)
  - Determine data collection method (automated vs. manual, attribute vs. variable, real time vs. delayed)

CONTROLLABLE FACTORS AND FACTOR INTERACTIONS

• Identify all controllable factors and prioritize
• Identify all significant interactions and prioritize


• Select factors and interactions to be tested
• Select number of factor levels
  - Two for linear relationships
  - Three or more for nonlinear relationships
  - Include present levels

UNCONTROLLABLE FACTORS

• Identify uncontrollable (noise) factors and prioritize
• Select factors to be tested
• Select number of factor levels
  - Use extremes (outer limits) with intermediate levels if range is broad

ORTHOGONAL ARRAY TABLES (OATS)

• Assign controllable factors to inner OAT
• Assign uncontrollable factors to outer OAT
• Assignment considerations:
  - Interactions (if inner OAT only)
  - Degree of difficulty in changing factor levels (use linear graphs or triangular interaction table)
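An inner orthogonal array of the kind referred to in this checklist can be generated programmatically. The sketch below builds an eight-run, seven-column, two-level array (equivalent to the familiar L8 layout up to row and column ordering) from a parity construction; assigning controllable factors to some of its columns gives the inner OAT, and a smaller array built the same way can serve as the outer OAT for the noise factors.

```python
import numpy as np

def two_level_orthogonal_array(n_basic):
    """Two-level orthogonal array with 2**n_basic runs and 2**n_basic - 1 columns (L4, L8, L16, ...)."""
    runs = 2 ** n_basic
    columns = runs - 1
    array = np.zeros((runs, columns), dtype=int)
    for run in range(runs):
        for column in range(1, columns + 1):
            # Level = parity of the bitwise overlap between the run index and the column index.
            array[run, column - 1] = bin(run & column).count("1") % 2
    return array + 1   # report levels as 1 and 2, as Taguchi-style tables usually do

L8 = two_level_orthogonal_array(3)
print(L8)   # 8 runs x 7 columns; any two columns contain each level pair equally often
```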

CONSULTING STATISTICIAN

• Request and arrange assistance
• Inform statistician of what has already been recommended for experimentation
• Work, as needed, with statistician to complete design, conduct experiment, collect and validate data, perform data analysis, and prepare conclusions/recommendations


TAGUCHI APPROACH TO EXPERIMENTAL DESIGN


3.7 CHECKLISTS FOR INDUSTRIAL EXPERIMENTATION

In this final section a series of checklists is provided for use by DOE novices. The reader is encouraged to review and apply these checklists to assure that their DOEs are conducted efficiently and effectively.

CHECKLIST — INDUSTRIAL EXPERIMENTATION

1. DEFINE THE PROBLEM
   • A clear statement of the problem to be solved.
2. DETERMINE THE OBJECTIVE
   • Identify output characteristics (preferably measurable and with good additivity).
3. BRAINSTORM
   • Identify factors. It is desirable (but not vital) that inputs be measurable.
   • Group factors into control factors and noise factors.
   • Determine levels and values for factors.
   • Discuss what characteristics should be used as outputs.
4. DESIGN THE EXPERIMENT
   • Select the appropriate orthogonal arrays for control factors.
   • Assign control factors (and interactions) to orthogonal array columns.
   • Select an outer array for noise factors and assign factors to columns.
5. CONDUCT THE EXPERIMENT OR SIMULATION AND COLLECT DATA
6. ANALYZE THE DATA BY:

   Regular Analysis           Signal-to-Noise Ratio (S/N) Analysis
   Avg. response tables       Avg. response tables
   Avg. response graphs       Avg. response graphs
   Avg. interaction graphs    S/N ANOVA
   ANOVA

7. INTERPRET RESULTS
   • Select optimum levels of control factors.
     - For nominal-the-best, use mean response analysis in conjunction with S/N analysis.
   • Predict results for the optimal condition.
8. ALWAYS, ALWAYS, ALWAYS RUN A CONFIRMATION EXPERIMENT TO VERIFY PREDICTED RESULTS
   • If results are not confirmed or are otherwise unsatisfactory, additional experiments may be required.


DOE — GENERAL STEPS — I

• Step: Clearly define the problem.
  Activity: Identify which input variables (parameters or factors) may significantly affect specific output variables (performance characteristics or factors). Also, identify which input factor interactions may be significant.
• Step: Select input factors to be investigated and their sets of levels (values).
  Activity: Apply Pareto analysis to focus on the "vital few" factors to be examined in the initial experiment.

DOE — GENERAL STEPS — II

• Step: Decide number of observations required.
  Activity: Determine how many observations are needed to ensure, at predetermined risk levels, that correct conclusions are drawn from the experiment.
• Step: Choose experimental design.
  Activity: Design should provide an easy way to measure the effect of changing each factor and separate it from effects of changing other factors and from experimental error. Orthogonal (symmetrical/balanced) designs simplify calculations and interpretation of results.

DOE PROJECT PHASES

• Phase: Process characterization experiments.
  Activity: Identify significant variables that determine output performance characteristics and optimum level for each variable.
• Phase: Process control.
  Activity: Determine if process variables can be maintained at optimum levels. Upgrade process if it cannot. Provide for training and documentation.


PROCESS CHARACTERIZATION EXPERIMENTS

• Screening: Separate "vital few" variables from "trivial many."
• Refining: Identify interactions between variables and set optimum ranges for each variable.
• Confirmation: Verify ideal values and optimum ranges for key variables.

SCREENING EXPERIMENT

1. Identify desired responses.
2. Identify variables.
3. Calculate sample size and trial combinations.
4. Run tests.
5. Evaluate results.

REFINING EXPERIMENT

1. Select, modify, and construct experimental matrix design.
2. Determine optimum ranges for key variables.
3. Identify meaningful interactions between variables.

CONFIRMATION EXPERIMENT

1. Conduct additional testing to verify ideal values of significant factors.
2. Determine extent to which these factors influence the process output.


PROCESS CONTROL

1. Determine capability to maintain process within new upper and lower operating limits, i.e., evaluate systems used to monitor and control significant factors.
2. Initiate statistical quality control (SQC) to establish upper and lower control limits.
3. Put systems into place to monitor and control equipment.
4. Develop and provide training materials for use by manufacturing.
5. Document process, control system, and SQC.

POTENTIAL PITFALLS

It is possible to
• Overlook significant variables when creating the experiment.
• Miss unexpected factors initially invisible to experimenters. The significance of unknown factors and process random variations will be apparent by the degree to which outcomes are explained by input variables.
• Fail to control all variables during the experiment. With tighter ranges, it is harder to hold the process at one end or the other of the range during the experiment.
• Neglect to simultaneously consider multiple performances. Ideally, significant variables affect all responses at the same end of the process window.

PROCESS OPTIMIZATION

• OBJECTIVE
  Find the best overall level (setting) for each of a number of input parameters (variables) such that process output(s), i.e., performance characteristics, are optimized.
• APPROACHES
  - One-dimensional search: all parameters except one are fixed.
  - Multidimensional search: uses selected subsets of level setting combinations (for controllable parameters). Fractional factorial design.
  - Full-dimensional search: uses all combinations of level settings for controllable parameters. Full factorial design.

DIMENSIONAL SEARCH SCALE

ONE-D    MULTI-D    FULL-D


LEVEL SETTING CRITERIA

• Level settings for input parameters should be carefully chosen.
  - If settings are too wide, process minimum or maximum could occur between them and thus be missed.
  - If settings are too narrow, effect of that input parameter could be too small to appear significant.
  - Settings should be selected so that process fluctuations are greater than sampling error.
  - For insensitive input parameters, i.e., robust factors, large differences in settings are required to bring parameter effect above noise level.

WHY REPLICATION?

• Experimental results contain information on
  - Random fluctuations in process.
  - Process drift.
  - Effect of varying levels of input parameters.
• Thus, it is important to replicate (repeat) at least one experimental run one or more times to estimate extent of variability.

REFERENCES

Barker, T. R., Quality by Experimental Design, 2nd ed., Marcel Dekker, New York, 1994.
Barker, T. R., Engineering Quality by Design, Marcel Dekker, New York, 1986.
Bhote, K. R., World Class Quality: Using Design of Experiments to Make It Happen, ASQ Quality Press, Milwaukee, WI, 1991.
Box, G. E. P., Hunter, W. G., and Hunter, J. S., Statistics for Experimenters, John Wiley, New York, 1978.
Dehnad, K., Quality Control, Robust Design, and the Taguchi Method, Wadsworth & Brooks/Cole, Pacific Grove, CA, 1989.
Hicks, C. H., Fundamental Concepts in the Design of Experiments, 3rd ed., Holt, Rinehart & Winston, New York, 1982.
Lochner, R. H. and Matar, J. E., Designing for Quality: An Introduction to the Best of Taguchi and Western Methods of Statistical Experimental Design, Quality Resources, White Plains, NY, 1990.
Montgomery, D. C., Design and Analysis of Experiments, John Wiley, New York, 1976.
Phadke, M. S., Quality Engineering Using Robust Design, Prentice Hall, Englewood Cliffs, NJ, 1989.
ReVelle, J. B., Frigon, N. L., Sr., and Jackson, H. K., Jr., From Concept to Customer: The Practical Guide to Integrated Product and Process Development and Business Process Reengineering, Van Nostrand Reinhold, New York, 1995.
Ross, P. J., Taguchi Techniques for Quality Engineering, McGraw-Hill, New York, 1988.
Roy, R., A Primer on the Taguchi Method, Van Nostrand Reinhold, New York, 1990.
Schmidt, S. R. and Launsby, R. G., Understanding Industrial Designed Experiments, 2nd ed., CQG Printing, Longmont, CO, 1989.
Taguchi, G., Introduction to Quality Engineering: Designing Quality into Products and Processes, Quality Resources, White Plains, NY, 1986.
