Micromechanics Macromechanics Fibers Lamina Matrix Laminate Structure.
Application of the Design Structure Matrix: A Case Study ...
Transcript of Application of the Design Structure Matrix: A Case Study ...
Application of the Design Structure Matrix: A Case Study inProcess Improvement Dynamics
byCory J. Welch
B.S. in Mechanical EngineeringCornell University, Ithaca, NY 1994
Submitted to the Alfred P. Sloan School of Management and the School of Engineeringin partial fulfillment of the requirements for the degrees of
Master of Business Administration BARKERand BRE
Master of Science in Mechanical Engineering MASSAC HRUSTS- -N"TI tbOF TECHNOLOGY
in conjunction with theLeaders for Manufacturing Program
at the LIBRARIESMassachusetts Institute of Technology
June 2001
© 2001 Massachusetts Institute of Technology, All rights reserved
Signature ofAuthor
( Sloan School of ManagementDepartment of Mechanical Engineering
May 11, 2000Certifiedby
John D. Sterman, Thesis AdvisorJ. Spencer Standish Professor of Management
Certifiedby
Daniel E. Wh it yJIis AdvisorSepiir Research Scientist
Acceptedby
Margaret Andrews, Director of Master's Programjwm &rlol of Management
Acceptedby
Ain Sonin, Chairman, DepartdCfdommittee for Graduate StudentsDepartment of Mechanical Engineering
This page is intentionally left blank
2
Application of the Design Structure Matrix: A Case Study inProcess Improvement Dynamics
by
Cory J. Welch
Submitted to the Alfred P. Sloan School of Managementand the Department of Mechanical Engineering
on May 11, 2001in partial fulfillment of the requirements for the degrees of
Master of Business Administrationand Master of Science in Mechanical Engineering
Abstract
A challenging aspect of managing the development of complex products is the notion ofdesign iteration, or rework, which is inherent in the design process. An increasingly popularmethod to account for iteration in the design process is the design structure matrix (DSM).This paper presents an application of DSM methods to a novel product-development project.To model the project appropriately, the DSM method was extended in two ways. First, themodel developed here explicitly includes overlapping activity start and end times (allowingconcurrent development to be modeled). Second, the model includes explicit learning-by-doing (reducing the probability of rework with each iteration). The model is tested on asubset of the tasks in a complex new product developed by the Advanced TechnologyVehicles division of General Motors, specifically the thermal management system of anelectric vehicle. Sensitivity analysis is used to identify the drivers of project duration andvariability and to identify opportunities for design process improvement. The model is alsoused to evaluate potential design process modifications.
Thesis Advisors: John D. Sterman, J. Spencer Standish Professor of ManagementDaniel E. Whitney, Senior Research Scientist
3
Acknowledgements
Completion of this thesis would not have been possible were it not for the support of dozens
of people. First, I want to thank my best friend, Sara Metcalf. I cannot overemphasize the
value of your emotional and intellectual support during this process. I could not possibly
have done this without you. I am also grateful for the support of my mother, MaryAnn, who
has encouraged me in all my endeavors and who has always been there for me. I pray your
health returns to you. Special thanks are also in order for the rest of my family: Kellie for her
friendship; Dad for his words of wisdom; Carol, Jenny, Ryan, Tim, and Nana for their
encouragement; and last, but certainly not least, Bill for being a pillar of support for my ailing
mother. I would also like to extend my sincere gratitude to my management advisor, John
Sterman, who has given me the gift of a new way of thinking and who has profoundly
influenced me -- more than he will probably ever know. Thanks also go to my engineering
advisor, Daniel Whitney, for his patient guidance.
I also must express appreciation for my friends in the LFM program, who will all be missed
sorely after June. Additionally, I want to thank all those at the Advanced Technology Vehicle
division of General Motors who helped make this project possible, especially Greg, Gary, and
Bill. I also appreciate the help of many others at the company, including Cara, Mike, Robb,
June, Lori, Don, Aman, Andy, Cinde, Bob, Eric, Kris, Mark V., Tom E., Bruce, Ron, Ray,
Jim, Jill, Erin, Tom C., Mark S., Steve, Lawrence, Bill, Brendan, and Bob. Finally, I want to
thank the Leaders for Manufacturing program at MIT for their support of this work.
4
Table of Contents1. INTRODUCTION ...................................................................................................... 7
2. APPLICATION OF THE DESIGN STRUCTURE MATRIX (DSM)........................9
2.1. ADVANCED TECHNOLOGY VEHICLES ......................................................................... 92.2. SYSTEM D ESCRIPTION............................................................................................... 102.3. D EFINITION OF THE PROCESS.................................................................................... 102.4. DATA COLLECTION AND PARAMETER ESTIMATION.................................................. 11
2.5. INTRODUCTION TO THE DSM MODEL .......................................................................... 132.6. M ODIFICATION OF ALGORITHM ................................................................................ 17
3. THERMAL MANAGEMENT SYSTEM ANALYSIS..................................................27
3.1. BASELINE PROCESS ANALYSIS ................................................................................. 283.2. M ODIFIED PROCESS ANALYSIS..................................................................................353.3. PROCESS RECOMMENDATIONS ................................................................................. 43
4. MODEL LIMITATIONS............................................................................................43
4.1. STABLE B ASELINE PROCESS........................................................................................444.2. IMBALANCE OF MODEL RESOLUTION AND DATA ACCURACY .................................... 44
5. CO N CLUSIO N ............................................................................................................... 49
6. REFERENCES...............................................................................................................49
APPENDIX 1: DESIGN TASK DESCRIPTIONS.........................................................52APPENDIX 2: AGGREGATION OF DESIGN TASKS ............................................. 55APPENDIX 3: MATRIX SENSITIVITY RESULTS ................................................... 56APPENDIX 4: BASELINE PROCESS TASK DURATIONS, LEARNING CURVE.....57APPENDIX 5: MODIFIED PROCESS TASK DURATIONS, LEARNING CURVE .... 58APPENDIX 6: MODIFIED PROCESS INPUT MATRICES ...................................... 59APPENDIX 7: MODEL CODE ..................................................................................... 60APPENDIX 8: MODEL INSTRUCTION MANUAL.................................................. 78
5
Table of FiguresFigure 2-1 Interaction M atrix ........................................................................................... 14Figure 2-2 Rework Probability Matrix.............................................................................. 15Figure 2-3 Rew ork Im pact M atrix ..................................................................................... 16Figure 2-4 Triangular Distribution of Task Duration .......................................................... 17Figure 2-5 O verlap M atrix ............................................................................................... 19Figure 2-6 Partial Overlap Illustration .............................................................................. 20Figure 2-7 External Precedence Concurrence (adapted from Ford and Sterman, 1998)......... 22Figure 2-8 Learning Curve ............................................................................................... 26Figure 2-9 Effect of Learning Curve on Project Completion Time..................................... 27Figure 3-1 Gantt Chart of Baseline Process without Rework ............................................. 29Figure 3-2 Gantt Chart of Baseline Process with Rework ................................................. 30Figure 3-3 Baseline Process Completion Time Distributions ............................................. 31Figure 3-4 Baseline Process: Row Sensitivity Analysis ..................................................... 34Figure 3-5 Cumulative Distribution Function of Baseline and Modified Processes............ 38Figure 3-6 50% Reduction in Rework Probabilities .......................................................... 41Figure 3-7 100 % Increase in Rework Probabilities .............................................................. 42Figure 4-1 Usefulness Constrained by Data Accuracy ...................................................... 45Figure 4-2 Employee Pull and Resistance.......................................................................... 47Figure 4-3 Effect of Model Resolution on Employee Pull.................................................. 48
6
1. Introduction
Technology companies are realizing that the processes they follow in developing their
products are themselves a source of competitive advantage. Thus, an increasing amount of
attention is being paid to the design and effective management of product development
processes (Ulrich and Eppinger, 2000; Wheelwright and Clark, 1992). One challenging
aspect of managing the development of complex products is the notion of design iteration, or
rework, which is inherent in the design process (Eppinger et al., 1994; Eppinger et al., 1997;
Smith and Eppinger, 1997a; Smith and Eppinger, 1997b; Steward, 1981). Osborne (1993)
discovered that between one third and two thirds of total development time at one
semiconductor firm was a result of design iteration. Since substantial savings could be
realized by minimizing the amount of rework in the design process, it is desirable to have
management methodologies that consider rework. Unfortunately, conventional management
techniques such as Gantt charts and PERT/CPM are unable to account for design iteration.
One method of explicitly accounting for iteration in the design process that is increasingly
used is the design structure matrix (DSM)2
The design structure matrix was developed by Steward (198 1a, 198 1b), who described how
DSM could be used to identify iterative circuits in the design process and suggested means to
manage these circuits, such as optimally sequencing design tasks to minimize iteration.
Subsequently, much work has been done to develop the DSM methodology. Black (1990)
applied DSM to an automotive brake system in an attempt to optimize the design process.
Smith and Eppinger (1 997a) presented a DSM model that uses deterministic task durations
1Project Evaluation and Review Technique/Critical Path Method2 Sometimes referred to as the dependency structure matrix.
7
and probabilistic task rework to estimate project duration. They assume, however, that all
design tasks are sequential (i.e., no overlapping of design tasks). Smith and Eppinger (1997b)
also developed a DSM model representative of a highly concurrent engineering process.
Their model assumes that all design tasks are performed in parallel and that rework is a
function of work completed in the previous iteration. Additionally, Smith and Eppinger
(1998) describe a model that addresses the question of whether design tasks should be
sequential or parallel. Carrascosa, Eppinger and Whitney (1998) describe a DSM model,
based largely on work done by Krishnan, Eppinger, and Whitney (1997), that relaxes the
assumption that design tasks are either purely sequential or parallel, permitting partial
overlapping of sequential or parallel activities.
Browning (1998) drew upon much of this previous work and developed a DSM simulation
model implemented in Microsoft@ Excel and Visual Basic that integrates project cost,
schedule and performance. Unlike previous models, Browning's model treats rework
probability and rework impact separately, treats task durations as random variables, and
applies a learning curve to the duration of reworked tasks. Browning's model permits tasks to
be both sequential and parallel, but does not permit partial overlapping of interdependent
activities. Additionally, Browning's model is intended for use as a project simulation tool
whereas previous models are primarily mathematical in nature.
In this paper, I discuss the application of a portion of Browning's (1998) model to the thermal
management system of a battery-powered electric vehicle. I extend Browning's model to
permit partial overlapping of interdependent design tasks (allowing concurrent development
8
to be modeled). I also modify Browning's algorithm to apply a learning curve to the rework
probabilities rather than to the task durations. The goal of the application and model
development was two-fold. First, I developed and evaluated Browning's model for potential
use at the Advanced Technology Vehicles division of General Motors. Second, I used the
model to identify opportunities for process improvement and to evaluate design process
modifications.
The remainder of this paper is organized as follows. Section 2 introduces the model
developed by Browning (1998) and discusses the modifications I made. Section 3 describes
the application of the model, presenting the results of baseline process analysis, sensitivity
analysis, and comparison of the baseline design process with a modified design process.
Section 4 evaluates the modeling technique and makes suggestions for future use. Section 5
provides concluding remarks.
2. Application of the Design Structure Matrix (DSM)
This section introduces the organization where the study was conducted, describes the design
process that I modeled as well as the fundamentals of Browning's (1998) DSM model, and
explains the modifications I made to Browning's algorithm.
2.1. Advanced Technology Vehicles
I conducted this study while employed as an intern at the Advanced Technology Vehicles
(ATV) division of General Motors. ATV is responsible for the development of alternative
propulsion vehicles (e.g., electric and hybrid-electric vehicles) and is an organization that
bridges the gap between pure research and development and mass production. Alternative
9
propulsion vehicles require much more invention than a typical vehicle development program,
and sales volumes are much smaller. Thus, the processes followed are generally less well
defined than those for a mass market vehicle. Each vehicle program is typically quite
different from the last.
2.2. System Description
The process chosen for analysis was the thermal management system (TMS) of a battery-
powered electric vehicle. The TMS maintains component and cabin temperatures within an
acceptable range. Components such as batteries, motors, and controllers generate significant
heat during operation, which must be dissipated for proper functionality, reliability, and
durability. Cabin temperature (the temperature within the passenger compartment of the
vehicle) must also be maintained within an acceptable range for customer comfort and
satisfaction. The loading of the thermal management system, which in part determines the
required system mechanization and component sizing, is highly dependent on characteristics
of the components it is responsible for cooling or heating and thus is tightly coupled with the
design of the rest of the vehicle. This characteristic of the system makes it a good candidate
for evaluation using DSM, which can highlight important interactions in a complex design
process.
2.3. Definition of the Process
After selecting the TMS as the process to be modeled, the specific tasks in the design of the
system must be identified. To identify the design tasks, I conducted approximately ten one-
on-one interviews with the engineers and managers responsible for this system (one design
engineer, one test engineer, and one engineering manager). Interviews ranged from thirty
10
minutes to two hours. Design tasks were elicited from the engineers and managers with the
following guidelines for task identification.
A good DSM process model will:
- capture all significant design tasks (if not explicitly, then as a subset of a larger task)
- capture all tasks that could result in design changes (e.g., design reviews)
* capture interactions with other systems
* capture significant transfers of information between tasks and subsystems
- have an appropriate level of aggregation.
The last guideline was the most problematic. Too high a level of aggregation runs the risk of
producing a model that does not capture important information flows. However, too low a
level of aggregation (i.e., having too much design detail) can overwhelm the modeler and the
experts with data gathering and parameter estimation for which they have little basis and
which might add no value in the end. To mitigate these risks, I started by asking the system
experts to identify all design tasks without regard to the level of detail. I then worked with the
engineering manager to combine many sub-tasks into larger, aggregate categories. The
resulting DSM includes 19 design tasks (see Appendix 1 for task descriptions). An
illustration of the aggregation process is provided in Appendix 2.
2.4. Data Collection and Parameter Estimation
Once the design tasks have been identified, the analyst must work with experts to estimate the
rework probabilities, rework impacts, task durations, and task learning curves (see sections
2.5 and 2.6). To ensure that the input data were as accurate as possible, I worked one-on-one
with the engineering manager most knowledgeable about the overall design process. I
provided detailed instructions on how to fill out each of the matrices before a series of
meetings during which the data were estimated (see Addendum 1 of Appendix 8). Then, I
II
met with the manager approximately ten times for meetings ranging between 30 and 90
minutes each to elicit data and estimate parameters. Data on task durations (Appendix 4)
were also obtained from other engineers who had worked on past projects.
A difficult aspect of parameter estimation was differentiating between rework probability and
rework impact. It became evident that the tendency was to assign the same value for rework
impact as for rework probability, effectively lumping the probability and impact of rework
into one number. To minimize this effect, I frequently repeated the specific question to be
answered for each data value (see Addendum 1 of Appendix 8). Additionally, it was difficult
to assign a single value to either rework probability or rework impact. In some cases, there
may be a high probability of rework occurring, but the impact may be low. At the same time,
there may be a small probability of rework occurring, but the impact may be high. Assigning
only one value to rework probability and rework impact was thus difficult. This difficulty
might be overcome by instead asking for a range of possible rework probabilities and rework
impacts. However, the problem would still exist of potentially having multiple rework
probability ranges corresponding with different rework impact ranges. After participating in
all of the data collection, I must emphasize the value of one-on-one meetings, as opposed to
surveys. Due to the complexity of the model algorithm and the numerous caveats that go
with data estimation (see Addendum 1 of Appendix 8), having a model expert present during
data estimation is vital to eliciting reasonable data.
12
2.5. Introduction to the DSM model
The model used to analyze the TMS design process was adapted from Browning (1998)3. For
simplicity, the model used in this paper focuses on project duration and does not model
project cost or performance (quality), as did Browning's model. In addition to describing the
basics of the algorithm developed by Browning, this section also presents refinements of the
algorithm. The need for these refinements was discovered in the process of modeling the
TMS design process. While the changes were considered necessary to characterize
adequately this particular process, the results are generic enough to be applied to any design
process. The modifications could also be incorporated easily into Browning's integrated
duration, cost, and quality model.
Basic DSM
An "activity-based" DSM is a square matrix that identifies interdependent design activities.4
Figure 2-1 shows the interaction matrix for the thermal management system design process.
The presence of a "1" in the cell where two design tasks interact indicates that one task either
provides information to or receives informationfrom the other. The rows of the matrix
indicate that the task associated with that row receives information from the tasks whose cells
contain a one. For example, in Figure 2-1, task 5 (Build Bench Skeleton) is seen to receive
informationfrom tasks 3, 4, 7 and 9. Conversely, the columns of the matrix indicated that the
task associated with that column provides information to the tasks whose cells contain a one.
For example, in Figure 2-1, task 5 is seen to provide information to tasks 9 and 11.
3 The model used an Excel spreadsheet interface with Visual Basic macros. The program can be obtained fromthe author.4 Many different kinds of DSM's are used. Examples include component-based, parameter-based, team-basedand activity-based DSM's. This model is an example of an activity-based DSM. Additional information on DSMcan be found at http://web.mit.edu/dsm/.
13
C
Activity Providing Information
coC CO
Build~C B(D keeo
C~~r D00in EED81rnt r0t Bec Co nEt -G)
0 0
Define~( C0to Sta U1Bec Tes Sub Ze 11 M 1
Cul Muld 13 (D1- 3 a D
M T 1)
DWaie Pa.k n 12 W 71 1)( 14
S0 (D 0 LI
M 0 )2-(
B(Dl AE hLT0Xa W
000 0 0 0 0 =_ L C
Determine System Requirements 21 1 TIW - 1 1 1 1 1DefineMechanization 3 1 1 1 1 1 1 1
Thermal Analysis 4 1 1Build Bench Skeleton 5-
E Preliminary Packaging 6 1 11 11 1RComponent Selection 7T7 1 1 1 1 _ T 1
Integrate Bench Components 9 -_ 1 1Define Control Strategy 101 1__ -T_ 1 _
Code Software 121_
EIntegrate Mule Software 14 1 1Mule Testing 15 1 1_
Detailed Packaging 16 1 1 1~) Alpha Hardware/Software ReleaseT17 1 1_ 1 _
Build Alpha 18 1 11Alpha Testing 19-----------------------------
Figure 2-1 Interaction Matrix
Browning's (1998) Model
Browning's (1998) model uses the DSM structure and Monte Carlo simulation to predict the
distribution of possible project durations. The model assigns both rework probabilities and
rework impacts to each instance where an interdependence between design activities was
identified. As an example, the rework probability matrix in Figure 2-2 illustrates that upon
completion or rework of task 4, a 50% probability exists that tasks 2 and 3 would have to be
reworked, and a 5% probability exists that task 1 would have to be reworked. In general,
numbers above the diagonal are estimates of the likelihood that completion of the task in the
same column as that number will cause re-work of the task in the same row as the number.
These numbers represent the chances offeedback from one task to another. The numbers
below the diagonal have similar meaning, but instead represent a situation offeedforward.
14
The presence of a number below the diagonal indicates that there is forward information flow
from one task to another. Thus, if the upstream task were re-worked, there is a probability
that the downstream task might also have to be re-worked (assuming the downstream task has
already started) since the information on which the downstream task is dependent upon would
have changed. For example, in Figure 2-2 we see that task 4 provides information forward to
tasks 5 and 7. Figure 2-2 illustrates that if tasks 5 and 7 had already started and task 4 had to
be reworked, there is a 10% chance that task 5 would have to be reworked and a 50% chance
that task 7 would have to be reworked. In general, numbers below the diagonal indicate the
likelihood that re-work of the task in the same column as the number will cause subsequent
re-work of the task in the same row as the number.
C
0
C
E~
C
0
E0
CD)
C0CL00.
-C.)
Ca
a)
(D
61 71 8
ECDCO
U0
CDMl
9
C
10
Cn
(D
C:0
0
11
0
CO
~0-00
0D
121 13
0
0)A
CD
0l)
75Co
141 15
_0C
a
16
aI)
a:
0
ISXCIS
0
0.
~0
0)CCl)0I-00.
171 181 19.10
.75 .10 .05 .20 .05
.10 .10 .05 .10 .05
.0 0 1 .10
j .0 5 1 2 13 0 .1 01 1 .0 5 1 .0 5. 1.051.20 1 .301.101 M .05
Define EICD 8 . .z .1 .1__ I IIntegrate Bench Components 9 1.0 1.0
Define Control Strategy 10 .80 .20 .80 .20Bench Test Subsystem 11 1.0 .05 1.0
Code Software 12 1.25Build Mule 13 .10
Integrate Mule Software 14 1.01.50Mule Testing 15 .50
Detailed Packaging 16 1.0 1.0 1.0Alpha Hardware/Software Release 17 1.0 1.0 .10.10M
Build Alpha 18 .25Alpha Testing 19 .. 10 1.0
Figure 2-2 Rework Probability Matrix
15
Although rework may be required of a design task, rework of the entire design task may not
be necessary. The rework impact matrix in Figure 2-3 provides the fraction of original task
duration that would be required in the event that rework was required. For example, the
rework impact matrix in Figure 2-3 indicates that an estimated 10% of task 3 would have to
be reworked if the Monte Carlo simulation determined that rework of task 3 was required
upon completion of task 4. Readers requiring additional detail about how to fill in the rework
probability and impact matrices should refer to Addendum 1 of Appendix 8.
C0
0
CL
00.
CDC
0(a.E0
CD
71 8
.051 .05
.10
ECn
U
a)
_I-
9
00
C
10
CO0
(D
)CD
7)
C0
C:
11
CO
0
12
a)
13
CU
0)C/)
14
0)CCUa)I-a)
0)C
(U0(U0~~0a)
a)0
ca2-
CU3a:cis
151 161 17
(U
0.
ca
0)C
CUa)I-CU
0.
181 191.051 I 1 1 1. 1. 1 .1
.101 .05
.051 .10 .05
Preliminary Packaging 6 .20 .10 .10Component Selection 7 .20 .10 .05 .05 .10 .10 .10 .10
Define EICD 8 .05 .05.05 .05Integrate Bench Components 9 .10 .
Define Control Strategy 10 .80 ._ .10 .051 .05Bench Test Subsystem 11 .10 .05 .20
Code Software 12 .05 .25Build Mule 13 .01
Integrate Mule Software 14 1.0 1.0Mule Testing 15 .20 .10
Detailed Packagin 16 .10 .05 .10
Alpha Hardware/Software Release 17 .05 .05 .10 .10Build Alpha 18 .10
Alpha Testing 19 .10 .05 .10 .05 .05
Figure 2-3 Rework Impact Matrix
16
, ,| I | || | | | .0 5
.10 .10
Design task durations are modeled as random variables with triangular distributions for
mathematical convenience. Estimates of the most likely value (MLV), best case value
(BCV), and worst case value (WCV) provide the endpoints of the triangular distribution.
Figure 2-4 illustrates the probability distribution function of the design task duration as
modeled by Browning (1998). Estimates of the task durations for the TMS design process are
provided in Appendix 4.
BCV MLV WCV
Design Task Duration
Figure 2-4 Triangular Distribution of Task Duration
With probabilistic task rework and randomly distributed task durations, output project
duration will be a continuous random variable. Monte Carlo simulation is used to predict the
distribution of project duration. Readers requiring additional detail on the model algorithm
should refer to Browning (1998) or Appendix 7 for the model code.
2.6. Modification of Algorithm
In this section, I describe the modifications made to Browning's (1998) base algorithm.
Specifically, I describe modifications permitting task overlap (permitting concurrent
engineering to be developed) and learning-by-doing (learning curve on rework probabilities).
17
2.6.1. Task Overlap
While defining the thermal management system design process and gathering data for
modeling, it became apparent that simulated design times using Browning's (1998) model
were likely to be several fold longer than what they would be in reality. The primary reason
was that the algorithm developed by Browning (1998) requires that a downstream task cannot
begin until the upstream task on which it is dependent is 100% completed. Browning's
algorithm permits activities to be performed in parallel only if the downstream task does not
rely on the upstream task for any information (i.e., no task interdependence). In reality, more
and more programs are designed as concurrent engineering programs. That is, activities
dependent on one another proceed, at least to some extent, in parallel and extract information
as they progress. While a design task may require information from another upstream design
task, it is not always the case that 100% of the upstream task must be completed before the
downstream task may commence.
To account for a degree of overlap between tasks that were interdependent, an additional
matrix was created. For illustration, the overlap matrix for the TMS is shown in Figure 2-5.
18
0) Co)C 0 o
a) a) C)
U a) cCM 0) E 0
0) C C CD CD
.0 o .- oE) 0 ~ Co 0) 0000 Cu 2 o)
C o I' 0 75 C .0>. a) > U ) CO 0 :3 7- a) a) _ o(I - ) CC~0 a) )
0a ) Q u a) 5 0D 0(D ~ < Co a) a a. .Co (
0)~ C a) i W 0 0 5 L 0) 26 a) 0 a) )/~ a) _0 <
C E C'D -u F- Q)~00) ~CD a) a) 3 o - 0) 0 0) W ~ ~ o>00 ma. EL C.MO 0 0 M G~ ~ < Co <
Determine System Requirements1
Define Mechanization 3 .2Thermal Analysis 4
Build Bench Skeleton 5Preliminary Packaging 6
0 '4 C 6 7 80 I4I ID 10 17 10
IIZI III til 11 111
.U.
Com onent Selection 7 .20 .50Define EICD 8 .
Inte rate Bench Com onents 9 1.0 .Define ConLrol Strate 1 U .0 .2Bench Test Subs stem 11 1.0 .10 .80
Code Software 12 .50Build Mule 13 1.0
Inte rate Mule Software 14 1.0 1.0Mule Testin 15 1.0 11.
Detailed Packa in 16 1.0 1.0 1.0Al ha Hardware/Software Release 17 1.0 1.0 .3311.0
Build Al ha 18 1.0Alpha Testing 19 L_1.0_1._1._1._1.
Figure 2-5 Overlap Matrix
Each sub-diagonal cell where an interdependence exists requires an estimation of the degree
of overlap between these design tasks. For example, Figure 2-1 shows that task 5 is
dependent on information from task 3. Thus, an estimate of the percentage of task 3 that must
be completed before task 5 can commence is required. From Figure 2-5, we see that about
80% of task 3 must be completed before task 5 may begin. Using this matrix, any degree of
overlap, from 0-100%, may be specified, unlike the baseline algorithm developed by
Browning, which assumes that 100% of the upstream task must be complete before the
dependent downstream task may commence (equivalent to having a "1" in the overlap matrix
cells).
19
0050
2 9 10 11 12 13 19
I.U
Figure 2-6 illustrates the difference between Browning's model and the model as I have
applied it. Figure 2-6a depicts the Gantt chart representation of two design tasks performed
purely in sequence, as required by Browning's model if an interdependence exists between
these two tasks. Figure 2-6b depicts two purely parallel design tasks. Browning's model
only permits this situation when no interdependence exists between the design tasks (i.e., a
zero in the matrix cell where the two design tasks overlap). Figure 2-6c, on the other hand,
illustrates a situation where two tasks are interdependent (i.e., the downstream task depends
on the upstream task), but the downstream task may commence before 100% of the upstream
task is completed. This partial overlap scenario illustrates the effect of including the overlap
matrix in the model.
A: Pure Sequential B: Pure Parallel C: Partial Overlap
Figure 2-6 Partial Overlap Illustration
It is worth noting that only sub-diagonal entries exist in the overlap matrix. Since the tasks
are listed in chronological order of task start time, super-diagonal elements, if included, would
be an estimation of the percentage of a downstream task that would need to be completed
before the upstream task rework could commence. While in reality it is quite possible to
identify the need for rework before completion of the downstream task, this model does not
consider that possibility. The discovery of required rework for upstream tasks is assumed
only to occur after the downstream task is completed, consistent with Browning's (1998)
formulation.
20
While accounting for a degree of overlap between dependent design tasks represents
increased model resolution, it should be noted that the method of accounting for overlap is
quite simplified for mathematical convenience and ease of data collection. A more realistic
representation of the overlap of dependent design tasks in a concurrent engineering situation
might be obtained by applying the concepts presented by Ford and Sterman (1998a). Ford
and Sterman (1998a) defined "External Process Concurrence" relationships, which are
(possibly) nonlinear relationships between the work accomplished in an upstream design
phase and the work available to be accomplished in a downstream design phase. Because
these relationships can be nonlinear, the optimal degree of parallelism between tasks can
change as the tasks progress. Ford and Sterman (1998b) describe a method to estimate these
relationships from the participants in a program. While this concept was applied to dependent
design phases, it seems reasonable that it could also be applied to dependent design tasks.
Figure 2-7 illustrates the concept of External Precedence Concurrence relationships. Figure
2-7a portrays a relationship where the downstream task may begin with 0% of the upstream
task completed. That is, the tasks are essentially independent. In Browning's (1998) project
model, this situation would represent a case where no interdependence was identified between
an upstream and a downstream task. Figure 2-7b illustrates a case where 100% of the
upstream task must be completed before the downstream task may commence. The
relationship portrayed in Figure 2-7b represents the assumed relationship between dependent
design tasks in Browning's (1998) project model.
21
100%
Percent ofdownstreamtaskavailable forcompletion
0i
A: 100% Parallel------
0Percent of upstreamtask completed
C: Discontinuous Overlap
100%
Percent ofdownstreamtaskavailable forcompletion
0100%
B: 100% Series
0 1Percent of upstreamtask completed
D: Continuous Overlap
100%
Percent ofdownstreamtaskavailable forcompletion
0U
Percent of upstreamtask completed
100%
Percent ofdownstreamtaskavailable forcompletion
0100% U
Percent of upstreamtask completed
Figure 2-7 External Precedence ConcurrenceSterman, 1998)
(adapted from Ford and
Figure 2-7c, on the other hand, illustrates a relationship between dependent design tasks that
is permitted in the project model as I have modified it. This relationship is a discontinuous
function, where 100% of the downstream task is available for completion upon completing a
certain percentage of the upstream task. Finally, Figure 2-7d illustrates the relationship that is
more likely to exist in reality. Figure 2-7d illustrates how, in concurrent engineering
processes, a continuous nonlinear relationship between the percent of downstream task
available for completion and the percent of upstream task completed might be represented.
22
10%
UUU ______
UUUUUUUUU
U
UU
UU
mu. umu uP ____
4
4*
4______ 4 _____________
-4--4
44
-- 44
,
~p ______ ______ ____________
100%
While Figure 2-7d, in the author's opinion, more accurately represents the flow of information
between interdependent design tasks in a concurrent engineering environment, the added
model complexity and complication of data gathering make using this structure impractical in
the DSM model. The relationship shown in Figure 2-7c is considered to be an improvement
over the restrictive relationship of Figure 2-7b and is used for simplicity. The reader should
refer to Appendix 7 for the code used to model this relationship.
2.6.2. Learning Curve
In addition to the model modification permitting partial task overlap, I also revised the
modeling of "learning-by-doing" to permit rework probabilities to be dynamic rather than
static. The algorithm developed by Browning (1998) recognized that the nominal duration of
a design task might be reduced upon iteration of that design task. The initial design task time
may include considerable up front activity, such as setup and gathering of information, that
may not have to be repeated if iteration of that design task is required later in the development
process. Thus, Browning's algorithm included a subjectively estimated learning factor that
represented the percentage of initial task time it would take to perform a design task the
second time it is performed during the development process. The model as I applied it,
however, did not use this learning curve for three reasons. First, the reduction in effort
required for subsequent performance of a task was considered to be accounted for adequately
in the rework impact matrix. Second, simulated project durations were relatively insensitive
to this learning curve since the reduction in task duration was a one-time step reduction in
task time, rather than a fractional reduction with each iteration of the task. Third, I applied a
different learning curve, as discussed below, and did not want to overburden participants with
data estimation on multiple learning curves.
23
Browning's algorithm assumed that the likelihood of reworking a design task would be
constant regardless of the number of iterations of that design task. It has been argued that it is
not readily apparent whether the rework probability would increase or decrease with iteration
of the task (Smith and Eppinger, 1997). However, observations at ATV indicated that the
likelihood of reworking a design task should indeed be reduced with iteration of the design
task.
For example, there is an estimated 20% chance that Alpha Testing (see Appendix 1 for a task
description and Figure 2-2 for rework probability matrix) may reveal problems with the
control logic, requiring the task Define Control Strategy to be reworked. Once these logic
errors are identified, the control strategy can be redefined, at which point additional Alpha
Testing will be required to validate the control strategy changes. It seems unreasonable,
however, to assume that upon re-testing, the probability would still be 20% that the control
strategy would have to be redefined. Learning has accumulated through the Alpha testing
process, and the likelihood of having to redefine the control strategy should now be less. In
fact, the reason for these testing stages is to identify problems and reduce the likelihood that
the design task has errors that may later become apparent to the customer.
It is well understood that as the rework probabilities approach unity the simulated project
duration approaches infinity. The additional work generated by iteration is roughly
proportional to 1/(1-R), where R represents the likelihood that a design task will have to be
reworked. Thus, as R approaches unity, the project can never be completed. Consequently,
24
estimated project durations are extremely sensitive to any large rework probabilities.
However, discussions with engineers and managers indicated that many instances exist where
the likelihood that a task will need to be reworked upon completion of a downstream task,
such as testing, is very high (in some cases nearly 100%) on the first iteration (even in
processes that did converge). After learning what was not considered, the likelihood of
iterating that task goes down substantially.
The existence of "learning by doing" is well documented in the literature (Teplitz, 1991)).
The simplest reasonable formulation for a rework probability learning curve is to assume the
probability of reworking a task falls by a constant fraction with each iteration. Let RPi, = the
estimated probability of reworking task i upon completion of downstream task j. Let LCi =
the learning curve assigned to task i, defined as the fractional reduction in rework probability
per design task iteration. Let n = the number of times task i is reworked as a result of
completion of task j. 5 The equation for the rework probability as a function of the number of
task iterations n is then6 RPi?1 (n) = RP 1 (0) * (1- LCi )" , where RPiJ(0) = the estimated
rework probability for the first iteration (see section 2.4 for discussion of parameter
estimation).
5 This distinction is important. It means that individual rework probabilities are reduced only upon iteration ofthe task, as compared with a reduction of the entire row of rework probabilities associated with a task: learning isspecific to a task and does not spill over to other tasks.6 This formulation was only applied to the superdiagonal rework probabilities. Discussion with the engineeringmanager indicated that these probabilities were less likely to be reduced with iteration of the design task sincethey represented situations offeedforward rather than feedback. In many cases, the likelihood of reworking thedownstream task was 100% if the upstream task was reworked due to strong dependence of the downstream taskon the upstream task (as indicated by many subdiagonal l's in the rework probability matrix). Since, in manycases, the 100% value was not likely to be reduced with iteration, a learning curve was not applied tosubdiagonal rework probabilities.
25
Thus, the rework probability will decay exponentially to zero with increasing iteration of the
task, as illustrated in Figure 2-87. In the example, RPjj (0) = 0.5 and LC = 0.2.
Figure 2-8 Learning Curve
To estimate the learning curve factors for the TMS design process, I worked directly with the
engineering manager. Starting with the estimated rework probabilities for the first iteration,
the manager sketched how he thought the rework probability would vary with iteration of the
design task. From this sketch, we estimated the learning curve factor that would yield
approximately the same shape as the sketch. Readers requiring additional detail should refer
to Appendix 7 for the code used to model this learning curve. Estimates of the learning curve
for the TMS design process are provided in Appendix 4.
7 It also might be reasonable to assume that a minimum rework probability, RPmini,, exists and that the reworkprobability exponentially approaches RPminij rather than zero. The formulation for rework probability would
then be: RP 1 (n) = RP min + (RPj (0) - RP min )* (1 - LCj )" . However, this
formulation would require an additional matrix for the values of RPmini,. In practice, the number of iterationsis likely to be small enough that the rework probabilities are not likely to approach zero.
26
Rework Probability vs. IterationNumber
0 .6 - ..... ....... - - .. ..
RP(n) = 0.5 x 0.8"0.4
.02 0.3-a..2 0.2-0 S0.1
0)
0 1 2 3 4 5 6 7
Number of iterations (n)
Figure 2-9 illustrates the effect that the rework probability learning curve had on simulated
project completion time for the TMS design process. Without a learning curve applied to the
rework probabilities, project completion time is much more sensitive to uncertainty in the
estimated rework probabilities. With no learning curve, project completion times approach
infinity with less than a doubling of the estimated rework probabilities (which results in some
rework probabilities being near, or equal to unity).
Sensitivity of Project Completion Timeto Rework Probabilities and Probability Learning
Curve
210 -190-
0 170-c ~ 150-
( 0 a) 130-0 Z 110-A
E f5' 90- -
70 -50
0 0.5 1 1.5 2
Multiple of Rework Probability
- -Estimated Learning Curve -- No Learning Curve
Figure 2-9 Effect of Learning Curve on Project Completion Time
3. Thermal Management System Analysis
This section describes analysis of the TMS design process using the DSM model. I used the
model to identify opportunities for improvement in the baseline process and to evaluate
potential process changes.
27
3.1. Baseline Process Analysis
3.1.1. Gantt Charts
After defining the process and gathering the requisite input data, the process may be analyzed
for estimated project completion time, project completion time variability, and drivers of
project variability. A useful first step in the analysis is to simulate project progression
assuming all tasks are done correctly the first time (no rework). The test was implemented by
setting all the rework impact values in the impact matrix to zero8 . I then generated a Gantt
chart of the "no rework" case to provide a common sense check of the project simulation9 .
Since most managers are quite familiar with Gantt chart type notation, using a Gantt chart to
inspect task ordering, degree of overlap, and project completion time in the event of no
rework will help to discover any flawed data points. Further, the simple visual representation
will orient the managers to the modeling process and could increase the likelihood that they
will trust the results. The baseline TMS design process with no rework is shown in Figure
3-1.
8 It may seem logical to instead assign values of zero to the rework probability matrix. However, the softwarecode uses the values in the probability matrix to determine the existence of a precedence relationship. If theentire rework probability matrix were zero, the code would assume that all tasks could be performed in parallel.Assigning a value of zero to the rework impact avoids this effect while never assigning rework during theprocess.9 The "Most Likely Value" of task durations were used.
28
Bcseline Process (without rework)
VTSDetermine System RequirementsDefine MechanizationThermal AnalysisBuild Bench SkeletonPreliminary PackagingComponent SelectionDefine EICDIntegrate Bench ComponentsDefine Control StrategyBench Test SubsystemCode SoftwareBuild MuleIntegrate Mule SoftwareMule TestingDetailed PackagingAlpha Hardware/Software ReleaseBuild AlphaAlpha Testing
0 10 20 30 40 50 60 70 80
E Icpsed T ime (Weeks)
Figure 3-1 Gantt Chart of Baseline Process without Rework
As seen in Figure 3-1, the nominal project completion time, assuming no rework in the design
process, is 68 weeks, compared with a target project completion time of 78 weeks for design
of the thermal management system. In this case, the degree of overlap among the activities
and the final project completion time were both consistent with values expected by the
engineering manager for a "no rework" case.
For comparison, a Gantt chart was also generated for a simulation where rework could
occurl . While this chart is simply one run of the Monte Carlo simulation, it can facilitate
understanding the modeling process. The visual representation of tasks being reworked upon
1 Both the Gantt chart with rework and the Gantt chart without rework were also generated using codedeveloped by Browning (1998)
29
completion of other tasks can help to demonstrate the model algorithm without requiring
managers to inspect code. Additionally, I found this chart to be helpful in troubleshooting and
correcting bugs resulting from customization of the algorithm. Figure 3-2 shows the Gantt
chart with rework for the TMS design process. Note that this chart only represents one
possible project duration and rework sequence. Hundreds, or thousands, of project
simulations would need to be run to get a distribution of simulated project completion times.
Bcaseline Process (with rework)
VTSDetermine System RequirementsDefine Mechanization
Thermal AnalysisBuild Bench SkeletonPreliminary PackagingComponent SelectionDefine EICDIntegrate Bench Components
Define Control StrategyBench Test SubsystemCode SoftwareBuild MuleIntegrate Mule SoftwareMule Testing
Detailed PackagingAlpha Hardware/Software Release
Build AlphaAlpha Testing
0 10 20 30 40 50 60 70 80 90 100 110
E Icpsed T ime (Weeks)
Figure 3-2 Gantt Chart of Baseline Process with Rework
3.1.2. Comparison of Simulated and Target Completion Times
Figure 3-3 portrays the results of a Monte Carlo simulation of the TMS design process. Five
thousand runs were used in the simulation to generate the cumulative distribution function
30
(CDF) shown in Figure 3-3. On a 300 Mhz processor, five thousand runs took about twelve
minutes. In all runs in this paper, the time step used in the model was 0.5 weeks. Five
thousand runs were used to minimize noise in the result so that a better comparison could be
made to the modified process described in section 3.2. Fewer runs could be used to reduce
simulation time at the expense of increased noise when comparing two processes. The CDF
indicates the probability (y-axis) that the project will be completed in the time shown on the
x-axis.
Baseline Process100 - - - - - - - -- - -
o0 908070-
M 6050 Baseline Mean: 82.6 weeks
0 00 40 Baseline Std Dev: 11.1 weeks
40-e 30. 20
1 Target =78 weeksE 10
60 70 80 90 100 110 120
Project Completion Time (weeks)
Figure 3-3 Baseline Process Completion Time Distributions
The simulated average project duration is 82.6 weeks with a standard deviation of 11.1 weeks,
compared with a target completion time of 78 weeks. The cumulative distribution function is
skewed somewhat to the right, consistent with the tendency for some projects to take longer
than anticipated. As seen in Figure 3-3, the simulation indicates that the likelihood of
completing the project by the target date of 78 weeks is about 40%.
31
The target date of 78 weeks was estimated by considering the start time and expected
completion time of the design process that was underway during the data collection phase.
The start time was determined from conversations with the engineer primarily responsible for
system design. The completion time, however, was more difficult to obtain because the
modeling of the TMS design process only covered a portion of the entire design process. That
is, only the first stage of prototype testing (Alpha testing), was included in the modeling.
Thus, the "end" of the design process had to be defined as the projected time when Alpha
testing and all upstream rework resulting from Alpha testing would be completed. The best
estimate of the end date was agreed upon with the engineering manager and corresponded
with the release of the control system on the overall vehicle program timing chart.
Use of Project Completion Time Distributions
Assuming accurate input data and task definition, the simulated distribution of project
completion times could be quite useful. For example, simulated project durations might
reveal a significant discrepancy between simulated project completion times and target project
completion times. In this case, the organization may choose to assign additional resources to
certain project tasks to increase the likelihood that the project will be completed in an
acceptable amount of time. Alternately, the organization may decide that the complexity of
the design task is too great, in which case certain product improvements may be postponed
until later generations of the product to minimize complexity. In extreme cases, an
organization may conclude that the project will take too long or cost too much compared with
targets and market assessments, in which case the organization may opt to cancel the project
and allocate resources elsewhere.
32
3.1.3. Sensitivity Analysis
While simulation of the project using the DSM model might provide management with a
better estimate of project completion time, it is also desirable to identify opportunities to
reduce the project completion time or to minimize the variation in project completion time.
Traditional DSM analyses have concentrated on task sequencing as a potential source of
process optimization (Smith and Eppinger, 1997a). However, attempts to re-sequence the
design process produced no interesting results in this example. While re-sequencing
algorithms might theoretically be able to optimize design configurations, in reality I found
that most design tasks in the thermal management system had strong precedence constraints
making many alternate sequences infeasible. Thus, in an attempt to identify the drivers of
project duration and variability, I performed sensitivity analysis on the estimated rework
probabilities.
Row Sensitivity
I first analyzed the impact of a reduction in the probability of reworking each task. I reduced
all rework probabilities in the row associated with a given task by 99%", repeating this for
each design task to identify which tasks most significantly affected project duration due to
rework of that task. The results are shown in Figure 3-4. The code used to generate this
analysis automatically is provided in Appendix 7.
"Probabilities were reduced by 99% to achieve a near elimination of the rework probabilities without affectingwhether tasks are performed in series or in parallel. Reducing rework probabilities to zero would cause thealgorithm to permit all tasks to be done in parallel.
33
Baseline Baseline Percent Runs perRow Sensitivity Mean Std Dev Reduction Simulation
82.6 10.9 99% 3000New New Std % Mean % Std Dev
Task Mean Dev Change ChangeVTS 82.9 11.0 0.6% 1.5%
Determine System Recuirements 82.6 10.9 0.2% 1.0%
Thermal Analysis 81.5 10.0 -1.1% -8.2%Build Bench Skeleton 82.2 10.9 -0.2% 0.3%
Prelimina Packain 81.0 9.5 -1.7% -12.0%
Define EICD 82.2 10.9 -0.2% 0.4%
Integrate Bench Components 82.5 10.8 0.1% -0.2%
Bench Test Subsystem 82.6 10.6 0.2% -2.2%Code Software 80.2 9.7 -2.7% -10.2%
Build Mule 82.7 11.1 0.3% 2.3%Integrate Mule Software 82.7 10.7 0.3% -1.5%
Mule Testing 81.8 10.3 -0.8% -5.2%Detailed Packaging 81.3 9.7 -1.3% -10.3%
Alpha Hardware/Software Release 82.4 10.9 0.0% 0.5%Build Alpha 82.9 11.1 0.6% 2.5%
Alpha Testing 80.6 9.4 -2.2% -13.6%
Figure 3-4 Baseline Process: Row Sensitivity Analysis 2
As highlighted in Figure 3-4, the top three tasks that most significantly impact project
duration and variability due to rework are:
" Define Control Strategy* Define Mechanization
* Component Selection.
After identifying which design task rework probabilities have the greatest impact on project
duration, one must then determine how, in reality, those rework probabilities might be
reduced. For example, the likelihood of having to rework the Component Selection task (i.e.,
the likelihood of having to use different components than originally intended) might be
12 Note that some of the simulated project means are actually higher than the baseline process. Tasks with lowrework probabilities and/or low rework impacts will not appreciably reduce the project mean duration orstandard deviation upon reducing the rework probabilities. For these tasks, noise in the project simulationresulting from running a finite number of simulations may result in slightly higher project mean duration or
34
reduced by purchasing higher quality components or by purchasing components that
somewhat exceed the design requirements. While this approach would not come at zero cost,
the potential benefit in terms of reduced project duration and variability could be weighed
against the cost to determine whether this strategy would be effective.
Matrix Sensitivity
In addition to row-sensitivity analyses, one may be interested in understanding which
individual design task interactions have the greatest impact on project duration and variability.
If one or two interactions (rework probabilities) had high leverage, strategies could be
developed to minimize those specific rework probabilities. To gain an appreciation of
whether an individual interaction was an area of high leverage, I performed a Matrix
Sensitivity analysis. This analysis is similar to the row and column sensitivity analyses except
that I reduced each rework probability independently (again by 99%) and calculated the
resultant percent reduction in project duration. However, in this example, no individual
rework probability had a significant impact on project duration. Results of the Matrix
Sensitivity analysis are provided for illustration in Appendix 3.
3.2. Modified Process Analysis
Description of Modification
After identifying the design tasks whose rework probabilities had the greatest impact on
project duration, I worked with the engineering manager to determine possible design process
changes. One option identified for process change was to modify the Define Control Strategy
task. To reduce the likelihood of reworking the Define Control Strategy task late in the
standard deviation. For 3000 runs, the noise on mean duration is less than about 1% and the noise on standarddeviation is less than about 3%.
35
process, we inserted a new task further upstream in the process. This new task, termed High
Level Control Strategy, is a preliminary version of the Define Control Strategy task. While
not all of the details are yet available to develop the control strategy fully at this stage, rough
outlines of the control strategy can be developed and tested analytically. Then, the details of
the control strategy can be refined later. Thus, the design task Define Control Strategy was
renamed Refine Control Strategy in the modified process. The task Refine Control Strategy
(not the task High Level Control Strategy) would be the task potentially requiring rework after
downstream tasks such as Alpha Testing.
After inserting the High Level Control Strategy task, the engineering manager estimated the
interdependence, rework probabilities, rework impacts, and task overlap values associated
with it (see Appendix 6). Additionally, the manager estimated the likely reduction in the
probability of reworking the downstream Refine Control Strategy task resulting from the
additional upfront work done in the High Level Control Strategy task. Finally, changes were
made to the estimated task durations. The new design task was estimated to require four
weeks. However, the additional time spent up front on the process was expected to reduce the
Refine Control Strategy design task by only two weeks (see Appendix 5). The estimated four
weeks required for High Level Control Strategy is not completely offset by the estimated two-
week reduction in Refine Control Strategy since there is some task redundancy. Thus, any net
savings from the use of a High Level Control Strategy task would have to come from a
reduction in rework, which can be estimated by simulation of the project.
36
Note that this estimation does not provide any sensitivity to the amount of time spent
developing the High Level Control Strategy. Potentially, one could make multiple
estimations of the extra time spent on this task (e.g., 2 weeks, 4 weeks, 6 weeks) and multiple
corresponding estimations of the reduction in rework probability to determine an optimal
level of upfront effort.
Simulation Results
I simulated the modified process with 5000 project runs to compare the distribution of project
completion times. As illustrated in Figure 3-5, simulation results indicate that the modified
process has a slightly lower mean (82.5 weeks vs. 82.6 weeks, -0.1 %) and a lower standard
deviation (9.6 weeks vs. 11.1 weeks, -13.5%). However, even with 5000 simulations, the 0.1-
week difference in mean project completion times is not statistically significant (the 95%
confidence interval for the difference is [-0.3, 0.5]1". Moreover, the 0.1 week difference is not
practically significant.
13 For this and future comparison of simulated project means, I use the following formula for calculation of the
2 2
l00(l-x) confidence interval (Hogg and Ledolter, 1992): x - y ± z(a / 2) - + ' , where x = baseline1 2
process mean, y = modified process mean, sx = baseline process sample variance, sy2 = modified process
sample variance, z = standard normal distribution function, nI = number of baseline process runs per simulation,
n2 = number of modified process runs per simulation.
37
Baseline vs. Modified Process100 -
0S90-80 8070-
260- Baseline Mean: 82.6 weeks250 Baseline Std Dev: 11.1 weeks
0 40-40 ' New Process Mean: 82.5 weeks
> 30 - New Process Std Dev: 9.620 -10 -
E 0
0 60 70 80 90 100 110 120
Project Completion Time (Weeks)- - New Process - Baseline Process
Figure 3-5 Cumulative Distribution Function of Baseline and Modified Processes
Although the modified process does not result in a significant reduction in process mean, the
process does result in reduced variability in the project duration. Reduced variability means
that project completion time is essentially more predictable, thereby reducing risk. In some
instances, it may even be acceptable to permit slightly higher average project duration if it
reduces project variability and therefore the risk that the project will take considerably longer
than anticipated. In this case, the reduction in project standard deviation from 11.1 weeks to
9.6 weeks is statistically significant at p<0.01 (the 99% confidence interval for al 2/ C22 = [1.3,
1.1], which does not contain one as a possible ratio of variances) 14. Given that the modified
14 To determine whether the reduction in variation was statistically significant, I employed the F-test (Hogg andLedolter, 1992). The 100(1-c) percent confidence interval for the ratio of variances (CF2/ 022) is given by the
38
process results in a comparable mean project duration but reduced variability, it may be
desirable to use the modified process. However, one must first take into consideration the
inherent inaccuracy of the estimated rework probabilities themselves, which is discussed in
the next section.
Data Accuracy Considerations
Although the model accounts for variability in estimated task time duration (via modeling
each task duration as a distribution -- see Section 2.1), it does not account for the error
inherent in the estimated rework probabilities. As discussed in Section 2.4, rework
probabilities are highly subjective and are subject to considerable error. For example, when
iterating on estimated rework probabilities, one probability was modified from 80% to 20%.
Additionally, many of the estimated rework probabilities were between 5% and 20%. With
this new product and process, an estimated rework probability of 10% could easily have a true
rework probability of 5% or 20%. Thus, it is conceivable that rework probabilities may be
50% lower, or 100%-200% greater, than estimated. Unfortunately, whether the modified
process is beneficial depends on whether the true rework probabilities are higher or lower
than the estimated rework probabilities.
1 2 s2
following formula: L , F(a /2; r,,r 2 ) , where rj = ni - 1 (n, = number of baselineF(a / 2; r2, r,) S2 s 2
process runs per simulation), r2 = n2 - 1 (n2 = number of modified process runs per simulation), F = F-
distribution function, s2 = sample variance of baseline process, S2 = sample variance of modified process. The
F-test is typically reserved for comparing variances of normal distributions (Hogg and Ledolter, 1992). Notethat the project duration distributions being compared here are not normal (they are skewed to the rightsomewhat). However, given the large number of runs per simulation (1000-5000) (resulting in an extremely lowprobability that the variances are equal per the F-test) and the modest degree of skew-ness of the distributions
39
To explore the impact of rework probability uncertainty, I simulated the baseline and
modified processes with both a 50% reduction and a 100% increase in all estimated rework
probabilities. Figure 3-6 compares the baseline and modified processes with all rework
probabilities reduced by 50%. For comparison, Figure 3-6 also illustrates the baseline process
with the originally estimated rework probabilities. In this case, although the variability of the
modified process is lower, the mean project completion time is higher. Using 1000 runs per
simulation, the project standard deviation is 5.4 weeks for the baseline process and 5.0 weeks
for the modified process, a 7.4% reduction (98% confidence interval for a(1/ Y2 = [1.03,
1.32]). The mean project duration is 73.8 weeks for the baseline process and 74.9 weeks for
the modified process, a 1.5% increase (99% confidence interval for mean difference is [0.5,
1.7]).
These results makes sense considering that the additional time spent on the High Level
Control Strategy task is not fully offset by a reduction in time of the Refine Control Strategy
task. The lower the rework probabilities, the less opportunity there is to make up for time
added early in the process by reductions in rework later in the process. While in some
instances an increased mean project duration may be an acceptable tradeoff for reduced
variability and risk, inspection of Figure 3-6 reveals that the reduction in variation in this
example comes at the expense of a higher project completion time except for the extreme tail
of the cumulative distribution function. Thus, little benefit in terms of reduced risk would
likely be achieved under these circumstances with the modified process.
40
being compared, the F-test is considered to provide reasonable evidence that the variances of the distributions areindeed different.
Baseline vs. Modified Process0.5 x Rework Probabilities
* 100 -
S80 -I160-
040-
20 -
0E 60 65 70 75 80 85 90
Project Completion Time (Weeks)-0.5 x Rework Prob. (Baseline Process)- - -0.5 x Rework Prob. (Modified Process)---- Original Prob. (Baseline Process)
Figure 3-6 50% Reduction in Rework Probabilities
On the other hand, Figure 3-7 compares the baseline and modified processes when the rework
probabilities are increased by a factor of two. For comparison, Figure 3-7 also illustrates the
baseline process with the originally estimated rework probabilities. Now, both the simulated
project mean duration and variance were lower than in the baseline process. Using 1000 runs
per simulation, the simulated project mean duration is 119.6 weeks for the baseline process
and 114.4 weeks for the modified process, a 4.3% reduction (99% confidence interval for
mean difference is [2.1, 8.3]). The standard deviation of the modified process is 29.5 weeks
for the baseline process and 24.7 weeks for the modified process, a 16.3% reduction (99%
confidence interval for a / C22 = [1.2, 1.7]). Again, it is logical that the benefits from
changing the design process are greater when the rework probabilities are higher. Higher
rework probabilities mean that more rework will have to be performed, resulting in greater
opportunity for improvement by doing up-front work that could reduce the rework probability
downstream.
41
Figure 3-7 100 % Increase in Rework Probabilities
Table 3-1 summarizes the impact of rework probability uncertainty on project mean and
standard deviation. In all cases, project standard deviation (variability) is lower with the
modified process. However, mean project duration is significantly lower only for the scenario
where rework probabilities are 100% greater than the estimated probabilities.
Project % Mean Project % StandardMean Change Std. Dev. Deviation
Scenario Process (weeks) (baseline & (weeks) Changemodified) (baseline &
modified)Original rework Baseline 82.6 11.1probabilities Modified 82.5 ~0.1%11 9.6 -13.5%
50% reduction in Baseline 73.8 5.4all rework +1.5% -7.4%probabilities Modified 74.9 5.0
100% increase in Baseline 119.6 29.5all rework -4.3% -16.3%
prbailtis Modified 114.4 24.7probabilities
Table 3-1 Impact of Rework Probability Uncertainty
1 This change is not statistically significant. All other changes are statistically significant. See text forconfidence intervals.
42
Baseline vs. Modified Process2 x Rework Probabilities
100 --.
80 -
60 -cc
2 40 -20-20 1--
cu 0
60 80 100 120 140 160 180 200EProject Completion Time (Weeks)
-2 x Rework Prob. (Baseline Process)- - 2 x Rework Prob. (New Process)- -Original Prob. (Baseline Process)
3.3. Process Recommendations
As seen in section 3.2, the evaluation of whether the modified process is better is different
depending whether actual rework probabilities are higher or lower than the estimated
probabilities. Given the sensitivity of the analysis to the accuracy of estimated rework
probabilities, it is impossible to conclude that the modified process is definitively better than
the baseline process in all circumstances. However, working with the engineering manager
on this issue, we concluded the following based on the analysis in Section 3.2:
" If the system being designed is markedly different from previously designed systems,
the rework probabilities are likely to be high. In this case, it may be advantageous to
include the High Level Control Strategy design task. Under these circumstances, the
additional effort associated with performing the High Level Control Strategy design
task is more likely to be recovered in the long run with a reduction in rework.
" In contrast, if the system being designed is quite similar to previously designed
systems, the rework probabilities should be lower. In this case, the added time
necessary to perform the High Level Control Strategy design task may never be
recovered with a reduction in rework. Under these circumstances, it would be
desirable to wait until all information is available to define the control strategy.
4. Model Limitations
In this section, I discuss limitations the DSM model as applied in this paper and discuss
strategies for increasing the benefits of DSM.
43
4.1. Stable Baseline Process
One drawback of DSM modeling is that it assumes the process being analyzed is relatively
stable and that the interactions among design tasks are known (Smith and Eppinger, 1997b).
Thus, applying the model to new or evolving processes can be problematic. On the spectrum
of brand-new to highly evolved processes, the application described here was closer to the
brand-new end. Unfortunately, while greater potential for improvement lies in uncertain and
evolving processes, these processes are the least suitable for application of DSM. Likewise,
one might argue that a very well-known and highly stable design process has less room for
improvement from DSM modeling even though it is more suitable for modeling due to better
data quality.
4.2. Imbalance of Model Resolution and Data Accuracy
The most significant drawback of an analytical model attempting to characterize a design
process is a lack of accurate data. Model resolution that is inconsistent with the level of data
accuracy will diminish the effectiveness of DSM.
When an analytical model does not predict outcomes that are consistent with reality, a
common response is to increase model resolution. By considering more factors, it is
sometimes possible to improve the accuracy of the prediction. However, this approach will
be ineffective if the accuracy of the prediction is constrained not by model resolution, but by
the data being input to the model (that is, "garbage in, garbage out"). As seen in section 3.2,
data inaccuracy can severely limit the ability of a DSM project model to "predict" project
completion time, which also limits its ability to compare one process to another.
Unfortunately, the predicted project completion times are highly dependent on the rework
44
probabilities, which cannot be estimated with great accuracy. With the DSM model described
in this paper, I consider the constraint, or "bottleneck" (Goldratt, 1992), to be data accuracy
rather than model resolution. This situation is illustrated in Figure 4-1, which conveys that an
increase in the Accuracy of Data will increase the Usefulness of DSM; however, an increase in
Model Resolution will have little or no impact on Usefulness of DSM if usefulness is already
constrained by the Accuracy of Data.
Accuracy ofData
. ... Usefulness ofDSM Model A
Resolution
Process Improvement
Figure 4-1 Usefulness Constrained by Data Accuracy16
Unfortunately, an increase in model resolution can be worse than ineffective, it can be
counterproductive. Shiba, Graham, and Walden (1993) discuss the difference between
management push and employee pull. Management could push the use of DSM onto
employees, forcing them to use it and to therefore to improve their processes. However, as
discussed by Keating et al (1999), management push will likely be ineffective in the long run
if not accompanied by employee pull. Employee pull comes in the form of a willingness of
employees to engage in an activity because of their internal commitment and perception of the
16 Uses representation consistent with the system dynamics method (Forrester, 1961). A "+" at an arrowheadindicates that increase (decrease) in independent variable (at the tail of the arrow) will cause a subsequentincrease (decrease) in the dependent variable (at the arrowhead), ceteris paribus. In contrast, a "-" at an
45
benefits of that activity. As illustrated in Figure 4-2, Management Push could initiate the use
of DSM, leading to some amount of Process Improvement, depending on the Usefulness of
DSM. When users observe Process Improvement, the Perceived Benefits of DSM would
increase, resulting in an increase in the Perceived Benefit/Cost Ratio of DSM. As illustrated
in Figure 4-2, Employee Participation, through estimating data and assisting in process
modeling, can affect the Perceived Benefits of DSM. An increase in the Perceived
Benefit/Cost Ratio of DSM could stimulate additional Use of DSM and further Process
Improvement, creating a reinforcing cycle of improvement from Employee Pull (loop RI in
Figure 4-2)'.
However, Use of DSM also increases the Perceived Cost of DSM, since implementation
requires effort such as training in the modeling method, data estimation, and analysis (see
Section 3). Since an increase in the Perceived Cost of DSM will lower the Perceived
Benefit/Cost Ratio of DSM, we also observe a balancing effect of potential Employee
Resistance (loop B I in Figure 4-2)18. Whether the reinforcing loop of Employee Pull or the
balancing loop of Employee Resistance dominates the dynamics of the implementation effort
depends on the perception of the relative benefits of DSM to the costs.
arrowhead indicates that increase (decrease) in independent variable (at the tail of the arrow) will cause asubsequent decrease (increase) in the dependent variable (at the arrowhead), ceteris paribus.17 A reinforcing loop is one in which an increase (decrease) of a variable within the loop ultimately causes asubsequent additional increase (decrease) of that variable, ceteris paribus. Reinforcing loops are commonlyknown as vicious (or virtuous) cycles, depending on whether the spiral is beneficial or detrimental.18 A balancing loop is one in which an increase (decrease) of a variable within the loop ultimately causes asubsequent decrease (increase) in that variable, ceteris paribus.
46
ManagementPush Accuracy of
DataDelay Usefulness of
Use of DSM + DSM
+ Process Im rovementEmployee Model
+ Resistance Eplyee Resolution
Perceived Cost Perceived +of DSM Benefit/Cost Ratio Perceived
of DSM Benefits of DSM
EmployeeParticipation
Figure 4-2 Employee Pull and Resistance
Now consider how these dynamics will be affected by an increase in Model Resolution, as
illustrated in Figure 4-3. As previously described, an increase in Model Resolution is not
expected to increase the Usefulness of DSM since the model is constrained by the Accuracy of
Data (see sections 3.2 and 3.3). However, increasing the Model Resolution causes an increase
in Model Complexity. Additional Model Complexity requires additional time to understand
the model, train others in its use, gather input data, and perform analysis. All of these effects
increase the required Effort to Implement DSM. The Perceived Cost of DSM increases,
causing a decrease in the Perceived Benefit/Cost Ratio of DSM, thereby strengthening the
Employee Resistance loop and weakening the Employee Pull loop. Thus, we see that an
increase in Model Resolution could ultimately decrease, rather than increase, the benefits of
Process Improvement that could potentially be reaped by use of such a tool.
47
ManagementPush Accuracy of
Usefulness of ; DataK DSMUse of DSM + Ae00 +S
C + Process Im rovement
Employee Employee Model
+ Resistance Pull + Resolution
Perceived PerceivedPerceived Cost Benefit/Cost Ratio Benefits of DSM
of DSM N
EmployeeParticipation
ModelComplexity
Figure 4-3 Effect of Model Resolution on Employee Pull
From this discussion, one should understand that optimum benefit from any a DSM model, or
any analytical tool, would come from an appropriate "balance" of Model Resolution and
Accuracy of Data. In this case, I consider the Model Resolution to be out of balance with the
Accuracy of Data. As illustrated in Figure 4-1, an increase in Accuracy of Data could
improve the benefits of the model. Unfortunately, increasing the Accuracy of Data in this
case would be difficult, if not impossible. Thus, only one option remains to obtain the
balance necessary for maximum long run Process Improvement: decrease the Model
Resolution.
One means of reducing the resolution of the model is to cut the modeling process off early.
Simply outlining the steps of the design process and documenting those steps for future use
could yield benefits at little cost. Taking the time to define the process will force engineers
and managers to think about the big picture when it is easy to become mired in the details.
Beyond outlining the process steps, identifying which design tasks are interdependent (an
48
early step in the modeling process) could also result in process insights at little cost. As the
engineering manager in this example noted early in the modeling process (before simulations
had been run): "This [process] is having the effect you intended, but not in the way you
anticipated," noting that the process of identifying the interdependent activities forced
thinking about the long term consequences of decisions.
5. Conclusion
In this paper, I extend the DSM project model developed by Browning (1998) to include
partial overlapping of design activities, permitting concurrent engineering to be modeled.
Additionally, I extend the DSM method to include a learning curve to permit rework
probabilities to be dynamic, rather than static. Applying the DSM model to the thermal
management system of an electric vehicle, I show how sensitivity analysis can be used to
identify drivers of project duration and variability, which can then be examined for potential
process improvements. Upon identifying a modified process, I use the DSM model to
compare the modified process with the existing process. Finally, I show that data accuracy is
a severe limitation to the effective implementation of DSM to a new product such as an
electric vehicle. I show how having a model resolution that is inconsistent with the accuracy
of data input to the model can increase employee resistance to using such modeling
techniques, limiting model effectiveness. As a solution to the imbalance of model resolution
and data accuracy, I explain how simplification of the DSM model could actually result in
improved long-term results.
6. References
Black, T. A. (1990a). "A Systems Design Approach to Automotive Brake Design"Unpublished Massachusetts Institute of Technology S.M. Thesis. Cambridge, MA.
49
Black, T. A., C. H. Fine and E. Sachs (1990b). "A Method for Systems Design UsingPrecedence Relationships: An Application to Automotive Brake Systems,"Massachusetts Institute of Technology working paper #3208, Cambridge, MA.
Browning, T. R. (1998). "Modeling and Analyzing Cost, Schedule, and Performance inComplex System Product Development," Massachusetts Institute of Technology PhDThesis. Cambridge, MA.
Carrascosa, M., S. D. Eppinger and D. E. Whitney (1998). "Using the Design StructureMatrix to Estimate Product Development Time," Proceedings of the ASME DesignEngineering Technical Conferences (Design Automation Conference), Atlanta, GA,Sept. 13-16.
Eppinger, S. D., D. E. Whitney, R. P. Smith and D. A. Gebala (1994). "A Model-BasedMethod for Organizing Tasks in Product Development," Research in EngineeringDesign 6, 1-13.
Ford, D. N. and J. D. Sterman (1998a). "Dynamic Modeling of Product DevelopmentProcesses," System Dynamics Review 14(1), 31-68.
Ford, D. N. and J. D. Sterman (1998b). "Expert knowledge elicitation to improve formal andmental models," System Dynamics Review 14(4), 309-340.
Forrester, J. W. (1961). Industrial Dynamics. Cambridge, MA: The MIT Press.
Goldratt, E. M. (1992). The Goal: A Process of Ongoing Improvement North River Press, Inc.
Hogg, R.V. and J. Ledolter (1992). Applied Statistics for Engineers and Physical ScientistsMacmillan Publishing Company, New York.
Keating, E. K., R. Oliva, N. Repenning, S. Rockart and J. Sterman (1999). "Overcoming theImprovement Paradox," European Management Journal 17(2): 120-134.
Osborne (1993). Product development cycle time characterization through modeling ofprocess iteration. Unpublished Master's Thesis, Massachusetts Institute ofTechnology, Cambridge, MA.
Smith, R. P. and S. D. Eppinger (1997a). "A Predictive Model of Sequential Iteration inEngineering Design," Management Science 43(8): 1104-1120.
Smith, R. P. and S. D. Eppinger (1997b). "Identifying Controlling Features of EngineeringDesign Iteration," Management Science 43(3): 276-293.
50
Smith, R. P. and S. D. Eppinger (1998). "Deciding Between Sequential and Parallel Tasks inEngineering Design," Concurrent Engineering: Research and Applications 6(1): 15-25.
Steward, D. V. (1981 a). "The Design Structure System: A Method for Managing the Designof Complex System," IEEE Transactions on Engineering Management 28(3): 71-74.
Steward, D. V. (1981 b). Systems Analysis and Management: Structure, Strategy, and DesignPetrocelli Books, Inc., New York.
Teplitz, C. (1991). The Learning Curve Deskbook: A Reference Guide to Theory,Calculations, and Applications Quorum Books, New York.
Ulrich, K. and S. Eppinger (2000). Product Design and Development, 2nd ed., McGraw-HillInc., New York, NY.
Wheelwright, S. and K. Clark (1995). Leading Product Development, The Free Press, NewYork, NY.
51
Appendix 1: Design Task Descriptions
0 Define Vehicle Technical Specifications (VTS)Vehicle Technical Specifications are high level requirements that the vehicle must bedesigned to satisfy. Examples of VTS are operating temperature range (of particularimportance for electric vehicle design), acceleration, driving range (i.e., miles betweenrecharging - also of particular importance to electric vehicles), and cabin comfort goals.
* Determine Thermal Management System RequirementsOnce the VTS is sufficiently identified, it must be determined what the requirements are ofthe thermal management system that will ensure the VTS is satisfied. For example, the heatremoval or generation capacity of the thermal management system depends on higher levelVTS such as the ambient temperature range the vehicle is expected to operate in as well as thecabin comfort goals for those temperatures.
* Define MechanizationDefining the mechanization of the thermal management system means determining thearchitecture of the system that will satisfy the system requirements. The schematicarrangement of the system, including the design of coolant loops, valve placement, etc. isdetermined in this step.
0 Thermal AnalysisThe thermal analysis design task entails computer modeling of the thermal managementsystem. Coding is used to simulate the mechanization developed in the previous design task.Knowing the heat loads, required component temperatures, fluid flow rates, etc., the ability ofthe thermal management system to control temperatures within the required range is testedanalytically.
0 Build Bench SkeletonIn addition to analytical testing of the thermal management system, physical testing of thesystem must also be done. Preliminary tests of the system, such as tests that might prove outa mechanization not used in previous designs, are done through testing of select sections ofthe thermal management system built on a bench. Before the precise components that will beintegrated into the system are known, a rough mechanization can be built using genericcomponents to test design concepts. The "Build Bench Skeleton" task refers to the buildingof this rough mechanization.
0 Preliminary PackagingPackaging refers to the physical layout of components within the space envelope of thevehicle. Before specific components are selected for inclusion in the thermal managementsystem, an idea of the size of the components is determined based on known componentrequirements such as motor power and total heat load. These rough component sizes are usedto develop a preliminary plan for packaging the components within the allotted spaceenvelope.
52
0 Component SelectionIn some design processes, this step would instead be "component design." However, for theprocess being considered (thermal management system for electric vehicles), components areinstead outsourced due to low anticipated manufacturing volumes. This step refers to theacquisition from a supplier of components that will ensure the system requirements are met.
0 Define Electrical Interface Control Document (EICD)Each electrical component in the vehicle has certain inputs and outputs that must be known sothat they may be integrated with the rest of the system. The interfacing of all the electricalcomponents is a complicated task that requires a comprehensive summary of theinterconnections. The EICD is the document that captures all of these interfaces.
0 Integrate Bench ComponentsOnce the components have been selected, the specific components can then take the place ofmore generic components that have been used for preliminary bench testing. This step refersto the integration of the specific components into the bench.
0 Define Control StrategyThe thermal management system is essentially a feedback control system in the vehicle.When temperatures reach predefined setpoints, the system must operate in the appropriatemanner to remove or add heat to the components (or cabin) to maintain them within therequired temperature ranges. The logic scheme used to control the system is referred as thecontrol strategy. This step is the definition of that control strategy.
0 Bench Test Subsystem
Once the components have been selected and integrated and the control strategy defined,more comprehensive testing may be performed on the thermal management system. Thistesting provides preliminary indication of performance of the system before it is actuallyintegrated into a vehicle prototype.
0 Code SoftwareOnce the control strategy has been identified, the actual software code that will be used tocontrol the thermal management system must be written. This step refers to the writing ofthat software code.
0 Build MuleA "Mule" is simply the first vehicle prototype. This step refers to the building of the mulevehicle.
0 Integrate Mule SoftwareAs can be seen in Chapter 5, this step is a very simple step of negligible duration. It simplyrefers to the task of incorporating the written software code that will control the thermalmanagement system into the mule vehicle.
53
0 Mule Testing
Mule testing refers to the various tests performed using the early prototype (Muleprototype) vehicle that provide indication of how the system and components will workwhen integrated with the vehicle.
* Detailed Packaging
Upon identifying the precise mechanization and components to be included in the vehicle,more detailed computer modeling of the layout of components can commence. This stepis the refinement of the "Preliminary Packaging" step described earlier.
0 Alpha Hardware/Software ReleaseBefore building the Alpha prototype vehicle, appropriate steps must be taken to ensurethat the components going into this prototype meet the requirements for this testing stage.Various signatures are required, for example, from finance, purchasing, engineering,marketing, and program management so agreement from the design team is documented.Once the appropriate signatures have been obtained, the component is "released" forAlpha prototype. This step refers to that release process.
0 Build AlphaAn Alpha prototype is the next stage of vehicle prototype. This step refers to the buildingof the Alpha prototype vehicle.
0 Alpha TestingAlpha testing refers to the various tests performed using the Alpha prototype vehicle thatprovide indication of how the system and components will work when integrated with thevehicle.
54
Appendix 2: Aggregation of Design Tasks
Electric Vehicle Thermal Subsystem (Modified)
I"lei"T .r'5WI'I L''IEstablish preliminary VTSDetermine Thermal System Functional RequirementsDetermine ETS cooling requirementDetermine Battery Pack thermal requirementsDetermine Occupancy Comfort RequirementsDefine Thermal SSTSExplore Alternate Therma IMechanizationsPeer Review of Proposed Thermal MechanizationsThermal model analysis of the mechanization(s)Final Review of Proposed Thermal Mech.Define Preliminary PackagingDefine thermal CTSEstimate Thermal Power RequirementsDetermine Electrical /O for ThermalBuild thermal bench (Stage I, non prod. intent)
Stage I Thermal Bench Testing (non priDetermine Thermal Energy EfficiencyDefine thermal component EICDSelect Thermal ComponentsProvide Mass EstimationDefine thermal control interface with baDefine Thermal Control StrategyBuild/Modify Stage I Thermal BenchStage I Bench Testing/Development (pand hardware)Finalize Thermal System Requirementstrategy)Control Software release (part of controDefine Thermal Packaging
Thermal Design ReleasePurchase Thermal Components for AlpBuild Alpha VehicleAlpha Vehicle testing for control calibra
Alpha Vehicle testing for design validatUpdate Requirements (VTS req'mts freAlpha Exit ReviewBuild Beta VehicleBeta Integration / Test / Design ValBeta Exit ReviewThermal System Production ReleaseGamma Integration / Product ValGamma Exit Review
,d. Intent component
First Cut Aagregate VersionEstablish preliminary VTS (Statement ofRequirements?)betermine Functional RequirementsDetermine Cooling/Heating RequirementsDefine SSTSDefine MechanizationThermal AnalysisPreliminary PackagingDefine CTSEstimate Power RequirementsDetermine Electrical I/O
tage I Bench TestingEstimate Energy EfficiencyDefine EICDSelect Thermal ComponentsEstimate MassDefine Control Strategy/Interfaces
s)........ r.
Control Software ReleaseDefine PackagingMockup
ttery controller Release/Purchase Alpha ComponentsAlpha testingVTS freeze
rod. intent softwareBeta Testing
s (hardware and controlProduction Release
lier) Gamma Testing
ha Build
tion
ioneze)
Establish Preliminary VTS------- *Determine System Requirements
Define MechanizationThermal AnalysisPreliminary PackagingCTS/Component SelectionBench Testing/Mule TestingDefine EICDVTs FreezeDefine Control Strategy/InterfaceiCode SoftwareComplete PackagingAlpha Hardware/Software ReleasAlpha TestingBeta TestingProduction Release
Gamma Testing.
Determine System RequirementsDefine Mechanization
Thermal AnalysisBuild Bench Skeleton
Preliminary PackagingComponent SelectionDefine EICDIntegrate Bench Components
3 Define Control StrategyBench Test SubsystemCode Software
e Build MuleIntegrate Mule SotweareMule Testing
etailed Packaging
Alpha Hardware/Software ReleaseBuild AlphaAlpha Testing
Bst TenPo' RtonRlea&"Gamma Tasting
Second Cut Aggregate Version Final Aggregation
(11(11
Appendix 3: Matrix Sensitivity Results
Baseline Project Mean I831 Runs per simulation 5000Percent Probability Reduction 99%
1 2 3 4 5 6 7 8 9 10 11 1213 14 15 17 18 19VTS 1 _ -0.1% _ _ _ 0.3% _
Determine System Requirements 2 0.0% -0.3% -0.4% -0.5% -0.1% -03% 0.1%Define Mechanization 3 -0.5% -1.4% . -0.9% -1.7% _-16% -1.0% -0.9%
Thermal Ana sis 4 -1 2 ___
Build Bench Skeleton 5 0.3% 0.0% -0.7% -0.7%Preliminary Packaging 6 -1.3% -0.2%-0.5%oComponent Selection 7 -0.3% 0.0% _ -0.1% -0.6% -03% -2.4% -1.0% -0.8%
Define EICD 8 0.0% -. 1%1-0.3% 10.1Integrate Bench Components 9 _ -0.2% _ -0.2% 1 1
Define Control Strategy 10 -2.9% 0.0% -).2%1 -2.1% -1.7%Bench Test Subsystem 111 -0.6% 0.0% 0.1%
Code Software 121 0.0% -2.4%Build Mule 131 _ 0.1%
Integrate Mule Software 141 0.0% 0.1%Mule Testing 151 _-.1 0.2%
Detailed Packaging 16 -0.3% -0.5% -0.8%1Alpha Hardware/Software Release 17 -0.1% 0.0% 0.0% 0.1%
Build Alpha 18 0.0%,10-Alpha Testing 19, -0.5% 19-1.3% 0.1% .
01
Appendix 4: Baseline Process Task Durations, Learning Curve
Task Task Duration (Weeks) *
BCV MLV WCV LC
1 VTS 6 9 12 0.75
2 Determine System Requirements 12 18 24 0.5
3 Define Mechanization 4 6 8 0.3
4 Thermal Analysis 2 4 6 0
5 Build Bench Skeleton 6 12 18 0.75
6 Preliminary Packaging 12 16 24 0
7 Component Selection 18 24 30 0.5
8 Define EICD 27 36 52 0.3
9 Integrate Bench Components 4 6 8 010 Define Control Strategy 10 16 20 0.2
11 Bench Test Subsystem 6 8 10 0
12 Code Software 8 14 20 0.1
13 Build Mule 2 4 6 0
14 Integrate Mule Software 0.2 0.2 0.2 0
15 Mule Testing 8 12 16 0
16 Detailed Packaging 0.2 0.2 0.2 0
17 Alpha Hardware/Software Release 0.5 1 2 0
18 Build Alpha 2 4 6 019 Alpha Testing 8 12 16 0
* BCV-Best Case Value
MLV-Most Likely ValueWCV-Worst Case Value
57
Appendix 5: Modified Process Task Durations, Learning Curve
Task Task Duration (Weeks) *
BCV MLV WCV LC1 VTS 6 9 12 0.752 Determine System Requirements 12 18 24 0.53 Define Mechanization 4 6 8 0.34 Thermal Analysis 2 4 6 05 Build Bench Skeleton 6 12 18 0.756 Preliminary Packaging 12 16 24 07 Component Selection 18 24 30 0.58 High Level Control Strategy Development 4 4 4 09 Define EICD 27 36 52 0.3
10 Integrate Bench Components 4 6 8 011 Refine Control Strategy 6 14 20 0.2
12 Bench Test Subsystem 6 8 10 013 Code Software 8 14 20 0.114 Build Mule 2 4 6 015 Integrate Mule Software 0.2 0.2 0.2 016 Mule Testing 8 12 16 017 Detailed Packaging 0.2 0.2 0.2 018 Alpha Hardware/Software Release 0.5 1 2 019 Build Alpha 2 4 6 020 AlphaTesting 8 12 16 0
* BCV-Best Case ValueMLV-Most Likely ValueWCV-Worst Case Value
58
Appendix 6: Modified Process Input Matrices
Highlighted cells indicate values that were added or modified
Rework ProbabilityRework Probability 1 2 3 4 5 6 7 8 9 10 1i 12 13 14 15 16 17 18 19 20
VTS 1 0.05 0.10 1Determine System 2 0.25 0.50 0.75 0.10 0.40 0.05 0.10 0.05
Define Mechanization 3 0.20 0.50 0.10 0.10 0.10 0.05 0.10 0.05Thermal Analysis 4 1.00
Build Bench Skeleton 5 0.10 0.10 0.10 0.10Preliminary Packaging 8, 1.00 0.05 0.05Component Selection 7 10.10 0.50 0.05 0.05 0.20 0.30 0.10 0.05
High Level Control Strategy 8 0.80 0.40 0.20Define EICD 9 0.20 0.10 0.10 0.10
Integrate Bench Components 10 1.00 1.00Refine Control Strategy 11 0.80 0.20 1.00 0.20 0.40 0.10Bench Test Subsystem 12 1.00 0.05 1.00 1.00 1
Code Software 13 1.00 0.25Build Mule 14 0.10
Integrate Mule Software 15 1.00 0.50Mule Testing 16- 0.50 1.00
Detailed Packaging 17 1.00 1.00 1.00Alpha HardwarelSoftware 18 1.00 1.00 .10 0.1o
Build Alpha 19 0.25Alpha Testing 20 0.50 1.00 1.00 0.10 .
Rework ImpactRework Impact 1 2 3 41 5 6 7 8 9 10 11 12 13 141 15 16 17 18 19 20
VT 1 0.05 1 0.05Determine System 2 0.20 0.10 0.10 0.05 0.10 0.05 0.10 0.05
Define Mechanization 3 0.20 0.10 0.10 0.10 0.05 1 0.05 0.10 0.05Thermal Analysis 4 0.20
Build Bench Skeleton 5 0.10 0.10 0.10 0.10Preliminary Packaging 6 0.20 0.10 0.10Component Selection 7 0.20 0.10 0.05 0.05 0.10 0.10 0.10 0.10
High Level Control Strategy 8 0.20 0.10 0.20Define EICD 9 0.05 0.05 0.05 0.05
Integrate Bench Components 10 0.10 0.10Refine Control Strate 11 0.80 0.100.10 0.10 0.05 0.05Bench Test Subsystem 12 .0.10 0.05 0.10 0.20 _ ____ I
Code Software 13 0.0! 0.25Build Mule 14 0.01
Integrate Mule Software 15 1.00 1.00Mule Testing 16 0.20 0.10
Detailed Packagin 17 0.10 0.05 0.10Alpha Hardware/Software 18 0.05 0.05 0.1__ _ .10
Build Alpha 19 0.1Alpha Testing 20 0.10 0.05 0.10 0.05.
OverlapOverlap Matrix | 1 2 31 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
VrS 1= IDetermine System Requirements 2 0.10
Define Mechanization 3 0.25Thermal Analysis 4 0.20
Build Bench Skeleton 5 0.80 0.00Preliminary Packaging 6 1.00Com onent Selection 7 7 0.20 0.50
High Level Control Strategy 8 1.00 0.50 0.60Define ElCD 9 0.00
Integrate Bench Components 10 1.00 0.50Refine Control Strategy/Interfaces 11 0.50 0.20 0.501
Bench Test Subsystem 12 1.00 0.10 1.00 0.80Code Software 13 0.501
Build Mule 14 1.00Integrate Mule Software 15, 1.00 1.00
Mule Testing 16- 1.00 1.00Detailed Packaging 17 1.00 1.00 1.00
Alpha Hardware/Software Release 18 1.00 1.00 0.33 1.00Build Alpha 19 - 1 = I I I 1 1.00
Alpha Testing 20 1.00 1.00 1 1.00 1.I 0
59
Appendix 7: Model Code
The code provided in this appendix was obtained from <http://web.mit.edu/dsm>, whichwas based on that developed by Tyson Browning (MIT PhD Thesis, 1998). The code hasbeen modified extensively from that obtained from <http://web.mit.edu/dsm>. Mostnotably, all references to variables and code pertaining to project cost have been deleted.Additionally, modules are included that permit sensitivity analysis on reworkprobabilities. This code is used in conjunction with a Microsoft@ Excel 2000spreadsheet for data input. Modifications that significantly alter the algorithm, such asinclusion of a learning curve on rework probabilities and inclusion of an overlap matrix,have been highlighted in the code.
The model can be obtained from the author.
Simulation Module
Option ExplicitOption Base 1
'all variables must be explicitly declared'all arrays begin at one (instead of zero, which is default)
Public x, y As Integer 'Used in the maximum functionPublic DSM As Variant 'A . containing the binary
'interaction matrix, the rework probabilit matrix,'the rework impact matrix, and the
Dim r As IntegerDim rMax As IntegerDim WO As SingleDim WinitO As SingleDim WN() As BooleanstepDim LCO As SingleDim t As IntegerDim deltajt As SingleDim S As SingleDim ActSO As SingleDim ActS_3pt() As SingleDim i, j, k As Integer
'run number'number of runs to do
'the work vector; activities with work remaining'the initial values for the work vector
'the "work now" vector; activites to work in current time
'the learning curve vector'time step number
'time step size'cumulative schedule duration
'duration samples for newly beginning activities'duration min, likely, max
'countersDim band-complete As Boolean 'indicates that all activities comprising a band for atime step have been foundDim CSPSamples() As Single 'dynamic array (9, r) of CSP samples resulting from runs
'(C,S,P,Pmin,Plikely,Pmax,PtechMin,PtechLikely,PtechMax)Public NumberRuns As Single 'Contains the number of runs that each simulation will doPublic ProbTemp As Variant 'Assigns probabilities to a temporary
60
'array so that they can be re-assigned at the end of each'run, since they were reduced during the simulation.
Dim ActSeq() As Integer 'DSM sequencing vector
Sub CSPModel()
Initialization'Initializes the values in the DSM matrix.'Prevents having to go through'the entire data re-entry sequence whenever any'error occurs or just after the program has been'opened.
'Selects Sensitivity Worksheet or Matrix Sensitivity Worksheet'so the user can view the results as they are generated.'Otherwise, selects SIM Results worksheet.
Sheets("SIM Results").SelectCells(1, 1).Select'Sheets("Histogram").SelectIf SensitivityOn = True Then
Worksheets("Sensitivity").SelectEnd IfIf MatrixSensitivityOn = True Then
Worksheets("MatrixSensitivity").SelectEnd If
Dim finishrun As BooleanDim MoreRuns As BooleanDim SingleRunData As Boolean
'indicates a run is complete'indicates simulation is complete
'get data for a single, sample run?
Randomize 'initialize random-number generatorr = 0 'initialize run #ReDim ActS(n)ReDim ActS_3pt(n, 3) As SingleReDim W(n)ReDim Winit(n)ReDim WN(n)ReDim LC(n)ReDim CSPSamples(9, 1)ReDim ActSeq(n)ReDim ProbTemp(n, n) 'Dimensions a probability matrix for temporary
'storage of probability values, which can then be're-assigned after each run.
'This assigns the probability values in the DSM matrix to a temporary array
61
'so that the DSM matrix can be re-initialized after each run. This is necessary'since the probabilities are reduced with each iteration during a run.For i = 1 To n
For j = 1 To nProbTemp(i, j) = DSM(i, j, 2)
Next jNext i
'Prompts the user for the number of runs to perform per simulation'if Sensitivity analysis is not being performed. If Sensitivity'Analysis is being performed, the prompt will come from the'sensitivity procedure.
If SensitivityOn = False ThenNumberRuns = InputBox(Prompt:="How many runs would you like per
simulation?")End If
MoreRuns = Truedeltat = Worksheets("SIM Input").Cells(3, 2) 'initialize time step sizer_Max = NumberRuns 'assigns max runs to the user defined numberSingleRunData = Worksheets("SIM Input").Cells(4, 2) 'collect single run data?Worksheets("SIM Results").Range("B3:B65503 ").ClearContents 'erase all prior data
For i = 1 To n
ActS_3pt(i, 1) = Worksheets("SIM Input").Cells(i + 10, 3) 'collect activity'min. duration
ActS_3pt(i, 2) = Worksheets("SIM Input").Cells(i + 10, 4) 'collect activity'most likely duration
ActS_3pt(i, 3) = Worksheets("SIM Input").Cells(i + 10, 5) 'collect activity'max. duration
Winit(i) = I 'Initializes work vector to have all work remainingLC(i) = Worksheets("SIM Input").Cells(i + 10, 6) 'initialize learning
vectorActSeq(i) = Worksheets("SIM Input").Cells(i + 10, 1) 'initialize activity
sequencingNext iApplication.DisplayStatusBar = True
Do While (r <= rMax)'This double loop re-assigns the probabilities in the DSM matrix to their'original values. This is necessary since the probabilities will be reduced'during each run with each iteration of the task.For i = 1 To n
Forj = I To n
62
DSM(i, j, 2) = ProbTemp(i, j)Next j
Next i
r = r + 1 'increment run #
Application.StatusBar =" Run # " & r & " of " & rMax & " Task Number " & TaskNumber &
of " & n'displays run # on status bar
t=l 'initialize time step #finishrun = False 'initialize run as not finishedIf (r = 1) And (SingleRunData) And (SensitivityOn = False) Then
'won't do single run data if sens analysis ison
Worksheets("Single Run Data").Range("D6:CZ16000").ClearContents'erase all prior, single run data
Worksheets("Single Run Data").Range("DA7:IV16000").ClearContentsWorksheets("Single Run Data").Range("B 10:D 16000").ClearContentsWorksheets("Single Run Data").Cells(10, 2).Value = 0 'place initial valuesWorksheets("Single Run Data").Cells(10, 3).Value = 0For i= 1 To n
Worksheets("Single Run Data").Cells(l0 + t, 3 + i).Value = Winit(i)Worksheets("Single Run Data").Cells(8, i + 3).Value = ActSeq(i)
Next iEnd If
For i = I To nW(i) = Winit(i) 'initialize W vectorActS(i) = SampleTriPDF(ActS_3pt(i, 1), ActS_3pt(i, 2), ActS_3pt(i, 3))
'samples duration for each activityNext i
For i = I To nIf (r = 1) And (Single_RunData) And (SensitivityOn = False) Then _
Worksheets("Single Run Data").Cells(7, i + 3).Value = ActS(i)ActS(i) = max(1, CInt(ActS(i) / delta t))
'convert duration to integer time steps (round off)
'Won't paste data during sensitivity analysisIf (r = 1) And (SingleRunData) And (SensitivityOn = False) Then _
Worksheets("Single Run Data").Cells(6, i + 3).Value = ActS(i)Next i
Do While (finishrun = False) 'loop for each time stepBanding 'subroutine to choose activities to work during time step
63
For i = I To nIf WN(i) = True Then 'if activity doing work this time step
W(i) = max(0, W(i) - (1 / ActS(i))) 'do work on these activities (work to docan't go below zero)
If W(i) < 0.01 Then W(i) = 0 'prevents rounding errors from'prolonging activities
End IfIf (WN(i) = True) And (W(i) = 0) Then 'if activity just finished its work, then
For j = 1 To i - 1 'loop through column above newly finished activityIf DSM(j, i, 2) > 0 Then 'if there is a probability of iteration, then
If Rnd <= DSM(j, i, 2) Then 'if this iteration is required, thenW(j) = W(j) + DSM(j, i, 3) 'Adds rework'WOj) =WOj) (DSM(J, i, 3) 1 LC(j))
If W(j) > 1 Then W(j) = 0.9 'but keep work from expanding'beyond original scope
For k = j + I To n 'loop through column below newly'reworked activity
If DSM(k, j, 2) > 0 Then 'if there is a probability of'second-order rework, then
If Rnd <= DSM(k, j, 2) Then 'if this rework is required, thenIf W(k) = 1 Then W(k) = 2 'signify that activity has not
'yet been workedW(k) = W(k) + DSM(k, j, 3) 'Add rework'W(k) = W(k) + (DSM(k, j, 3) * LC(k))
'add rework, diminished by learning curve (oldalgorithm)
If W(k) >= 2 Then W(k) = 1'if downstream activity has not been worked, no
reworkIf W(k) > I Then W(k) = 0.9 'keep work from expanding
'beyond original scopeEnd If
End IfNext k
End IfEnd If
Next jEnd If
Next ifinishrun = True 'pass is now finished...For i = 1 To n
64
to dofinishrun = FalseExit For
End IfNext i
If (r = 1) And (SingleRunData) And (SensitivityOn = False) Then'if desired, calculate single run data
For i = 1 To nWorksheets("Single Run Data").Cells(10 + t, 2).Value = tWorksheets("Single Run Data").Cells(6 + t, 105).Value = t * delta_tWorksheets("Single Run Data").Cells(10 + t, 3).Value = t * delta_tWorksheets("Single Run Data").Cells(10 + t, 3 + i).Value = W(i)If WN(i) Then Worksheets("Single Run Data").Cells(6 + t, 105 + i) = n - i +
I 'build Gantt chartNext iIf rMax = 1 Then Application.StatusBar =" Time Step: " & t 'display time
step # on status barEnd If
t = t + 1 'increment time step
Loop 'loop for time step
S = (t - 1) * deltat 'determine schedule durationCSPSamples(2, r) = S 'put run result for schedule into arrayWorksheets("SIM Results").Cells(r + 2, 2).Value = CSPSamples(2, r)
'paste results into SIM Results worksheetReDim Preserve CSPSamples(9, r + 1) 'enlarge dynamic array to hold next
run results
If r_Max = 0 ThenIf r = rMax Then MoreRuns = False 'if using max # runs, check
for itEnd If
Loop
Application.StatusBar = False 'clear status barWorksheets("SIM Results").Calculate 'recalculate worksheet nowIf SingleRunData And (SensitivityOn = False) Then Worksheets("Single Run
Data").Calculate'recalculate SRD worksheet now
Worksheets("SIM Results").SelectRange(Cells(2, 6), Cells(26, 8)).SelectSelection.ClearContents
65
' ... unless any activity has more workIf W(i) <> 0 Then
With Selection.Interior.ColorIndex = 2.Pattern = xlPatternNone
End With
'Will output a histogram if Sensitivity Analysis is not'being performed.
Worksheets("SIM Input").SelectIf SensitivityOn = False Then
Worksheets("SIM Results").SelectWorksheets("SIM Results").DrawingObjects.DeleteApplication.Run "ATPVBAEN.XLA!Histogram",
ActiveSheet.Range("$B$3:$B$10003") -
, ActiveSheet.Range("$F$2"), ActiveSheet.Range("$E$3:$E$25"), False, True _, False, False
'Changes the number format of the histogram to zero decimal placesRange("F3:F25 ").SelectSelection.NumberFormat = "0.0"Selection.NumberFormat = "0"Range("E4").Select
End IfSheets("Histogram").Select
End Sub
Sub Bandingo 'finds the most upstream set of activities that can be workedconcurrently in the time step
For i = I To nWN(i) = False 'initialize all activities to do NO work during this ti
Next ij = n + I 'keeps from looking for full band when no activities le
me step
ft
For i = 1 To n 'find first activity that can do work during the current timestep
If W(i) <> 0 ThenWN(i) = Truej = i + 1 'sets j to the following activityExit For 'leave loop once the first activity is found
End IfNext i
bandcomplete = False 'all activities for band have not beenfound
66
Do While (band_complete = False) And (j <= n)
activities in the bandIf W(j) <> 0 Then 'if n
For k = i To j - 1'if (DSMOj, k, 2) <> 0) 1Ad (W(k) <> 0)
4idepende*WerlE
band-complete = TrueExit For
End IfNext kIf bandcomplete = False Then
WN(j) = TrueElse
Exit DoEnd If
End Ifj= j +l
Loop
'begin to identify remaining
ext activity needs work
Thenit on an upstream aetivity needing
'If dependent on an upstream activity'needing work,
'and the upstream activity is sufficientlycompleted,
'then. . . This permits overlapping'tasks that are dependent on each other'The %work that must be accomplished
before'the dependent task can be completed is
contained'in the DSM array (4th dimension) and
comes from'the user defined overlap percentage.
'then the complete band has been found
'keep checking vs. activities in band'if complete band not yet found...
'...then add activity j to "work now"
'if complete band found, then finished banding
'see if next activity can be added to the band
End Sub
Function max(x, y)If x > y Then
max = xElse
max = yEnd If
End Function
'returns the greater of the two values
67
'Random Number Generator for a Triangular DistributionFunction SamplejTriPDF(a, b, c) 'returns a random sample from a TriPDF
Dim y As Single
y = Rnd 'choose a random number in [0,1)SampleTriPDF = a + Sqr(y * (b - a) * (c - a)) 'find appropriate point along
base of TriPDFIf SampleTriPDF > b Then 'if point is greater than MLV, then..
SampleTriPDF = c - Sqr((1 - y) * (c - a) * (c - b)) 'use a different formulaEnd If
End Function
Sensitivity Analysis Module
Public MatrixSensitivityOn As Boolean 'Permits having the CSPModel choose'the MatrixSensitivity worksheet'as the visible worksheet during'matrix sensitivity calculations
Public TaskNumber As Integer 'Permits keeping track of the task number during'sensitivity analysis
Public SensitivityOn As Boolean 'Permits turning off certain features of the
'CSPModel procedure to speed repeat calculationsSub RowSensitivityO
'Sensitivity Macro'Macro recorded 9/21/00 by Cory Welch'This macro reduces the probabilities in each row of the'Design Structure Matrix one at a time by a user defined percentage'reduction. The resulting mean and standard deviation associated withreducing each row by a user defined percentage are calculated and
'deposited to the Sensitivity worksheet. This sensitivity analysis serves'to measure the sensitivity of mean project duration as well as project'duration variance to the probabilities of rework associated with' each task.'
SensitivityOn = True 'Identifies that sensitivity analysis will be'be performed so the CSPModel can be simplified
'Prompts the user to tell how many runs to perform per simulationNumberRuns = InputBox(Prompt:="How many runs would you like per simulation?")
Dim i As Integer 'i will be a counter used to step through each matrix row
68
Dim p As Single 'the percentage reduction of row probabilitiesp = InputBox(Prompt:="By what percentage would you like to reduce the rowprobabilities?")Sheets("Sensitivity").Select
'Runs the simulation with baseline probabilities and then inputs'the resultant mean and std dev project duration in the sensitivity'worksheetCSPModelSheets("SIM Results").SelectRange("C 17:D 17 ").SelectSelection.CopySheets("Sensitivity").SelectRange("B5:C5 ").SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=False
'Deletes the previous values in the data table so the user'can view the progression of the data inputRange(Cells(8, 2), Cells(108, 3)).SelectSelection.ClearContents
Sheets("Sensitivity"). SelectCells(5, 4) = p / 100 'Inputs probability reduction into data table on Sensitivity worksheetCells(3, 2) = p / 100 'assigns p to the percent reduction factor in the Sensitivity worksheetCells(5, 5) = NumberRuns 'Pastes the number of runs per simulation into
'the sensitivity worksheet
For i = 1 To n 'n is the number of tasks in the matrixTaskNumber = i 'TaskNumber will be used in the CSPModel on the statusbar displaySheets("Probability").Select'Selects row i and replaces it with a percentage of its valueRange(Cells(i + 3, 4), Cells(i + 3, n + 4)).SelectSelection.CopySheets("Sensitivity").SelectRange("B 1 ").SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=FalseRange(Cells(2, 2), Cells(2, n + 1)).SelectSelection.CopySheets("Probability").SelectCells(i + 3, 4).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=False
'Runs the simulation with the new probabilities in row i
69
CSPModel
'Replaces row i probabilities with original probabilitiesSheets("Sensitivity").SelectRange(Cells(l, 2), Cells(l, n + 2)).SelectSelection.CopySheets("Probability").SelectCells(i + 3, 4).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=False
'Delivers the output of mean and standard deviation to the Sensitivity worksheetSheets("SIM Results").SelectRange("C 17:D 17").SelectSelection.CopySheets("Sensitivity").SelectCells(i + 7, 2).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=FalseNext i
Sheets("Sensitivity").Select 'Returns user to the Sensitivity worksheet.Range("E5 ").Select
SensitivityOn = False 'Returns this variable to False so that single'simulations will still perform all functions
End Sub
Sub ColumnSensitivity()
'Column Sensitivity Macro'Macro recorded 9/21/00 by Cory Welch'This macro reduces the probabilities in each column of the'Design Structure Matrix one at a time by a user defined percentage'reduction. The resulting mean and standard deviation associated with'reducing each column by a user defined percentage are calculated and'deposited to the Sensitivity worksheet. This sensitivity analysis serves'to measure the sensitivity of mean project duration as well as project'duration variance to the probabilities of each task.
SensitivityOn = True 'turns off certain features of the CSPModel to'speed the calculations (e.g., histogram, singlerun'data
TaskNumber = 0 'Re-zeroes the initial task number
'Prompts the user to tell how many runs to perform per simulation
70
NumberRuns = InputBox(Prompt:="How many runs would you like per simulation?")
Dim i As Integer 'i will be a counter used to step through each matrix rowDim p As Single 'the percentage reduction of column probabilitiesp = InputBox(Prompt:="By what percentage would you like to reduce the columnprobabilities?")Sheets("Sensitivity").Select
'Runs the simulation with baseline probabilities and then inputs'the resultant mean and std dev project duration in the sensitivity'worksheetCSPModelSheets("SIM Results").SelectRange("C 17:D 17").SelectSelection.CopySheets("Sensitivity"). SelectRange("G5:H5 ").SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xtNone, SkipBlanks:=
False, Transpose:=False
'Deletes the previous values in the data table so the user'can view the progression of the data inputRange(Cells(8, 7), Cells(108, 8)).SelectSelection.ClearContents
Cells(5, 9) = p / 100 'Inputs the percent reduction into the data tableCells(3, 2) = p / 100 'assigns p to the percent reduction factor in the Sensitivity worksheetCells(5, 10) = NumberRuns 'pastes the number of runs per simulation into
'the sensitivity worksheet
For i = 1 To n 'n is the number of tasks in the matrixTaskNumber = i 'tracks which task is having a sensitivity analysis performedSheets("Probability").Select'Selects column i and replaces it with a percentage of its valueRange(Cells(4, i + 3), Cells(n + 4, i + 3)).SelectSelection.CopySheets("Sensitivity").SelectRange("B I ").SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=TrueRange(Cells(2, 2), Cells(2, n + 2)).SelectSelection.CopySheets("Probability").SelectCells(4, i + 3).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
71
False, Transpose:=True'Runs the simulation with the new probabilities in row iCSPModel'Replaces column i probabilities with original probabilitiesSheets("Sensitivity").SelectRange(Cells(1, 2), Cells(1, n + 2)).SelectSelection.CopySheets("Probability").SelectCells(4, i + 3).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=True'Delivers the output of mean and standard deviation to the Sensitivity worksheetSheets("SIM Results").SelectRange("C 17:D 17").SelectSelection.CopySheets("Sensitivity").SelectCells(i + 7, 7).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=FalseNext i
Range("J5 ").Select
SensitivityOn = False 'Turns features of the CSPModel back onEnd Sub
Sub MatrixSensitivityo
Dim i, j As Integer 'counters for searching each cell of the'probability matrix
Dim OldProb, NewProb As Single 'temporary holders of the probability'value of each cell
Dim p As Single 'The user defined percentage by which each probability'will be reduced to measure its effect on project duration
Dim BaseMean As Single 'Baseline project meanDim Reduction As Single 'The percent change from the baseline
'project mean durationDim Newmean As Single 'The new mean after the probability of a given
'cell has been changed
SensitivityOn = True 'turns off certain features of the CSPModel to'speed the calculations (e.g., histogram, singlerun'data)
MatrixSensitivityOn = True 'Designates that the CSPModel should select'MatrixSensitivity as the visible worksheet'so the user can view each value being
72
'input into the matrix
'Prompts the user to tell how many runs to perform per simulationNumberRuns = InputBox(Prompt:="How many runs would you like per simulation?")
p = InputBox(Prompt:="By what percentage would you like to reduce the rework probabilities in the
probability matrix?")
'Simulates the model to calculate the baseline project durationCSPModelSheets("Sim Results").SelectBaseMean = Cells(17, 3).ValueSheets("MatrixSensitivity").SelectCells(7, 4) = BaseMean 'pastes the baseline mean into the worksheetCells(8, 4) = p / 100 'pastes the percent reduction into the worksheetCells(7, 9) = NumberRuns 'pastes the number of runs per simulation into sheet
'Deletes previous data from the Matrix Sensitivity worksheetRange(Cells(9, 2), Cells(120, 120)).SelectSelection. ClearContents
'Labels the Tasks, rows, and columns on the MatrixSensitivity worksheetSheets("DSM").SelectRange(Cells(2, 1), Cells(1 + n, 2)).Select 'task names and numbersSelection.CopySheets("MatrixSensitivity").SelectCells(10, 2).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=False 'pastes task names and numbersRange(Cells(10, 3), Cells(10 + n, 3)).SelectSelection.CopyCells(9, 4).SelectSelection.PasteSpecial Paste:=xlValues, Operation:=xlNone, SkipBlanks:=
False, Transpose:=True 'pastes task numbers across the top of'the matrix
Cells(I .4).Select
'Searches for non-zero cells in the probability matrix,'reduces each probability by a user defined percentage'and simulates the effect of reducing individual probability'values. Outputs the results into the MatrixSensitivity worksheet'as a percentage reduction of project duration from baseline.Sheets("Probability").SelectFor i = 1 To n
Forj = I Ton
73
If (Cells(i + 3, j + 3)) <> 0 And i <> j ThenOldProb = Cells(i + 3, j + 3).ValueNewProb = (1 - p /100) * OldProbCells(i + 3, j + 3) NewProbCSPModelSheets("Sim Results").SelectNewmean = Cells(17, 3)Reduction = (Newmean - BaseMean) / BaseMean '% reduction in project mean
durationSheets("MatrixSensitivity").SelectCells(i + 9, j + 3) = ReductionSheets("Probability").SelectCells(i + 3, j + 3) = OldProb
End IfNext j
Next i
Sheets("MatrixSensitivity").SelectCells(10, 4).Select
SensitivityOn = False 'turns features of the CSPModel back onMatrixSensitivityOn = False
End Sub
Initialization Module
'Module Created by Cory Welch, October 2000.
Sub Initializationo
'This subroutine initializes all 4 dimensions of the DSM array.
Determinen 'Subroutine to determine the number of tasks in the matrix
ReDim DSM(n, n, 4)
'Initializes dimension 1 of the DSM array, which include the'binary values from the DSM matrixFor i = 1 To n
Forj = I TonDSM(i, j, 1) = Worksheets("DSM").Cells(i + 1, j + 2)
Next jNext i
74
~~13
'Initializes dimension 2 of the DSM array, which includes the'rework probabilities from the DSMWorksheets("Probability").SelectFor i = I To n
Forj = I To nDSM(i, j, 2) = Cells(i + 3, j + 3)
Next jNext i
'Initializes dimension 3 of the DSM array, which include the'rework impacts from the DSMWorksheets("Impact").SelectFor i = I To n
Forj = 1 TonDSM(i, j, 3) = Cells(i + 3, j + 3)
Next jNext i
For i = I To nFor j = 1 To n
DSM(i, j, 4) = Worksheets("Overlap").Cells(i + 3, j + 3)Next j
Next iEnd Sub
Public Sub Determine no'This part is to assign an initial value of "n" so that the usesr'does not have to input it each time a simulation is run.
Worksheets("DSM").SelectCells(1, 1).Select
For i = 2 To 200If IsEmpty(Cells(i, 1)) = True Then
n=i-2Exit For
End IfNext iEnd Sub
Sub InitializeDSM1 ()'This subroutine only initializes dimension I of the DSM array.
75
Determine n 'Determines the number of tasks in the matrixReDim DSM(n, n, 4)
'Initializes dimension I of the DSM array, which includes the'binary values from the DSM matrixFor i = I To n
Forj = I TonDSM(i, j, 1) = Worksheets("DSM").Cells(i + 1, j + 2)
Next jNext i
End Sub
Auto - Open Module
Public n As IntegerSub AutoOpenO
Determinen 'Initializes the number of tasks in the DSM by looking'for how many tasks are listed on the DSM worksheet
Worksheets("DSM").SelectCells(1, 1).Select
End Sub
Sub ClearDSMO
'Macro2 Macro'Macro recorded 3/17/99 by Dr. Ali Yassine
n = InputBox(Prompt:="Please enter the number of DSM elements")
Worksheets("DSM").SelectRange(Cells(1, 1), Cells(200, 200)).SelectSelection.ClearContentsWith Selection.Interior
.ColorIndex = 2
.Pattern = xlPatternNoneEnd WithCells(1, 1) = "Name"
For i = I To nCells(i + 1, 2) = iCells(1, i + 2) = i
76
Cells(i + 1, i + 2).SelectWith Selection.Interior
.ColorIndex = I
.Pattern = xlSolidEnd WithCells(i + 1, i + 2).Value = iWith Selection.Font
.ColorIndex = 2End WithNext i
Worksheets("DSM").SelectCells(2, 1).Select
End Sub
77
Appendix 8: Model Instruction Manual
Design Structure Matrix Simulation (DSMSIM) Instructions
The model was created in Microsoft@ Excel 2000 and can be obtained from the author.
1. Getting Started
1. Open the file named "DSMSIM."
2. When prompted, Click on "Enable Macros."
3. When prompted, click on "No" to avoid referencing linked files, which do not exist.
2. Data Input
Noted: After any data has been input, it is generally a good idea to copy the data thatwere entered from this file into another file. The reason for this is that some of themacros manipulate that data and, if interrupted, may change the input values. Thus, aseparate file should be created that contains the input data for this program. Thisprogram may also be saved as any name you desire.
2.1 DSM Worksheet
1. To enter a new design process, select the DSM worksheet. Click on the macro buttonlabeled "Click here to input new DSM."
2. When requested, input the number of design tasks in the matrix and press Enter. Anew matrix will be automatically generated.
3. Enter the names of each design task beginning in cell A2. Be sure to enter a name foreach design task before proceeding to the next worksheet. If names of cells are leftblank, subsequent macros will assume fewer design tasks.
4. Enter a I in each cell of the matrix where an interdependence between tasks exist.Details of how to fill in this matrix are contained in Addendum 1.
5. Upon completing the DSM matrix, select the Probability worksheet.
2.2 Probability Worksheet
Note: Ensure the DSM matrix has been generated AND filled in prior togenerating the probability matrix. Otherwise, the appropriate cells will not behighlighted in the probability matrix.
78
1. Select the Probability worksheet.
2. To create a matrix of rework probabilities, click on the macro button labeled "Clickhere to input new probability matrix." A new matrix will be automatically generated thatwill contain the names of the tasks entered in the DSM worksheet. Additionally, cellswill be automatically highlighted that correspond with each cell in the DSM worksheetwhere a I was entered.
Note: Once the probability worksheet has been generated, the impact and overlapworksheets may also be generated in parallel. It is not necessary to input the values intothe probability matrix before generating the other matrices. It is necessary, however, toat least generate the probability matrix by clicking the macro button before generatingthe impact and overlap matrices.
3. In each of the highlighted cells, input the estimated rework probabilities consistentwith the directions given in Addendum 1. Probabilities should be entered as fractions(e.g., 50% would be entered as 0.5).
2.3 Impact Worksheet
1. Select the Impact worksheet.
2. To create a new Impact matrix, click on the macro button labeled "Click here to enternew impact matrix." An impact matrix will be automatically generated with theappropriate cells highlighted.
3. In each of the highlighted cells, input the estimated impact of rework consistent withthe directions given in Addendum 1. Probabilities should be entered as fractions (e.g.,50% would be entered as 0.5).
2.4 Overlap Worksheet
1. Select the Overlap worksheet.
2. To create a new Overlap matrix, click on the button labeled "Click here to input newoverlap matrix."
3. Enter the overlap percentages consistent with the directions given in Addendum 1.Overlap percentages should be entered as fractions (e.g., 50% would be entered as 0.5).
79
2.5 SIM Input Worksheet
1. Select the SIM Input worksheet.
2. Click on the macro button labeled "Click here to delete old activities and input a newse of activities and durations." The list of activities from the DSM worksheet will beautomatically input into the worksheet.
3. Estimate the duration of each design activity. In the appropriate column, estimate thebest case value, the most likely value, and the worst case value for estimated design tasktime.
4. In the learning curve column, insert an estimation of the percent reduction in reworkof that task with each iteration of the task. For example, if it is estimated that theprobability of reworking this task drops by 40% each time the task is iterated, enter 0.4 inthe cell corresponding with that design task.
Note: While there may be several different interactions that could cause reworkof a design task, only one learning curve value is specified per task. If a particularinteraction results in rework of that design task, only the probability associated with thatinteraction will be reduced by the learning curve.
Additionally, tasks that do not have any rework probabilities to the right of thediagonal need not have a learning curve value specified (leave it blank, or enter zero).The reason for this is that only the probabilities to the right of the diagonal are reducedwith each iteration. The probabilities to the left of the diagonal (feedforwardprobabilities) remain unchanged.
3. Analysis
3.1 SIM Input Worksheet
1. On the SIM Input Worksheet, it is necessary to input a time step for the simulationinto cell B3. A reasonable value for the time step might be 25% of the shortest task timein the process.
2. Additionally, enter either a 0 or a I in cell B5. If a I is entered, the simulation willrecord example data from 1 simulation to generate a Gantt chart of the process. This willslow the calculation somewhat, so this option may be turned off by entering a 0 in cellB5.
3.2 Histogram Worksheet
1. To begin simulating the project, click on the macro button labeled "Click here tosimulate the project."
80
2. When prompted, enter the number of times project runs you would like per simulation.The accuracy achieved with larger runs per simulation is achieved at the cost of longercalculation time.
The blue bars on the histogram correspond with the left y-axis and represent thepercentage of time the project finished in the time shown on the x-axis. The line showsthe cumulative distribution function and corresponds with the right y-axix. This linerepresents the probability that the project will be completed in the time shown on the x-axis.
3.3 Gantt Chart
This chart provides a Gantt chart of the process taking into consideration the reworkthat could occur during the process. The source data for this chart will have to bechanged if the number of design tasks is changed. To do this, right click on the chart andselect "source data." Select "Data Range." Click on the red arrow to the right of the datarange box. This arrow will direct you to the "single run data" worksheet, which will havea range highlighted. If you wish to change this range, highlight the range from cell DA6until you have highlighted all design tasks. Highlight down far enough to capture alldata.
3.4 SIM Results Worksheet
This worksheet stores the output of the project simulation and the data used togenerate the histogram. This worksheet is also used to calculate the project mean andstandard deviation.
Note: In order for the Histogram to work properly, Excel must have the AnalysisToolpack installed. Otherwise, the histogram will not be generated and an error will bedisplayed.
3.5 Sensitivity Worksheet
Note: Sensitivity Analyses can take a long time to run. If several thousand runsare performed per simulation, analyses could take up to several hours.
Note: If Sensitivity Analyses are interrupted in the middle of the analysis, it willbe necessary to re-copy the values of the probability matrix. This is necessary since theprogram changes the values of the probabilities, and if the program is interrupted in themiddle, it will not be able to re-enter the correct values.
This worksheet permits performing sensitivity analyses on the rework probabilities todetermine the tasks whose rework probabilities most significantly impact the projectduration and variability.
81
The Row Sensitivity Analysis reduces the entire row of rework probabilities for adesign task by a user defined percentage. It then simulates the effect of these reducedrework probabilities and outputs the results. This is repeated for each design task.
The Column Sensitivity Analysis reduces the entire column of rework probabilitiesfor a design task by a user defined percentage. It then simulates the effect of thesereduced rework probabilities and outputs the results. This is repeated for each designtask.
1. To perform a sensitivity analysis on the row of probabilities associated with eachdesign task, click on the macro button labeled "Row Sensitivity."
2. When prompted, enter the number of runs you would like per simulation.
3. When prompted, enter the percentage by which you would like to DECREASE therow of probabilities for each task. If you would like to decrease the probabilities by 60%,enter 60. If you would like to estimate the impact of an INCREASE in the probabilities,enter a negative number for the percentage decrease.
4. To perform a sensitivity analysis on the column of probabilities associated with eachdesign task, click on the macro button labeled "Column Sensitivity." Follow steps 2 and 3for data entry.
3.6 Matrix Sensitivity Worksheet
Note: Sensitivity Analyses can take a long time to run. If several thousand runsare performed per simulation, analyses could take up to several hours.
Note: If Sensitivity Analyses are interrupted in the middle of the analysis, it willbe necessary to re-copy the values of the probability matrix. This is necessary since theprogram changes the values of the probabilities, and if the program is interrupted in themiddle, it will not be able to re-enter the correct values.
This worksheet permits performing sensitivity analysis on each individual reworkprobability in the probability matrix. The program reduces each individual value by auser specified percentage, simulates the project with that reduced value, and enters thepercentage by which the project duration changed into the cell corresponding with therework probability value that was reduced.
1. Click on the macro button labeled "Click here to run Matrix Sensitivity."
2. When prompted, enter the number of runs you would like per simulation.
3. When prompted, enter the percentage by which you would like to DECREASE therow of probabilities for each task. If you would like to decrease the probabilities by 60%,
82
enter 60. If you would like to estimate the impact of an INCREASE in the probabilities,enter a negative number for the percentage decrease.
3.7 Single Run Data Worksheet
This worksheet contains the data that is collected to create a Gantt chart of a sample run.There is no need to adjust this worksheet except as noted in section 3.3.
83
Addendum 1 of Appendix 8
Instructions for Filling in the Design Structure Matrices
Identification of Interdependence
1. Identification of task interdependence is relatively straightforward. The existence ofan interdependence between tasks can be thought of in a couple of ways. One way is toask yourself, "For Task (fill in the blank), which tasks provide information to thistask?" As shown in the matrix below, Task B requires information from Tasks A, E andG (as indicated by the presence of a 1 in the matrix cell). Another way to think of thesame interdependence is to ask yourself, "Which tasks, when completed or reworked,might cause subsequent rework of Task (fill in the blank)?". Again, for Task B, wesee that, for example, completion or rework of Tasks A, E and G might result in Task Bhaving to be reworked as well.
< MO 0w LLCJI
H -H-H H HF-F-H H-12 3456_7 8
l's in row indicate thatTask B receivesinformation from tasksA, E, and G.
l's in column indicatethat Task E providesinformation to tasksB and H.
Caveat: Keep the information flows as direct as possible. For example, let's assumewe're working on filling out the row for Task C. We estimate that re-working Task Amight cause Task C to be re-worked, but only if re-working Task A first caused Task B tobe re-worked. In this case, we would want to indicate that Task A provides informationto Task B (hence the 1 in the box) and Task B provides information to Task C (another 1in that box). However, Task A does not directly provide information to Task C, and thusis left empty. The highlighted boxes below represent this situation.
84
Task A 1 = 1 1 1Task B 2 1 1 1Task C 3 1
Tas E 5 1 1Task F 6 .-Task G 171 1 11Task H 1 81 1 1 1 1 : 4
<mo M0aW LLbl
- - - ~- - ~I
4 5 6 7 8-Ili L I I I
1 1
Task D 4Task E 5 T 1Task F 6Task G 7 1 1 1Task H 8 1 1 1 1
Probability Matrix
After identifying the presence of a task interdependence, the next step in filling out thedesign structure matrix is to identify the likelihood that the completion or revision adesign task will cause re-work in another design task. As an example consider thehighlighted number below. The highlighted 0.1 indicates that there is an estimated 10%chance that completion of Task C will cause at least some re-work of Task A (the extentof the re-work is covered in Step 3). Likewise, upon completion of Task E, there is anestimated 20% chance that at least some re-work of Task B will be required. In general,numbers above the diagonal are estimates of the likelihood that completion of the task inthe same column as that number will cause re-work of the task in the same row as thenumber. These numbers represent the odds of feedback from one task to another, asillustrated below.
< cn I
CD -n KnC C nC
Task ATask BTask CTask DTasck EF
12345
.4'
0.5
3 41 5
05.20.1
I o9q
6
0.9
TsG 17_0.4 .QJQ.Task F 6Task G 7 0.4 0.2j.2Task H __ 0.3 1 0.61 E.7 7 .1!
Feedback
The numbers below the diagonal have similar meaning, but instead represent a situationoffeedforward. The presence of a number below the diagonal indicates that there isforward information flow from one task to another. Thus, if the upstream task is re-worked, there is a likelihood that the downstream task might also have to be re-worked.As an example, consider the highlighted numbers below. The highlighted 0.1 indicates
85
that if Task B has to be re-worked, there is an estimated 10% chance that Task C wouldhave to be subsequently re-worked to some extent. Likewise, if Task A had to be re-worked, there is an estimated 50% chance that Task B would have to be re-worked tosome extent. In general, numbers below the diagonal indicate the likelihood that re-workof the task in the same column as the number will cause subsequent re-work of the task inthe same row as the number.
Cu M CO Mu W C n O C
Task BTask CTask DTask ETask FTask GTask H
-1
4
617810.3
1 2 3 4 5 6 7 810.1 1 1
0.9
0.4
0.2
10.5
10.21 10.2
0.9
0.61 10.71 10.5
When filling out the DSM, it is easiest to start with the first task and proceedhorizontally. For instance, for Task B, ask yourself "What is the likelihood thatcompletion or re-work of Task "X" will cause me to have to re-work Task B?" Asseen belo, proceeding horizontally with Task B, I estimate that there is a 50% chance thatre-working Task A would cause some re-work of Task B. I then estimate that there is a20% chance that completing Task E would cause some re-work in Task B. And finally, Iconsider that there is a 90% chance that completing Task G will cause me to re-worksome of Task B. Zero probability entries are left blank.
i< HI O _ _
C O MU M C aU C CO COH _ H HIF
Task ATask BTask CTask D
Task FTask GTask H
34
67810.3
11 2 3 4 5 6 7 80.11 1i 1 1 I1
0.1
0.4 10.21
0A2
10.2
0.9
0.61 10.71 10.5
86
FeedforwardZ
, - , ,, , ,
Tak 11 0.91 0
1
Impact Matrix
So far, only the probability of one task causing re-work of another task has beenaddressed. However, we must also estimate the percentage of the task that must be re-worked. In some instances, a task may have to be nearly completely redone, whereas inother instances, only minor additional work will be required. For each of theprobabilities of one task causing re-work in another task, we must also estimate thepercentage of that task that would have to be re-worked. As an example, consider thematrix below, which is the impact counterpart to the above probability matrix. Each ofthe highlighted boxes will need to be filled in with an estimate of how much of each taskis likely to need to be re-worked, since each of the highlighted boxes had a nonzeroprobability of re-work in the probability matrix.
TO CO O sk U O O
u)2 41 5 61 7 8Task A.5PTask B 2Task C 3Task D 4Task E 5Task F 61Task G 7 1Task H 8
We know from the probability matrix that we estimated that completion of Task C has a-10% chance of causing Task A to be re-worked. Now, we must ask ourselves"Assuming Task C did cause Task A to be re-worked, for a typical case, whatpercentage of Task A might have to be re-worked" A typical case means the typemost often encountered, not necessarily the worst case. If the answer is "50% of Task Amight have to be re-done", put a 0.5 in the box, as shown above. Again, it is probablyeasiest to start with the first task and fill in the row for each task before proceeding to thenext task.
As another example consider the completed impact matrix below. The highlighted boxesmean that, assuming Task B was required to be re-worked and that this in fact resulted inre-work of Task E, it is estimated that 40% of Task E would have to be re-worked.Likewise, assuming Task D was required to be re-worked and that this resulted in re-work of Task E, it is estimated this would cause about 40% of Task E to be re-worked.
87
-Y- -X X - 14 - -e I -U) U) U) U) U) U)cdC o C a C a C C 5 Co UEH F-- H H H- H
1234
0.1
11 2 3 410.5 u 1 1
0.3
Task E .-Task F 161 1 1
7810.8~
I 08I Iloal
0.61 10.71
Overlap Matrix
The last step in filling out the design structure matrix is to identify the percentage of
overlap that can occur between tasks. While tasks may require information from another
task, it may be that the dependent task can start before the task on which is dependent is
100% complete. This matrix identifies the percentage of the preceding task (or task that
is providing information) that must be complete before the dependent task can start. The
matrix below shows and example of the input for this matrix. The highlighted 1.0 means
that 100% of Task A must be performed before Task C can be started. Similarly, the
highlighted 0.5 means that only 50% of Task B needs to be completed before Task C can
be started.
-,A x1 _X -V- .U) Un U n ( ) I UC) U)
Task ATask BTask CTask DTask ETask FTask G
123
45167
i1 2 3 4 5 6 71 8
ZJiiI~1~I~ liii
0.9
1.0
I U.I I
1.0 I0.1Task H 181.0 1.0 1.0 0.5
I
88
Task ATask BTask CTask D
5 61 7 8
0.8t1 0.9
Task GTask H
1 027 108 1031
10.1