Benefits of Higher Quality Level of the Software Process
8/7/2019 Benefits of Higher Quality Level of the Software Process
Software quality assurance professionals
believe that a higher quality level of software
development process yields higher quality
performance, and they seek quantitative
evidence based on empirical findings. The
few available journal and conference papers
that present quantitative findings use a
methodology based on a comparison of
before-after observations in the same
organization. A limitation of this before-after
methodology is the long observation period,
during which intervening factors, such as
changes in products and in the organization,
may substantially affect the results. The
authors' study employed a methodology
based on a comparison of observations in
two organizations simultaneously (Alpha and
Beta). Six quality performance metrics were
employed: 1) error density, 2) productivity, 3)
percentage of rework, 4) time required for an
error correction, 5) percentage of recurrent repairs, and 6) error detection effectiveness.
Key words: CMM level effects, CMM level
appraisal, software development performance metrics
INTRODUCTION
Software quality assurance (SQA) professionals believe that a higher quality level of the software development process yields higher quality performance. SQA professionals seek evidence that investments in SQA systems achieve improved quality performance of the software development process. Journal and conference papers provide such evidence by presenting studies that show SQA investments result in improved software development processes. Most of these studies are based on a comparison of before-after observations in the same organization. Only some of these papers quantify the performance improvement achieved by SQA system investments, presenting percentages of productivity improvement, percentages of reduction in defect density, and so on.
Of special interest are papers that quantify performance
improvement and also measure software process quality level
advancement. The Capability Maturity Model (CMM) and CMM Integration (CMMI) levels are the tools for measuring software process quality level common to all of these papers. According to this approach, the improvement of the
quality level of the software process is measured by attain-
ing a higher CMM (or CMMI) level in the organization. For
example, Jung and Goldenson (2003) found that software main-
tenance projects from higher CMM-level organizations typically
report fewer schedule deviations than those from organizations
SOFTWARE QUALITY MANAGEMENT

Benefits of a Higher Quality Level of the Software Process: Two Organizations Compared

Daniel Galin, Ruppin Academic Center
Motti Avrahami, Verifone
assessed at lower CMM levels. For U.S. maintenance
projects the results are:
Mean deviation of 0.464 months for CMM level 1 organizations
Mean deviation of 0.086 months for CMM level 2 organizations
Mean deviation of 0.069 months for CMM level 3 organizations
A variety of metrics are applied to measure the
resulting performance improvement of the software
development process, relating mainly to quality, pro-
ductivity, and schedule keeping. Results of this nature
are presented by McGarry et al. 1999; Diaz and
King 2002; Pitterman 2000; Blair 2001; Keeni 2000;
Franke 1999; Goldenson and Gibson 2003; and Isaac, Rajendran, and Anantharaman 2004a; 2004b.
Galin and Avrahami (2005; 2006) performed an
analysis of past studies (meta-analysis) based on results
presented in 19 published quantitative papers. Their
results, which are statistically significant, show an aver-
age performance improvement according to six metrics
that range from 38 percent to 63 percent for one CMM
level advancement. Another finding of this study is an
average return on investment of 360 percent for invest-
ments in one CMM level advancement. They found similar results for CMMI level advancement, but the publications that present findings for CMMI studies do not provide statistically significant results.
Critics may claim that the picture portrayed by
the published papers is biased by the tendency not
to publish negative results. Even if one assumes some
bias, the multitude of published results proves that
a significant contribution to performance is derived
from SQA improvement investments, even if its real
effect is somewhat smaller.
The papers mentioned in Galin and Avrahami's study,
which quantify performance improvement and rank software
process quality level improvement, were formulated accord-
ing to the before-after methodology. An important limitation
of this before-after methodology is the long period of obser-
vations during which intervening factors, such as changes
in products, the organization, and interfacing requirements,
may substantially affect the results. In addition, the gradual
changes, typical to implementation of software process
improvements, cause changing performance achievements
during the observation period that may affect the study
results and lead to inaccurate conclusions.
An alternative study methodology that minimizes these
undesired effects is a methodology based on comparing the
performance of several organizations observed at the same
period (comparison of organizations methodology). The
observation period, when applying this methodology, is much shorter, and the observed organization is not expected
to undergo a change process during the observation period.
As a result, the software process is relatively uniform during
the observation period and the effects of uncontrolled soft-
ware development environment changes are diminished.
It is important to find out whether the results obtained
by research applying the comparison of organizations
methodology support findings of research that applied the
before-after methodology of empirical studies. Papers that
report findings of studies that use the comparison of organizations methodology are rare. One example is Herbsleb et al. (1994), which presents comparative case study results
for two projects that have similar characteristics performed
at the same period by Texas Instruments. One of the
projects was performed by applying old software develop-
ment methodology, while the other used new (improved)
software development methodology. The authors report a
reduction of the cost per software line of code by 65 per-
cent. Another result was a substantial decrease in the defect
density, from 6.9 to 2.0 defects per 1,000 lines of code. In
addition, the average costs to fix a defect were reduced by
71 percent. The improved software development process was the product of intensive software process improvement (SPI) activities, and was characterized by an
entirely different distribution of resources invested during
the software development process. However, Herbsleb et
al. (1994) provide no comparative details about the quality
level of the software process, that is, by appraisal of the
CMM level for the two projects.
The authors' study applies the comparison of orga-
nizations methodology, which is based on empirical
data of two software developing organizations (develop-
ers) with similar characteristics, collected in the same
period. The empirical data that became available to the
authors enabled them to process comparative results for
each of the two developers, which include: 1) quantita-
tive performance results according to several software
process performance metrics; and 2) a CMM appraisal
of each developer's software process quality level.
In addition, the available data enable them to provide an
explanation for the performance differences based on the
differences in resource investment preferences during
the software development phases.
28 SQP VOL. 9, NO. 4/ 2007, ASQ
THE CASE STUDY ORGANIZATIONS
The authors' case study is based on records and observations of two software development organizations. The first organization, Alpha, is a startup firm that implements only
basic software quality assurance practices. The second
organization, Beta, is the software development depart-
ment in an established electronics firm that performs a
wide range of software quality assurance practices that are
employed throughout the software development process.
Both Alpha and Beta develop C++ real-time embedded
software in the same development environment: Alpha's software product serves the telecommunication security industry sector, while Beta's software product serves the aviation security industry sector. Both organizations employ the waterfall methodology; however, during the study Alpha's implementation was crippled because the resources invested in the analysis and design stage were negligible. While the Alpha team adopted no soft-
ware development standard, Betas software development
department was certified according to the ISO 9000-3
standard (ISO 1997) and according to the aviation industry software development standard DO-178B, Software Considerations in Airborne Systems and Equipment Certification (RTCA 1997). The Federal Aviation Administration (FAA) accepts use of the standard as a means of certifying software in avionics. Neither software development organization was CMM certified.
During the study period Beta developed one software
product, while Alpha developed two versions of the
same software product. The software process and the
SQA system of Beta were stable during the entire study
period. The SQA system of Alpha, however, experi-
enced some improvements during the study period that
became effective for the development of the second ver-
sion of its software product. The first and second parts of the study period, dedicated to the development of the two versions, lasted six and eight months, respectively.
A preliminary stage of the analysis was done to test the
significance of the results of the improvements performed
in Alpha during the second part of the study period.
The Research Hypotheses
The research hypotheses are:
H1: Alphas software process performance met-
ric for its second product will be similar to that
of its first product.
H2: Beta, as the developer of a higher quality
level of its software process, will achieve soft-
ware process performance higher than Alpha
according to all performance metrics.
H3: The results for the differences in performance achievements of the comparison of
organizations methodology will support the
results of studies performed according to the
before-after methodology.
METHODOLOGY
The authors' comparative case study research was
planned for both a preliminary stage and a two-stage
comparison:
Preliminary stage: Comparison of software process
performance for Alphas first and second products
(first part of the study period vs. the second part).
Stage one: Comparison of software process
performance of Alpha and Beta.
Stage two: Comparison of the first stage findings
(of comparison of organizations methodology)
with the results of earlier research performed
according to the before-after methodology.
The Empirical Data
The study was based on original records of software
correction processes that the two developers made
available to the study team. The records cover a period
of about one year for each developer. The following six
software process performance metrics (performance
metrics) were calculated:
1. Error density (errors per 1,000 lines of code)
2. Productivity (lines of new code per working day)
3. Percentage of rework
4. Time required for an error correction (days)
5. Percentage of recurrent repairs
6. Error detection effectiveness
The detailed records enabled the authors to calculate
these performance metrics for each developer. The met-
rics were calculated on a monthly basis for the first five
performance metrics. For the sixth metric, only a global
metric calculated for the entire period could be processed
for each developer.
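The first five metrics are straightforward ratios that can be computed month by month from correction records. The sketch below only illustrates the definitions above; the function, its parameters, and the record layout are illustrative assumptions of ours, not the developers' actual data format:

```python
# Illustrative sketch: computing five of the study's monthly performance
# metrics for one developer-month. All names here are assumptions.

def monthly_metrics(new_loc, work_days, errors, rework_days, repairs):
    """Compute monthly performance metrics for one developer.

    new_loc     -- new lines of code written this month
    work_days   -- working days invested this month
    errors      -- number of errors recorded this month
    rework_days -- working days spent on corrections this month
    repairs     -- correction records: dicts with 'days_to_fix' (float)
                   and 'recurrent' (bool)
    """
    return {
        # 1. Error density: errors per 1,000 lines of new code
        "error_density": 1000.0 * errors / new_loc,
        # 2. Productivity: new lines of code per working day
        "productivity": new_loc / work_days,
        # 3. Percentage of rework: share of effort spent correcting
        "pct_rework": 100.0 * rework_days / work_days,
        # 4. Mean time required for an error correction (days)
        "mean_fix_time": sum(r["days_to_fix"] for r in repairs) / len(repairs),
        # 5. Percentage of recurrent repairs
        "pct_recurrent": 100.0 * sum(r["recurrent"] for r in repairs) / len(repairs),
    }
```

The sixth metric, error detection effectiveness, is a global ratio over the whole study period rather than a monthly one, so it is omitted here.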
Table 1 presents a comparison of the organization
characteristics and a summary of the development activi-
ties of Alpha and Beta.
The CMM Appraisal
Since the studied organizations were not CMM certified, the authors used an official SEI publication, the Maturity Questionnaire for CMM-based appraisal of internal process improvement (CBA IPI) (Zubrow, Hayes, and Goldenson 1994), to prepare an appraisal of Alpha's and Beta's software process quality levels. The
appraisal yielded the following: CMM level 1 for Alpha
and CMM level 3 for Beta. A summary of the appraisal
results for Alpha and Beta is presented in Table 2.
The Statistical Analysis
For five of the six performance metrics, the calculated monthly performance metrics for the two organizations were compared and statistically tested by applying the t-test procedure. For the sixth performance metric, error detection
effectiveness, only one global detection effectiveness metric
(calculated for the entire study period) was available, 90.3
percent and 99.7 percent for Alpha and Beta, respectively.
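The paper does not state which t-test variant was applied, so the following is only a generic sketch of the procedure: a two-sample Student's t statistic with pooled variance, computed from the two sets of monthly metric values.

```python
import math

def pooled_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance.

    Compare |t| against the critical value for
    len(sample_a) + len(sample_b) - 2 degrees of freedom
    at the study's significance level of 0.05.
    """
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    # Pool the variances, weighting by degrees of freedom
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
```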
THE FINDINGS
The Preliminary Stage
A comparison of Alpha's performance metrics for the
two parts of the study period is shown in Table 3.
Alphas performance results for the second study
period show some improvements (compared with
the results of the first study period) for all five per-
formance metrics that were calculated on a monthly
basis. However, the performance achievements of the second study period were found statistically insignifi-
cant for four out of five performance metrics. Only for
one performance metric, namely the percentage of
recurrent repairs, did the results show a significant
improvement.
Accordingly, H1 was supported for four out of five
performance metrics. H1 was rejected only for the
metric of the percentage of recurrent repairs.
Stage 1: The Organization Comparison - Alpha vs. Beta
As Beta's software process quality level was appraised
to be much higher than that of Alpha, according to H2,
the quality performance achievements of Beta were
expected to be significantly higher than Alpha's. The comparison of Alpha's and Beta's quality performance results is presented in Table 4.
The results of the statistical analysis show that for
three out of the six performance metrics the performance
of Beta is significantly better than that of Alpha. It should
be noted that for the percentage of recurrent repairs, where Alpha demonstrated a significant performance improvement during the second part of the study period, Beta's performance was significantly better than Alpha's
Subject of comparison                                    | Alpha                           | Beta
a) The organization characteristics
Type of software product                                 | Real-time embedded C++ software | Real-time embedded C++ software
Industry sector                                          | Telecommunication security      | Aviation electronics
Certification according to software development
quality standards                                        | None                            | 1. ISO 9001; 2. DO-178B
CMM certification                                        | None                            | None
CMM level appraisal                                      | CMM level 1                     | CMM level 3
b) Summary of development activities
Period of data collection                                | Jan. 2002 to Feb. 2003          | Aug. 2001 to July 2002
Team size                                                | 14                              | 12
Man-days invested                                        | 2,824                           | 2,315
New lines of code                                        | 56K                             | 62K
Number of errors identified during development process   | 1,032                           | 331
Number of errors identified after delivery to customers  | 111                             | 1

table 1  Comparison of the organization characteristics and summary of development activities of Alpha and Beta
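The error detection effectiveness figures reported later in the study (90.3 percent for Alpha, 99.7 percent for Beta) follow directly from the error counts in Table 1; the function name below is ours, for illustration only:

```python
def detection_effectiveness(errors_in_development, errors_post_delivery):
    """Percentage of all errors caught before delivery to customers."""
    total = errors_in_development + errors_post_delivery
    return 100.0 * errors_in_development / total

# Error counts taken from Table 1
alpha_effectiveness = detection_effectiveness(1032, 111)  # 90.3 when rounded
beta_effectiveness = detection_effectiveness(331, 1)      # 99.7 when rounded
```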
for each of the two parts of the study period. For the productivity metric, Beta's results were 35 percent better than those of Alpha, but no statistical significance was found. Somewhat surprising results were found for the time required for an error correction, where the performance of Alpha was 14 percent better than Beta's, but the difference was found to be statistically insignificant. The explanation for this finding probably lies in the much lower quality of Alpha's software product. The lower quality of Alpha could be demonstrated by the much higher percentages of recurrent repairs, where Alpha's results were found significantly higher than Beta's: fivefold and threefold higher for the first and second parts of Alpha's study period, respectively. Alpha's lower quality is especially evident when referring to the error detection effectiveness metric. Although only global performance results for the entire study period are available for this metric, a clear inferiority of Alpha is revealed: the error detection effectiveness of Alpha is only 90.3 percent compared with Beta's error detection effectiveness of 99.7 percent. In other words, 9.7 percent of Alpha's errors were discovered by its customers compared with only 0.3 percent of Beta's errors.
To sum up stage 1, H2 was supported by statistically significant results for the following performance metrics: 1) error density, 2) percentage of rework, and 3) percentage of recurrent repairs. For an additional metric, error detection effectiveness, though no statistical testing is possible, the global results that clearly indicate performance superiority of Beta support hypothesis H2. For two metrics H2 was not supported statistically. As for the productivity metric, the results show substantially better performance for Beta. As for the time required for an error correction, Alpha's
No. | Key process area                          | Alpha grades | Beta grades
 1. | Requirements management                   | 1.67         | 10
 2. | Software project planning                 | 4.28         | 10
 3. | Software project tracking and oversight   | 5.74         | 8.57
 4. | Software subcontract management           | 6.25         | 10
 5. | Software quality assurance (SQA)          | 3.75         | 10
 6. | Software configuration management (SCM)   | 5            | 8.75
    | Level 2 average                           | 4.45         | 9.55
 7. | Organization process focus                | 1.42         | 10
 8. | Organization process definition           | 0            | 8.33
 9. | Training program                          | 4.28         | 7.14
10. | Integrated software management            | 0            | 10
11. | Software product engineering              | 1.67         | 10
12. | Intergroup coordination                   | 4.28         | 8.57
13. | Peer reviews                              | 0            | 8.33
    | Level 3 average                           | 1.94         | 9.01
14. | Quantitative process management           | 0            | 0
15. | Software quality management               | 4.28         | 8.57
    | Level 4 average                           | 2.14         | 4.29
16. | Defect prevention                         | 0            | 0
17. | Technology change management              | 2.85         | 5.71
18. | Process change management                 | 1.42         | 4.28
    | Level 5 average                           | 1.42         | 3.33

table 2  Summary of the maturity questionnaire detailed appraisal results for Alpha and Beta
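The publication does not state how the level averages in Table 2 were formed. A plain unweighted mean of the key process area grades reproduces the published Level 2 averages (4.45 and 9.55); for the other levels the published averages differ slightly from this rule, so the exact averaging scheme remains an assumption. A quick check:

```python
# Level-2 key process area grades as listed in Table 2: (Alpha, Beta)
LEVEL2_GRADES = {
    "Requirements management":                 (1.67, 10),
    "Software project planning":               (4.28, 10),
    "Software project tracking and oversight": (5.74, 8.57),
    "Software subcontract management":         (6.25, 10),
    "Software quality assurance (SQA)":        (3.75, 10),
    "Software configuration management (SCM)": (5, 8.75),
}

def level_average(grades, developer):
    """Unweighted mean of the KPA grades; developer 0 = Alpha, 1 = Beta."""
    values = [pair[developer] for pair in grades.values()]
    return round(sum(values) / len(values), 2)
```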
SQA metrics                                       | Alpha, first part of the study period (6 months): Mean (s.d.) | Alpha, second part of the study period (8 months): Mean (s.d.) | t (alpha = 0.05) | Statistical significance of differences
1. Error density (errors per 1,000 lines of code) | 17.9 (3.8)  | 15.8 (4.2)  | t=0.964  | Not significant
2. Productivity (lines of code per working day)   | 16.8 (11.8) | 21.9 (18.4) | t=-0.585 | Not significant
3. Percentage of rework                           | 35.4 (12.7) | 28.5 (19.8) | t=0.746  | Not significant
4. Time required for an error correction (days)   | 35.9 (33.3) | 16.9 (8.4)  | t=1.570  | Not significant
5. Percentage of recurrent repairs                | 26.7 (11.8) | 13.8 (8.7)  | t=2.200  | Significant
6. Error detection effectiveness (global performance metric for the entire study period) | 90.3% | 99.7% | Statistical testing is not possible

table 3  Alpha's performance comparison for the two parts of the study period
results are a little better than Beta's, with no statistical significance. To sum up, as the results for four of the performance metrics support H2 and no result rejects H2, one may conclude that the results support H2. The authors would like to note that their results are typical case study results, where a hypothesis that is clearly supported is accompanied by some inconclusive results.
Stage 2: Comparison of Methodologies - The Comparison of Organizations Methodology vs. the Before-After Methodology
In this stage the authors compared the results of the current case study, which was performed according to the comparison of organizations methodology, with results of the commonly used before-after methodology. They found that, for this purpose, the work of Galin and Avrahami (2005; 2006), which is based on a combined analysis of 19 past studies, is the suitable representative of results obtained by applying the before-after methodology.
The comparison is applicable to four software
process performance metrics that are common to
the current case study and the findings of the combined past studies analysis carried out by Galin and Avrahami. These common performance metrics are:
Error density (errors per 1,000 lines of code)
Productivity (lines of code per working day)
Percentage of rework
Error detection effectiveness
As Alpha's and Beta's SQA systems were appraised as similar to CMM levels 1 and 3, respectively, their quality performance gap is compared with Galin and Avrahami's mean quality performance improvement for a CMM level 1 organization advancing to CMM level 3. The comparison for the four performance metrics is shown in Table 5.
The results of the comparison support hypothesis
H3 regarding all four performance metrics. For two of
the performance metrics (error density and percent-
age of rework) this support is based on statistically
significant results for the current test case. For the
productivity metric the support is based on a substantial productivity improvement, which is not statistically significant. The comparison results for the four metrics reveal similarity in direction, where size differences in achievement are expected when comparing multiproject mean results with case study results.
To sum up, the results of the current test case
performed according to the comparison of organiza-
tions methodology conform to the published results
obtained by using the before-after methodology.
DISCUSSION
The reason for the substantial differences in software process performance achievement between Alpha and Beta is the main subject of this discussion. The two developers claimed to use the same methodology. The authors assume that the substantial differences in software process performance result from actual implementation differences between the developers. To investigate the causes of the quality performance gap, the authors first examine the available data relating to the differences between Alpha's and Beta's distributions
SQA metrics                                       | Alpha (14 months): Mean (s.d.) | Beta (12 months): Mean (s.d.) | t (alpha = 0.05) | Statistical significance of differences
1. Error density (errors per 1,000 lines of code) | 16.8 (4.0)  | 5.0 (3.0)   | 8.225  | Significant
2. Productivity (lines of code per working day)   | 19.7 (15.5) | 26.7 (16.6) | -1.111 | Not significant
3. Percentage of rework                           | 31.4 (16.9) | 17.9 (8.0)  | 2.532  | Significant
4. Time required for an error correction (days)   | 25.0 (23.7) | 29.0 (15.6) | -0.497 | Not significant
5. Percentage of recurrent repairs, part 1        | 26.7 (11.8) | 4.8 (8.1)   | 4.647  | Significant
   Percentage of recurrent repairs, part 2        | 13.8 (8.7)  | 4.8 (8.1)   | 2.239  | Significant
6. Error detection effectiveness - % discovered by the customer (global performance metric for the entire study period) | 9.7 | 0.3 | No statistical analysis was possible

table 4  Quality performance comparison - Alpha vs. Beta
of error identification phases along the development
process. Table 6 presents for Alpha and Beta the
percentages of error identification for the various
development phases.
Table 6 reveals entirely different distributions of the error identification phases for Alpha and Beta. While
Alpha identified only 11.5 percent of its errors in the
requirement definition, analysis, and design phases,
Beta managed to identify almost half of the total errors
during the same phases. Another delay in error identi-
fication is noticed in the unit testing phase. In the unit
testing phase Alpha identified fewer than 4 percent of
errors identified by testing while Beta identified in the
same phase more than 20 percent of the total errors
(almost 40 percent of errors identified by testing). The
delay in error identification by Alpha is again apparent when comparing the percentage of errors identified
during the integration and system tests: 75 percent
for Alpha compared to 35 percent for Beta. However,
the most remarkable difference between Alpha and
Beta is in the rate of errors detected by the customers:
9.7 percent for Alpha compared to only 0.3 percent
for Beta. This enormous difference in error detection
efficiency, as well as the remarkable difference in error
density, is the main contribution to the higher quality
level of Betas software process.
Further investigation of the causes of Betas higher
quality performance leads one to data related to
resources distribution along the development process.
Table 7 presents the distribution of the development
resources along the development process, indicating noteworthy differences between the developers.
Examination of the data presented in Table 7
reveals substantial differences in resource distribu-
tion between Alpha and Beta. While more than a third
of the resources are invested by Betas team in the
requirement definition, analysis, and design phases,
Alphas team investments during the same phases are
negligible. Furthermore, while Alpha invests about
half of the development resources in software testing
and the consequent software corrections, the invest-
ments of Beta in these phases are less than a quarter of the total project resources. It may be concluded that the shift of resources invested downstream by Alpha
resulted in a parallel shift downstream of the distribu-
tion of error identification phases (see Table 6). The
very low resource investments of Beta in correction
of failures identified by customers as compared with
Alphas investments in this phase correspond well to the
differences in error identification distribution between
the developers. It may be concluded that the enormous
difference in the error detection efficiency as well as the
SQA metrics                                       | Comparison of organizations methodology: Beta's performance compared with Alpha's (%) | Before-after methodology: CMM level 1 advancement to CMM level 3, mean performance improvement* (%)
1. Error density (errors per 1,000 lines of code) | 70% reduction (significant)            | 76% reduction
2. Productivity (lines of code per working day)   | 36% increase (not significant)         | 72% increase
3. Percentage of rework                           | 43% reduction (significant)            | 65% reduction
4. Error detection effectiveness                  | 97% reduction (not tested statistically) | 84% reduction
* According to Galin and Avrahami (2005; 2006)

table 5  Quality performance improvement results - methodology comparison
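The percentages in the left-hand column of Table 5 can be reproduced from the Table 4 means (and, for error detection effectiveness, from the customer-detected percentages of 9.7 and 0.3). A quick check of that arithmetic; the function name is ours, for illustration only:

```python
def pct_change(before, after):
    """Rounded percentage reduction (positive) or increase (negative)."""
    return round(100.0 * (before - after) / before)

# Means taken from Table 4: Alpha first, Beta second
error_density_reduction = pct_change(16.8, 5.0)     # 70% reduction
rework_reduction = pct_change(31.4, 17.9)           # 43% reduction
customer_detected_reduction = pct_change(9.7, 0.3)  # 97% reduction
productivity_increase = -pct_change(19.7, 26.7)     # 36% increase
```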
Development phases             | Alpha: identified errors (%) | Alpha: cumulative (%) | Beta: identified errors (%) | Beta: cumulative (%)
Requirement definition         | 5.8  | 5.8   | 33.8 | 33.8
Design                         | 5.7  | 11.5  | 9.0  | 42.8
Unit testing                   | 3.8  | 15.3  | 22.3 | 65.1
Integration and system testing | 75.0 | 90.3  | 34.6 | 99.7
Post delivery                  | 9.7  | 100.0 | 0.3  | 100.0

table 6  Error identification phase - Alpha vs. Beta
remarkable difference in error density are the product of
the downstream shift of the distribution of the software
process resource. In other words, they are the demon-
stration of the results of a crippled implementation
of the development methodology that actually begins the software process at the programming phase. This
crippled development methodology yields a software
process of a substantially lower productivity, followed
by a remarkable increase in error density and a colossal
reduction of error detection efficiency.
At this stage it would be interesting to compare the authors' findings regarding the differences in resource distribution between Alpha and Beta with those of Herbsleb et al.'s study. A comparison of findings regarding resource distribution along the development process for the current study and the Texas Instruments projects is presented in Table 8.
The findings by Herbsleb et al. related to Texas
Instruments projects indicate that the new (improved)
development methodology focuses on upstream devel-
opment phases, while the old methodology led the team
to invest in coding and testing. In other words, while
in the improved development methodology project 40
percent of the development resources were invested in the requirement definition and design phases, only 8 percent of the resources of the old methodology project
were invested in these development phases. Herbsleb
et al. also found a major difference in resource invest-
ments in unit testing: 18 percent of the total testing
resources by the old methodology project compared
to 90 percent by the improved methodology project.
Herbsleb et al. believe that the change of development
methodology, as evidenced by the change in resource
distribution along the software development process,
yielded the significant reduction in error density (from 6.9 to 2.0 defects per thousand lines of code) and a remarkable reduction in resources invested in customer
support after delivery (from 23 percent of total project
resources to 7 percent). These findings by Herbsleb et
al. closely resemble the current case study findings.
Development phase                               | Alpha: resources invested (%) | Alpha: cumulative (%) | Beta: resources invested (%) | Beta: cumulative (%)
Requirement definition and design               | Negligible | 0     | 34.5 | 34.5
Coding                                          | 46.5       | 46.5  | 41.5 | 76.0
Software testing                                | 26.0       | 72.5  | 14.0 | 90.0
Error corrections according to testing results  | 22.5       | 95.0  | 9.5  | 99.5
Correction of failures identified by customers  | 5.0        | 100.0 | 0.5  | 100.0

table 7  Project resources according to development phase - Alpha vs. Beta
Development phase              | Old development methodology project: resources invested (%) | Cumulative (%) | New (improved) development methodology project: resources invested (%) | Cumulative (%)
Requirement definition         | 4  | 4   | 13 | 13
Design                         | 4  | 8   | 27 | 40
Coding                         | 47 | 55  | 24 | 64
Unit testing                   | 4  | 59  | 26 | 90
Integration and system testing | 18 | 77  | 3  | 93
Support after delivery         | 23 | 100 | 7  | 100

table 8  Texas Instruments project resources distribution according to development phase - old development methodology project vs. new (improved) development methodology project. Source: Herbsleb et al. (1994)
CONCLUSIONS
The quantitative knowledge of the expected software process performance improvement is of great importance to the software industry. The available quantitative results are based solely on studies per-
formed according to the before-after methodology. The
current case study supports these results by apply-
ing an alternative methodologythe comparison of
organizations methodology. As the examination of the
results obtained by the use of an alternative study
methodology is important, the authors recommend
performing a series of case studies applying the com-
parison of organizations methodology. The results of
these proposed case studies may support the earlier
results and add substantially to their significance.The current case study is based on existing correc-
tion records and other data that became available to
the research team. Future case studies applying the
comparison of organizations methodology that will be
planned at earlier stages of the development project
may participate in the planning of the project manage-
ment data collection, and enable collection of data for a
wider variety of software process performance metrics.
REFERENCES

Blair, R. B. 2001. Software process improvement: What is the cost? What is the return on investment? In Proceedings of the Pittsburgh PMI Conference, April 12.
Diaz, M., and J. King. 2002. How CMM impacts quality, productivity, rework, and the bottom line. Crosstalk 15, no. 1: 9-14.
Franke, R. 1999. Achieving Level 3 in 30 months: The Honeywell BSCE
Case. Presentation at the 4th European Software Engineering Process
Group Conference, London.
Galin, D., and M. Avrahami. 2005. Do SQA programs work - CMM works: A meta analysis. In Proceedings of the IEEE International Conference on Software Science, Technology & Engineering, Herzlia, Israel, 22-23 February. IEEE Computer Society Press, Los Alamitos, Calif.: 95-100.
Galin, D., and M. Avrahami. 2006. Are CMM programs beneficial?
Analyzing past studies. IEEE Software 23, no. 6: 81-87.
Goldenson, D. R., and D. L. Gibson. 2003. Demonstrating the impact and benefits of CMMI: An update and preliminary results (CMU/SEI-2003-SR-009). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Herbsleb, J., A. Carleton, J. Rozum, J. Siegel, and D. Zubrow. 1994.
Benefits of CMM-based software process improvement: Initial results
(CMU/SEI-94-TR-013). Pittsburgh: Software Engineering Institute,
Carnegie Mellon University. Available at: http://www.sei.cmu.edu/
publications/documents/94.reports/94.tr.013.html.
Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004a. Does quality certification improve software industry's operational performance? Software Quality Professional 5, no. 1: 30-37.
Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004b. Does quality certification improve software industry's operational performance? Supplemental material. Available at http://www.asq.org.
ISO. 1997. ISO 9000-3 Guidelines for the application of ISO 9001:1994 to
the development, supply, installation and maintenance of computer soft-
ware. Geneva, Switzerland: International Organization for Standardization.
Jung, H. W., and D. R. Goldenson. 2003. CMM-based process improvement
and schedule deviation in software maintenance (CMU/SEI-2003-TN-015).
Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Keeni, G. 2000. The evolution of quality processes at Tata Consultancy
Services. IEEE Software 17, no. 4: 79-88. Available at: http://www.stsc.
hill.af.mil/crosstalk/1999/05/oldham.pdf.
McGarry, F., R. Pajerski, G. Page, S. Waligora, V. Basili, and M. Zelkowitz. 1999. Software process improvement in the NASA Software Engineering Laboratory (CMU/SEI-94-TR-22). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.022.html.
Pitterman, B. 2000. Telcordia Technologies: The journey to high maturity.
IEEE Software 17, no. 4: 89-96.
RTCA. 1997. DO-178B Software considerations in airborne systems and
equipment certification, Radio Technical Commission for Aeronautics,
U.S. Federal Aviation Agency, Washington.
Zubrow, D., W. Hayes, J. Siegel, and D. Goldenson. 1994. Maturity questionnaire (CMU/SEI-94-SR-7). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

Carnegie Mellon, Capability Maturity Model, CMMI, and CMM are registered trademarks of Carnegie Mellon University.

CMM Integration and SEI are service marks of Carnegie Mellon University.
BIOGRAPHIES

Daniel Galin is the head of information systems studies at the Ruppin Academic Center, Israel, and an adjunct senior teaching fellow with the Faculty of Computer Science, the Technion, Haifa, Israel. He has a bachelor's degree in industrial and management engineering, and master's and doctorate degrees in operations research from the Israel Institute of Technology, Haifa, Israel. His professional experience includes numerous consulting projects in the areas of software quality assurance, analysis and design of information systems, and industrial engineering. He has published many papers in professional journals and conference proceedings. He is also the author of several books on software quality assurance and on analysis and design of information systems. He can be reached by e-mail at [email protected].
Motti Avrahami is VeriFone's global supply chain quality manager. He has more than nine years of experience in software quality processes and software testing. He received his master's degree in quality assurance and reliability from the Technion, Israel Institute of Technology. He can be contacted by e-mail at