
Improving Software Development Tracking and Estimation Inside the Cone of Uncertainty

Pongtip Aroonvatanaporn, Thanida Hongsongkiat, and Barry Boehm
Center for Systems and Software Engineering

University of Southern California
Los Angeles, CA, USA

{aroonvat, thongson, boehm}@usc.edu

ABSTRACT
Software cost and schedule estimations are fundamental in software development projects as they determine the scope and the resources required. With accurate estimations, the project's goals can be assured within the available resources. However, developing accurate and realistic estimates requires a high level of experience, expertise, and historical data. Oftentimes, once the resources have been estimated, little is done to reduce the uncertainties in the estimations as the project progresses through its life cycle. To address this issue, we have developed the COTIPMO tool, an implementation of the COnstructive Team Improvement Process MOdel framework, to help automate the recalibration and estimation improvement processes. The tool allows software development teams to effectively track their development progress, assess the team's performance, and adjust the project estimates based on the assessment results. The COTIPMO tool has been used by 13 software engineering projects, and the results are presented in this paper.

Categories and Subject Descriptors
D.2 [Software Engineering]: Management—cost estimation, life cycle, time estimation, software process models

General Terms
Management, Measurement, Human Factors, Economics

Keywords
Cost Estimation, COCOMO II, Uncertainty, Continuous Assessment, Project Planning

1. INTRODUCTION
Good team performance and accurate software cost and schedule estimations are essential in determining the quality and timely delivery of the final product. The 2009 Standish Report found that out of the 9000 projects surveyed, 32% were delivered with full capability within budget and schedule, 24% were cancelled, and 44% were either over budget, over schedule, or undelivered [22]. This shows that nearly half of the projects were unsuccessful due to issues related to cost and schedule estimations.

With more accurate estimations and fewer uncertainties within the team, the number of failed or over-budget projects could be reduced significantly. Yet, producing accurate and realistic estimates requires a high level of expertise and experience as well as good historical data. This is a luxury that software development teams often lack, as these data and resources are not always readily available. Although the initial estimates for cost and schedule are important in determining the time, resources, and budget required for project completion, the ability to adapt the estimations to changing environments, requirements, and performances allows teams to better control quality and helps ensure timely delivery of the products.

In this paper, we introduce the COTIPMO tool, an implementation of the COnstructive Team Improvement Process MOdel developed in [2]. The tool allows software development teams to quickly track their development progress, assess the team's performance, and adjust their estimations based on the team's status. With better tracking and estimation mechanisms, the number of uncertainties in team performance and estimation can be reduced as the project progresses through its life cycle. This allows the team to continuously monitor its ability to complete the project within the available resources, budget, and time. We have deployed the tool at the University of Southern California (USC) and experimented with 13 software engineering projects. The results of the tool application are analyzed and discussed in this paper.

The rest of this paper offers an approach for dealing with problems related to uncertainties in project tracking and estimation. We discuss in detail the common problems that occur due to these uncertainties and the motivation behind developing this tool. It will be shown that the use of the COTIPMO tool significantly improves team performance and reduces estimation uncertainties within the project. Finally, we conclude with a discussion of our plans for future work.

1.1 Terms and Definitions
Development project refers to a project in which the product must be developed from scratch. The development team must write the majority of the source code to implement the end-user functionalities.

NDI-intensive project refers to a project that aims at integrating and/or tailoring either one or a set of non-developmental items (NDI) or commercial off-the-shelf (COTS) products. As defined in [14], this is when 30-90% of the end-user features and capabilities are provided by the NDI or COTS products.

Figure 1: The cone of uncertainty in software cost and size estimation

2. PROBLEMS AND MOTIVATION
The main motivation behind this research is the well-known "cone of uncertainty" defined in [5] and calibrated to completed projects in [6]. Figure 1 shows that until the project is completed and delivered, the project can result in a wide range of products due to the various levels of uncertainty.

In certain development paradigms where cost or schedule is fixed, the project scope must be adjusted to compensate for any changes in environment, resources, or uncertainties in the project. These paradigms include the Schedule As An Independent Variable (SAIV) and Cost As An Independent Variable (CAIV) approaches covered in [8]. Because uncertainties cannot be avoided, projects must be able to adapt to these changing environments; otherwise, schedules can slip, costs and budgets may overrun, and product quality may suffer significantly.

Menzies et al. [18] discussed the two approaches to effort estimation: model-based and expert-based methods. While the model-based method requires past data to make predictions for future projects, the expert-based method utilizes the experience and expertise of the estimators to develop meaningful estimates. Our research framework and tool combine the two methodologies to improve the project estimates during the project's life cycle.

2.1 Inaccurate Project Estimations
Software development teams often do not have sufficient data and information to develop accurate cost and schedule estimations for the project to be developed. Without the necessary data, it is nearly impossible for teams to make proper predictions with respect to project scope, complexity, and resources required. These data include aspects and attributes that are specified in the COCOMO II model in [6]. Typically, projects progress through their life cycle based on these inaccurate estimates. This means that regardless of how well or poorly the projects progress, the estimates remain constant.

Once the project proceeds into its life cycle, the status and progress of the project are often not properly assessed by the team in order to analyze the accuracy of the estimates. There is a significant number of uncertainties at the beginning of the project, as there is instability in requirements and there are many directions in which the project can proceed. This is clearly shown in the well-known "cone of uncertainty". With these levels of uncertainty, project estimates are typically not realistic or accurate.

2.2 Lack of Effective Tracking and Assessment Tools

To date, there have been no tools or data that monitor the evolution of the project's progression in the "cone of uncertainty". In order to collect enough information for useful assessment data, the teams are required to perform various surveys and reviews. Due to the tediousness and complexity of assessing project status and performance, teams are discouraged from performing these processes regularly [15].

Furthermore, in traditional processes, to accurately report the progress of software development projects, the teams are usually required to carefully count the source lines of code (SLOC) developed, analyze the logical lines of code, and compare them to the potentially inaccurate estimates discussed in section 2.1. These tasks require a significant amount of effort to perform manually; thus, the processes are discouraging to the teams. The more discouraging the processes are, the less they get done effectively.

Without the proper tools or mechanisms to assess the team's and project's status and performance, the project would progress through its life cycle with a high level of uncertainty. As the cone of uncertainty remains wide, the project estimates also remain uncertain and inaccurate, while the project performance remains unimproved.

2.3 Limitations in Software Cost Estimation Models

There is little that software cost estimation models can compensate for when software projects lack the necessary information and knowledge at the time when cost and schedule are estimated. Moreover, most estimation models require a thorough understanding of the model parameters as well as a certain level of expertise in order to be used effectively. Without proper understanding, teams may end up overstating the capability of the team's personnel or understating the complexity of the project. These misrepresentations lead to the inaccurate and unrealistic estimations discussed in section 2.1.

Additionally, software projects are prone to changes in requirements, designs, and specifications as they progress through the life cycle; therefore, the uncertainties in resource estimations are constantly changing. This may be more apparent in agile projects or when clients are overenthusiastic. Software estimation models cannot automatically adapt to these varying environments.

3. BACKGROUND AND RELATED WORK
To date, there are various techniques used for project tracking and assessment as well as for software sizing and estimation. All of the methods that we describe in this section are commonly used in the industry and have certain levels of tool support.

3.1 Project Tracking and Assessment Tools
Presented in [24], the Program Evaluation and Review Technique (PERT) network charts enable projects to effectively manage uncertainties by providing mechanisms to identify critical paths and dependencies between tasks and activities. The tasks and their corresponding paths are updated so the progress of the project is visible to the developers and other stakeholders. However, even with proper tool support, the number of tasks and dependencies can grow very large fairly quickly, especially when tasks and dependencies are not properly defined. As the charts grow large, they require high overhead to maintain and may be disregarded by the development teams due to their complexity.

The Goal-Question-Metric (GQM) approach in [4] is another popular method for progress tracking and measurement, with the ability to capture measurements at the conceptual, operational, and quantitative levels. This allows the assessment process to align with the organization environment as well as the project context. Various tools have been developed to leverage the GQM method, since GQM plans can grow very large and complex. The "GQM tool" developed in [17] automates the GQM process to help manage the GQM plans, while the GQM-PLAN tool developed in [1] integrates the use of GQM into each stage and phase of the project life cycle. However, the GQM approach is only useful and effective when used correctly by specifying the appropriate goals, questions, and measurements to be monitored.

Furthermore, the Earned-Value Management (EVM), Burn-up, and Burn-down charts in [9] are good for capturing project progress based on the team's velocity and completed features. However, these approaches are not effective at responding to major changes during each iteration. EVM requires high overhead to accurately report the progress and to map it to the requirements for the earned-value analysis. When projects constantly change, the development teams may end up spending more time updating the earned values instead of spending it on the actual development activities. On the other hand, in Burn-up and Burn-down charts, when major changes occur, the charts may end up showing no progress, as the shift in requirement priorities may prevent certain features from being completed. This prevents the team from accurately determining the actual progress and the productivity rates of the developers.

3.2 Software Sizing and Estimation Tools
Story points in [10] are commonly used among agile development processes. The method allows the team to analyze the team's velocity, or productivity, based on the story points completed and use those data for planning future iterations. Planning Poker in [12] is the tool often used for estimating the complexity of the story points. However, the method requires expert opinions and analogies in order to estimate accurately. Furthermore, in most cases, re-estimations only take place when story sizes change and not based on changes in the development progress.
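As a concrete illustration of the velocity-based planning mentioned above, the sketch below computes a team's velocity from completed story points and uses it to forecast the remaining iterations. This is the generic agile calculation, not a procedure prescribed in any particular form by [10] or [12].

# Generic story-point velocity forecast: average the points completed in
# past iterations and divide the remaining backlog by that velocity.
from math import ceil

def forecast_iterations(completed_per_iteration, remaining_points):
    """completed_per_iteration: story points finished in each past iteration."""
    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    return ceil(remaining_points / velocity)

# Example: a team averaging about 20 points per iteration with 65 points left.
print(forecast_iterations([18, 22, 20], 65))   # -> 4 iterations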

The PERT sizing method in [20] focuses on sizing the individual components based on the size distributions (optimistic, most likely, and pessimistic) provided by the developers. Many tools, such as SEER-SEM [11], COCOMO 81 [5], and PRICE-S [19], utilize PERT for sizing and estimation. The method reduces the bias towards overestimation and underestimation, although people tend to choose the "most likely" estimates towards the lower limit, while the actual product sizes cluster towards the upper limit. Based on [5], this underestimation bias is due to the following reasons:

• People are optimistic and tend to have the desire to please.

• People tend to not have a complete recall of past experiences.

• People are generally not familiar with the entire software job.
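For reference, the PERT sizing method described before the list combines the three developer-supplied size estimates using the standard three-point (beta-distribution) weighting shown below. This is the commonly used PERT formula, not a formula quoted from [20], so the exact weights used by any particular tool may differ:

E[\mathit{Size}] = \frac{a + 4m + b}{6}, \qquad \sigma \approx \frac{b - a}{6}

where a, m, and b are the optimistic, most likely, and pessimistic size estimates for a component.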

The COCOMO-U model developed in [25] extends the COCOMO II model to allow creating estimations even when some of the parameters are unknown or uncertain. It utilizes a Bayesian Belief Network (BBN) to handle these uncertainties. However, in order for the tool to compute correctly, the users must be knowledgeable and have expertise in specifying the uncertainty of the unknown cost drivers.

4. THE COTIPMO TOOL
The COTIPMO tool is an implementation of the COTIPMO framework in [2]. Having an effective tool is essential in realizing the potential of the framework. We focused on the ease of use and usability of the tool, as a tedious tool can discourage development teams from using it.

4.1 The Process Model Framework
The COTIPMO framework relies heavily on the COCOMO II estimation model developed in [6] and the Unified CodeCount (UCC) tool in [23] for effective software tracking and estimation. In addition, it uses the concepts from IBM Self-Check in [15] and [16] for team retrospectives as well as quick assessments of the team's status and performance. As mentioned earlier in section 2, the COTIPMO framework bridges the gap between model-based and expert-based estimation methods. The framework utilizes data collected as the project progresses through its life cycle, assesses and analyzes them, and automatically suggests adjustments to the COCOMO II estimation parameters for improved expert judgments. Moreover, the framework utilizes the pre-calibrated COCOMO II model, which further contributes to the model-based aspect of the framework.

The model consists of three main parts:

• Project progress tracking
• Continuous team assessment
• COCOMO II estimation adjustments


In the framework model, the project tracking and assessments are to be done consistently as the project progresses. The framework is expected to be used on a per-project basis, so it does not require data from past projects; instead, it uses data continuously collected since the beginning of the project and analyzes them for future improvements to team performance and estimations. This means that regardless of how inexperienced the team may be in estimating project cost, or how poorly and inaccurately the cost and schedule were estimated, the framework enables these estimates to improve over time throughout the project's life cycle.

Details of the COTIPMO process framework can be foundin [3] and [2].

4.2 Powered by Jazz
We have chosen IBM Jazz [13] as the supporting platform for the COTIPMO tool for its scalability and strong support for highly collaborative environments. Jazz provides the following foundational capabilities that are currently utilized by COTIPMO:

• User management
• Project and team management
• Resource and repository management
• RESTful web services architecture

Moreover, Jazz has a highly extensible architecture that allows collaboration and integration with other life cycle applications and tools working under a single logical server. This means that the COTIPMO tool can be extended to be used as part of existing project management tools such as Rational Team Concert [21]. The potential extensions to the COTIPMO tool will be discussed later in this paper.

4.3 Use with Development Projects
The COTIPMO tool provides substantial benefits for development projects due to the integration of the UCC tool. The automatic code counting capability takes away the complexity of the size and progress reporting process, enabling the development teams to quickly assess their progress and productivity. The tool uses the COCOMO II estimation model to convert SLOC to effort using the formulas shown in equations 1 and 2. Instead of estimating the Size variable, we use the SLOC of the developed source code reported by the UCC tool to compute the equivalent effort. This allows the tool to calculate the amount of effort spent on development and use that as a basis to estimate the effort required to complete the project.

PM = A \times Size^{E} \times \prod_{i=1}^{17} EM_i \qquad (1)

E = 0.91 + 0.01 \times \sum_{j=1}^{5} SF_j \qquad (2)

Where:

• A = 2.94 (a constant derived from historical project data)
• PM is the effort in person-months
• Size is in KSLOC
• EM_i is the effort multiplier for the ith cost driver
• SF_j is the jth scale factor, used to compensate for the economies or diseconomies of scale
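For illustration, a minimal sketch of this SLOC-to-effort conversion is given below. It implements equations 1 and 2 directly; the rating-to-value tables for the effort multipliers and scale factors are published in [6], so they are passed in here as plain numbers, and the nominal-rating scale factor values in the example are illustrative.

# Sketch of the COCOMO II effort computation used for SLOC-to-effort
# conversion (equations 1 and 2). The rating-to-value tables for the
# 17 effort multipliers and 5 scale factors come from [6]; here they
# are passed in directly rather than looked up.
from math import prod

A = 2.94  # calibration constant (COCOMO II.2000)
B = 0.91  # base exponent constant

def cocomo2_effort(ksloc, effort_multipliers, scale_factors):
    """Return estimated effort in person-months.

    ksloc              -- size in thousands of logical SLOC
    effort_multipliers -- 17 EM_i values (nominal rating = 1.0)
    scale_factors      -- 5 SF_j values from the COCOMO II tables
    """
    E = B + 0.01 * sum(scale_factors)                    # equation (2)
    return A * (ksloc ** E) * prod(effort_multipliers)   # equation (1)

# Example: 30 KSLOC, all-nominal effort multipliers, illustrative SF values.
if __name__ == "__main__":
    ems = [1.0] * 17
    sfs = [3.72, 3.04, 4.24, 3.29, 4.68]   # illustrative nominal-rating values
    print(round(cocomo2_effort(30, ems, sfs), 1), "person-months")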

Figure 2 shows the main screen of the COTIPMO tool for a development project. The screen consists of 3 main sections: 1) the "Initial Project Estimates", 2) the "Iteration List", and 3) the "Project Progress". The "Initial Project Estimates" section allows the team to enter the initial estimates for the project. Based on the information known at the beginning of the project, the development team specifies the modules planned for development and all the corresponding COCOMO II information for each module.

As the project progresses, the team periodically adds iterations to the tool, which show up in the "Iteration List" section. For each iteration, if the development of source code has not started, the team can update their estimates based on the team's status and knowledge. Otherwise, they can upload the source code files and the COTIPMO tool automatically counts the number of logical lines of code in each file. The tool accumulates the lines of code for all modules and, using the COCOMO II model, computes the equivalent effort. The developers then enter the percentage developed, tested, and integrated for each module. All of these data are used to compute the new estimated effort required to complete the project. The progression of the project with respect to the team's accumulated effort spent and estimated effort is shown in graphical format in the "Project Progress" section, which allows the teams to see their development progress as well as the improvements to their estimations. The detailed implementation of the source code tracking framework is discussed in [3].
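The exact bookkeeping behind this re-estimation is defined in [3]; the sketch below shows only one plausible way to combine the inputs named above (counted logical SLOC per module plus the reported percentages), assuming the three percentages are averaged into a completion fraction and reusing the cocomo2_effort() helper from the previous listing. It is a hypothetical illustration, not the tool's algorithm.

# Hypothetical per-iteration bookkeeping for a development project.
# modules: list of dicts with keys counted_ksloc, planned_ksloc,
# pct_developed, pct_tested, pct_integrated.
def remaining_effort(modules, effort_multipliers, scale_factors):
    total_pm_at_completion = 0.0
    total_pm_spent = 0.0
    for m in modules:
        # Average the three reported percentages into a completion fraction
        # (an assumption made for this sketch, not the tool's stated rule).
        done = (m["pct_developed"] + m["pct_tested"] + m["pct_integrated"]) / 300.0
        if done > 0:
            projected_ksloc = m["counted_ksloc"] / done  # projected size at completion
        else:
            projected_ksloc = m["planned_ksloc"]         # fall back to the plan
        pm_total = cocomo2_effort(projected_ksloc, effort_multipliers, scale_factors)
        total_pm_at_completion += pm_total
        total_pm_spent += pm_total * done
    return total_pm_at_completion - total_pm_spent      # estimate to complete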

For every iteration created, the COTIPMO tool automatically generates a corresponding survey for each team member to complete. Figure 3 shows the survey ballot to be submitted by each member. As mentioned in section 4.1, the concept of the survey assessment is based largely on the IBM Self-Check methodology [16]. Each team member fills out and submits the survey individually without knowing each other's answers in order to reduce bias in the answers. Figure 4 displays the result of the survey showing the answers given by each team member. The standard deviation is computed to detect any inconsistencies between the answers for each question. A high deviation in the answers means that there are differences in opinions or understandings within the team; thus, a flag is raised for the team to discuss that specific question. The team then identifies actions to take in order to resolve or prevent those issues in the upcoming iterations. Figure 5 shows the list of all the actions and the corresponding iteration in which they were identified. The team is able to mark each action as "resolved" once they have successfully addressed the originating issue and completed that specific action. This allows the team to effectively keep track of the tasks that they need to perform as well as any outstanding problems that exist within the team and project.
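The disagreement check described above reduces to a per-question spread computation. The sketch below illustrates the idea; the flagging threshold is an assumption for illustration, as the paper does not state the cutoff COTIPMO uses.

# Illustrative per-question disagreement check for the survey results.
# The threshold value is an assumption; COTIPMO's actual cutoff is not
# given in the paper.
from statistics import pstdev

def flag_questions(answers_by_question, threshold=1.0):
    """answers_by_question: {question_id: [numeric answers, one per member]}.
    Returns the question ids whose answers disagree enough to discuss."""
    flagged = []
    for qid, answers in answers_by_question.items():
        if len(answers) > 1 and pstdev(answers) > threshold:
            flagged.append(qid)
    return flagged

# Example: question "q3" would be flagged for team discussion.
print(flag_questions({"q1": [4, 4, 5, 4], "q3": [1, 5, 2, 5]}))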

Finally, the COTIPMO tool analyzes the survey data and automatically computes the adjustments that should be made to the COCOMO II parameters.


Figure 2: The iteration list for development projects

Figure 3: Survey ballot

Figure 4: Survey result - answers

Figure 5: List of identified actions.

These adjustments are suggested to the development team as shown in figures 6 and 7. The suggestions are reflective of the answers given by all the team members. Since each survey question has a different level of impact on each of the COCOMO II parameters, these suggestions are calculated based on the model discussed in section 4.1. The number of arrows (1, 2, or 3) represents the level of adjustment that should be made to the corresponding COCOMO II parameter. One arrow represents a minor increase or decrease in the rating, while three arrows suggest that a major change is required. The developers must then use their judgment to make any necessary adjustments to the COCOMO II ratings based on these recommendations.
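The arrow-based presentation amounts to bucketing a per-parameter adjustment score into three levels of severity. A minimal sketch is below; the score scale and the bucket boundaries are assumptions for illustration, since the paper only states that one arrow means a minor change and three arrows a major one.

# Hypothetical mapping from an aggregated adjustment score to the 1-3
# arrow suggestion shown in figures 6 and 7. Bucket boundaries are
# assumptions for illustration only.
def arrows(score):
    """score: signed adjustment magnitude for one COCOMO II parameter,
    aggregated from the survey answers (larger magnitude = bigger change)."""
    magnitude = abs(score)
    direction = "increase" if score > 0 else "decrease"
    if magnitude < 0.5:
        return None                      # no suggestion for this parameter
    level = 1 if magnitude < 1.5 else 2 if magnitude < 2.5 else 3
    return direction, level              # e.g. ("decrease", 3) = major drop

print(arrows(-2.7))   # -> ('decrease', 3)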

4.4 Use with NDI-intensive Projects
The COTIPMO tool also provides strong support for NDI-intensive projects. It utilizes the COCOMO II Application Point estimation model in [6] for effort estimation and tracking. The process of reporting progress in the Application Point model is much less complex compared to the regular COCOMO II model.


Figure 6: Survey Result - COCOMO II scale factors adjustment suggestions

Figure 7: Survey Result - COCOMO II cost drivers adjustment suggestions

Instead of reporting the number of SLOC written, the development team reports the number of screens, reports, and third generation language (3GL) components developed, customized, and configured. These are called application points. Additionally, the Application Point model uses the developer's capability and experience and the integrated computer-aided software engineering (ICASE) maturity and experience levels to calculate the productivity rate as well as the capability level of the team. The effort spent on development and the estimated effort required to complete the project are computed based on this information.

Similar to a development project, the team continuously adds iterations as the project progresses. Figure 8 shows the main screen listing the iterations for NDI-intensive projects. The team reports the number of application points completed up to each iteration as well as the percentage developed and tested for each application point. The tool uses the COCOMO II Application Point model to compute the New Application Points (NAP), converts them into equivalent effort, and computes the new estimates based on these data.
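For readers unfamiliar with the Application Point model referenced above, the sketch below shows the shape of the computation: counted application points are converted into effort by dividing by a productivity rate that depends on developer capability and ICASE maturity. The complexity weights and productivity values shown follow the commonly published Application Point procedure, but they should be treated as illustrative; the definitive tables are in [6].

# Sketch of the COCOMO II Application Point effort computation used for
# NDI-intensive projects. Treat the numeric tables as illustrative and
# consult [6] for the definitive values.
WEIGHTS = {  # application-point weight per element, by complexity
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl":    {"difficult": 10},   # 3GL components are rated difficult
}

# New application points per person-month, indexed by the combined
# developer-capability / ICASE-maturity rating.
PRODUCTIVITY = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}

def application_point_effort(elements, reuse_pct, rating):
    """elements: list of (kind, complexity) tuples; reuse_pct: % reused."""
    ap = sum(WEIGHTS[kind][complexity] for kind, complexity in elements)
    nap = ap * (100 - reuse_pct) / 100.0        # New Application Points
    return nap / PRODUCTIVITY[rating]           # effort in person-months

# Example: 10 medium screens, 4 simple reports, 1 3GL component,
# 20% reuse, nominal team capability.
elems = [("screen", "medium")] * 10 + [("report", "simple")] * 4 + [("3gl", "difficult")]
print(round(application_point_effort(elems, 20, "nominal"), 2), "person-months")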

For every iteration, the team members are also required to complete the survey assessments individually. However, instead of suggesting adjustments to the scale factors and cost drivers, the COTIPMO tool analyzes the assessment data and suggests changes to the developer's capability and experience and the ICASE maturity and experience levels, which are the two dynamic parameters that affect the productivity rate of the team. Figure 9 shows the suggestions computed by the COTIPMO tool for NDI-intensive projects.

5. OBTAINING THE DATA
The experimentation of the tool was done in a classroom environment using the data obtained from the graduate software engineering course at USC. In the two-semester, team-project-based course sequence CSCI577ab, students learn to use best software engineering practices to develop software systems from the Exploration phase to the Operations phase, adopting the Incremental Commitment Spiral Model (ICSM) [7] as the development process. Each team consists of five or six on-campus students with generally less than 2 years of working experience, and one or two off-campus students who are full-time professionals with at least 5 years of industry experience. Typically, the on-campus students act as operational concept engineers, requirements engineers, software architects, UML modelers, coders, life cycle planners, and feasibility analysts, while the off-campus students take on the roles of Integrated Independent Verification and Validation (IIV&V) personnel, quality assurance personnel, and testers. The course includes both development projects and NDI-intensive projects, which are completed within either a 12-week (1-semester) or 24-week (2-semester) schedule depending on their scopes and complexities [14].

The COTIPMO tool was deployed at USC during the Fall 2011 semester. The semester consisted of 79 graduate students making up 13 project teams, of which 5 were development projects and 8 were NDI-intensive projects. The teams started using the COTIPMO tool immediately after the requirements had been gathered and continued to use the tool weekly to report development progress and to recalibrate their project estimates throughout the semester. By the end of the semester, 4 projects were completed with products fully delivered to the clients, while the remaining projects continued into the next semester.

Throughout the life cycle of the projects, we collected data on various aspects, including the following:

• COCOMO II estimation data (i.e., scale drivers, cost drivers, application points)

• Project issues and defects
• Individual and team effort spent on project activities
• Client satisfaction

6. ANALYSIS AND DISCUSSION
We analyzed the data obtained from the software engineering projects and compared them with those from previous years to observe the differences in project performance.

6.1 Improved Software Estimation Accuracy
First, we focused on the correctness of the COCOMO II estimation parameters. We analyzed the parameter ratings provided by each team, keeping track of the mistakes they made. A rating is considered incorrect when it is not appropriate to the team or project status. For example, if a team rated its programmers' experience (APEX, PLEX, and LTEX) as high, but the team consisted of members with less than 2 years of industry experience, we counted these ratings as mistakes.


Figure 8: The estimation list for NDI-intensive projects

Figure 9: Survey Result - Developer's capability and experience level and ICASE maturity and experience level adjustment suggestions

Figure 10 shows the average mistakes in the COCOMO II scale factor and cost driver ratings across all the teams. Since each project had a different number of modules, we took the average of the cost driver mistakes across all modules for each project. Both the Fall 2009 and Fall 2010 semesters showed a consistent number of errors in the ratings and showed no improvements as the projects progressed. However, in Fall 2011, the projects showed fewer mistakes in their estimations after the COTIPMO tool was introduced. More importantly, the projects showed improvements over time, with better accuracy towards the end of the semester during the Foundations phase. To ensure the validity of the data, we selected sample sets from each year and had them evaluated and analyzed by an independent party. The mistakes identified were consistent with our initial analysis.

For all three years, the projects were carefully reviewed by the stakeholders at every major milestone.

Figure 10: Average mistakes in the COCOMO II ratings for scale factors and cost drivers.

The review process includes analyzing the accuracy, correctness, and appropriateness of the project estimates (i.e., the COCOMO II parameter ratings). These results show that even though the projects' estimates were periodically evaluated by the stakeholders to point out any errors, without proper guidance and direction for corrections, the estimates and COCOMO II parameter ratings were not effectively improved.

6.2 Improved Project Issues and Defects Detection


Figure 11: Average defects and issues filed by the team each year

Throughout the project life cycle, the IIV&V personnel independently review the projects and their artifacts to detect any inconsistencies and defects in the documentation as well as in the team's understanding in general. We analyzed the issues and defects that were filed and resolved by the teams, which were categorized into the following severities: 1) blocker, 2) critical, 3) major, 4) normal, 5) minor, 6) trivial, and 7) enhancement. Since these are project-related issues, we normalized the data and re-categorized them into normal and critical severities in order to make the observations more visible. The blocker, critical, and major issues were considered critical, which included any inconsistencies, misunderstandings, or errors that impact the project at the conceptual level. The rest of the data were categorized as normal, which included insignificant defects such as grammatical, typographical, and formatting errors.
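The re-categorization described above is a simple mapping from the seven issue-tracker severities to the two analysis buckets. A minimal sketch is below; the severity names follow the list in the text, while the dictionary-based representation is just one convenient encoding.

# Collapse the seven tracker severities into the two buckets used in the
# analysis: blocker/critical/major count as "critical", the rest as "normal".
CRITICAL = {"blocker", "critical", "major"}

def recategorize(issues):
    """issues: iterable of severity strings; returns counts per bucket."""
    counts = {"critical": 0, "normal": 0}
    for severity in issues:
        bucket = "critical" if severity.lower() in CRITICAL else "normal"
        counts[bucket] += 1
    return counts

print(recategorize(["blocker", "minor", "major", "trivial", "enhancement"]))
# -> {'critical': 2, 'normal': 3}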

Figure 11 shows that the average number of critical defects and issues decreased during the Fall 2011 semester even though the total number filed remained consistent with the previous years. We looked further into the behavior of the issues and defects by observing the rates at which they were filed during each project phase. Figure 12 shows that in Fall 2011, the average number of critical defects and issues remained lower than in the previous years through all the phases. It is especially interesting to observe the significant reduction during the Valuation phase. Since the requirements were negotiated and gathered during this phase, the number of uncertainties and inconsistencies was expected to be fairly high. However, with the use of the COTIPMO tool, the number of issues and defects recorded was significantly reduced. This is possibly due to the fact that the assessment mechanisms of the tool helped detect inconsistencies and potential problems that existed in the team early, before they turned into critical issues.

6.3 Improved Project Tracking
With the better progress tracking mechanism of the COTIPMO tool, 2 projects were able to deliver before the anticipated deadline during the Fall 2011 semester. The first project was initially planned for a 24-week schedule, but based on the progress tracking and re-estimations reported by the tool, the team was able to determine that the project only required half the resources and could be completed within a 12-week schedule instead.

Figure 12: Average defects and issues filed by the team during each phase

The project immediately proceeded into the construction phase and the product was delivered to the client with 100% of the end-user functionalities implemented. In previous years, when projects had to switch from a 24-week to a 12-week schedule, they required major re-scoping of features and capabilities in order to meet the new deadlines.

In addition, another project had to be re-scoped due to the client's shift in goals and needs. The project was initially planned to deliver a fully developed system on a 24-week schedule; however, the client changed the project scope and asked for a prototype of the system with only a subset of the end-user functionalities instead. The team updated their project sizes and estimates in the COTIPMO tool. Based on the project progress and the prototypes developed at that time, the tool reported that they had enough resources to complete the newly scoped project within a 12-week timeframe. The project was completed with the prototyped functionalities delivered to the client.

6.4 Reduced Project Effort
The use of the COTIPMO tool also showed benefits in other aspects of the software development process. Figure 13 shows a clear reduction in the effort spent on the projects during the Fall 2011 semester, when the COTIPMO tool was introduced to the projects. We looked into the details of the effort spent on the projects by breaking down the effort into the categories of project activities shown in figure 14. Since the development process and the projects' scopes, sizes, and complexities were similar for all 3 years, it is expected that the effort required for the various activities would also be similar. However, the Fall 2011 semester showed a significant reduction in the effort spent in the areas of communication and team synchronization. The continuous use of the COTIPMO tool allowed the teams to properly assess their progress and performance so issues and inconsistencies could be detected early, before they turned into critical problems. The earlier these are detected, the less effort is required to resolve them and to synchronize team understanding.

7. THREATS TO VALIDITY
Representativeness of projects. Most projects were small e-services projects, which may not represent the industry at a larger scale.


Figure 13: Average effort spent in hours by individuals on the project

Figure 14: Average effort spent in hours by individuals for each project activity

Nonetheless, the projects were done for real clients with real fixed schedules and costs. Also, all projects followed the same incremental development process and project activities that are used in the industry.

Representativeness of personnel. The student teams and their members may not be representative of industry personnel, since the majority of them had less than 2 years of industry experience. However, even though the on-campus students may be less experienced, the off-campus students and the clients were working professionals.

The unavoidable Hawthorne effect. The student teams of Fall 2011 were aware that the experiments were being conducted with the COTIPMO tool. This may have had some level of impact on the way the teams developed their project estimates using the COCOMO II model built into COTIPMO and caused the classic Hawthorne effect on the experiment. However, for all three years, the student teams were evaluated and graded based on their performance. This means that all the teams were equally motivated to produce correct and accurate estimation results. Furthermore, we were more interested in the improvements to the COCOMO II parameter ratings that occurred over time during the Fall 2011 semester, whereas the Fall 2009 and 2010 semesters showed no signs of improvement.

8. CONCLUSION AND FUTURE WORK

We have developed the COTIPMO tool, an implementation of the COTIPMO framework developed in [2], to aid software development teams in tracking their development progress, assessing their team's performance, and improving their project estimations throughout the project life cycle. The tool provides strong support for both development projects and NDI-intensive projects. For development projects, the team can benefit substantially from the integration of the UCC tool for automated sizing of the developed software and the use of the COCOMO II model for SLOC-to-effort conversion. For NDI-intensive projects, the tool utilizes the COCOMO II Application Point model, where the team reports the number of screens, reports, and 3GL components developed and customized.

As teams continuously assess themselves with the COTIPMO tool, issues and inconsistencies can be detected early, helping them reduce the number of uncertainties as they progress. The tool analyzes these assessment data to suggest adjustments to either the COCOMO II or the COCOMO II Application Point estimation parameters. This creates more realistic and accurate estimations reflecting the team's status instead of the potential "guess" work done by the teams.

The COTIPMO tool has been deployed at USC and was used by 13 software engineering projects. The teams showed significant improvements in project estimations and performance. Their estimates became more accurate and realistic over time, reflecting the teams' actual performance, while the team members were better synchronized and stabilized because potential problems could be detected early on, before they became critical issues or defects. Furthermore, some projects were able to deliver earlier than planned as they were able to effectively track their development progress, recalibrate their resource estimations, and realize their ability to complete within a shorter timeframe.

Our plan for future work is to extend the COTIPMO tool to integrate with configuration management tools such as Subversion, CVS, or Rational Team Concert. This would make the software development progress tracking process more automated, as the tool could constantly monitor the source code checked in to the version control system. The tool would also be able to utilize the differencing function of the UCC, allowing it to count the number of SLOC that were added, modified, and deleted from the previous version in addition to the total logical SLOC. This would make the tracking of progress even more accurate and realistic.

Another target for our future work is to experiment with the COTIPMO tool in the industry to verify and validate the framework and the tool when used on industry projects at a larger scale. To observe its effectiveness, we will also apply the tool to projects of different sizes and domains, as the majority of the projects that we have experimented with were small e-services projects.

9. ACKNOWLEDGMENTS
The authors would like to thank Bhargav Rajagopalan and Sergio Romulo Salazar for their effort in helping develop the COTIPMO tool.


10. REFERENCES
[1] J. C. Abib and T. G. Kirner. A GQM-based tool to support the development of software quality measurement plans. SIGSOFT Softw. Eng. Notes, 24:75-80, July 1999.
[2] P. Aroonvatanaporn, S. Koolmanojwong, and B. Boehm. COTIPMO: A COnstructive Team Improvement Process MOdel. In Proc. of the 2012 Int. Conf. on Software and Systems Process (ICSSP'12), Zurich, Switzerland, 2012.
[3] P. Aroonvatanaporn, C. Sinthop, and B. Boehm. Reducing estimation uncertainty with continuous assessment: tracking the "cone of uncertainty". In Proc. of the IEEE/ACM Int. Conf. on Automated Software Engineering (ASE'10), pages 337-340, Antwerp, Belgium, 2010.
[4] V. R. Basili. Applying the Goal/Question/Metric paradigm in the experience factory. In N. Fenton, R. Whitty, and Y. Lizuka, editors, Software Quality Assurance and Measurement: Worldwide Perspective, pages 21-44. International Thomson Computer Press, 1995.
[5] B. Boehm. Software Engineering Economics. Prentice-Hall, 1981.
[6] B. Boehm, C. Abts, A. W. Brown, S. Chulani, E. Horowitz, R. Madachy, D. J. Reifer, and B. Steece. Software Cost Estimation with COCOMO II. Prentice-Hall, 2000.
[7] B. Boehm and J. A. Lane. Using the Incremental Commitment Model to integrate system acquisition, systems engineering, and software engineering, 2006.
[8] B. Boehm, D. Port, L.-G. Huang, and W. Brown. Using the Spiral Model and MBASE to generate new acquisition process models: SAIV, CAIV, and SCQAIV. CrossTalk, pages 20-25, January 2002.
[9] A. Cockburn. Earned-value and burn charts (burn up and burn down). In Crystal Clear. Addison-Wesley, 2004.
[10] M. Cohn. Agile Estimating and Planning. Prentice-Hall, 2006.
[11] D. D. Galorath and M. W. Evans. Software Sizing, Estimation, and Risk Management. Auerbach Publications, 1st edition, March 2006.
[12] J. Grenning. Planning poker. http://renaissancesoftware.net/files/articles/PlanningPoker-v1.1.pdf, April 2002.
[13] Jazz Foundation. https://jazz.net/projects/jazz-foundation/.
[14] S. Koolmanojwong and B. Boehm. The Incremental Commitment Model process patterns for rapid-fielding projects. In Proc. of the 2010 Int. Conf. on New Modeling Concepts for Today's Software Processes: Software Process (ICSP'10), pages 150-162, Paderborn, Germany, 2010.
[15] W. Krebs, P. Kroll, and E. Richard. Un-assessments: reflections by the team, for the team. In Proc. of Agile 2008, pages 384-389, Washington, DC, USA, August 2008.
[16] P. Kroll and W. Krebs. Introducing IBM Rational Self Check for software teams. http://www.ibm.com/developerworks/rational/library/edge/08/may08/kroll_krebs, May 3, 2008. [Online; accessed 20-January-2012].
[17] L. Lavazza. Providing automated support for the GQM measurement process. IEEE Software, 17(3):56-62, May/June 2000.
[18] T. Menzies, Z. Chen, J. Hihn, and K. Lum. Selecting best practices for effort estimation. IEEE Transactions on Software Engineering, 32(11):883-895, November 2006.
[19] PRICE Systems. Your guide to PRICE-S: Estimating cost and schedule of software development and support, 1998.
[20] L. Putnam and A. Fitzsimmons. Estimating software costs. Datamation, pages 189-198, September 1979.
[21] Rational Team Concert. https://jazz.net/projects/rational-team-concert/.
[22] Standish Group. CHAOS Summary 2009. http://standishgroup.com, 2009.
[23] Unified CodeCount. http://sunset.usc.edu/research/CODECOUNT/.
[24] J. D. Wiest and F. K. Levy. A Management Guide to PERT/CPM. Prentice-Hall, Englewood Cliffs, NJ, 1977.
[25] D. Yang, Y. Wan, Z. Tang, S. Wu, M. He, and M. Li. COCOMO-U: An extension of COCOMO II for cost estimation with uncertainty. In Q. Wang, D. Pfahl, D. Raffo, and P. Wernick, editors, Software Process Change, volume 3966 of Lecture Notes in Computer Science, pages 132-141. Springer Berlin / Heidelberg, 2006.