Software Testing and Quality Assurance: Software Quality Metrics


Page 1: Software Testing and Quality Assurance

Software Quality Metrics

Page 2: Reading Assignment

Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Addison-Wesley, Second Edition, 2002.
◦ Chapter 4: Sections 1, 2, 3 and 4.

Page 3: Objectives

◦ Software metrics classification
◦ Examples of metrics programs

Page 4: Software Metrics

Software metrics can be classified into three categories:

◦ Product metrics: Describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.

◦ Process metrics: Used to improve software development and maintenance. Examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process.

◦ Project metrics: Describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity.

Some metrics belong to multiple categories. For example, the in-process quality metrics of a project are both process metrics and project metrics.

Page 5: Product Quality Metrics

◦ Mean time to failure
◦ Defect density
◦ Customer problems
◦ Customer satisfaction

Page 6: Product Quality Metrics

Mean time to failure (MTTF)
◦ Most often used for safety-critical systems such as airline traffic control systems, avionics, and weapons.
◦ For example, the U.S. government mandates that its air traffic control system cannot be unavailable for more than three seconds per year.

Defect density
◦ Related to MTTF, but different.
◦ Can be looked at from the development team perspective (discussed here) or from the customer perspective (discussed in the book).
◦ To define a rate, we first have to operationalize the numerator and the denominator, and specify the time frame. Observed failures can be used to approximate the number of defects in the software (numerator). The denominator is the size of the software, usually expressed in thousand lines of code (KLOC) or in number of function points.
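To make the two definitions concrete, here is a minimal Python sketch; the function names and sample figures are illustrative, not from the slides:

```python
# Minimal sketch of the two product quality metrics defined above.
# Sample figures are illustrative only.

def mttf(total_operating_hours: float, observed_failures: int) -> float:
    """Mean time to failure: operating time divided by failures observed."""
    return total_operating_hours / observed_failures

def defect_density(observed_defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return observed_defects / size_kloc

print(mttf(total_operating_hours=10_000, observed_failures=4))  # 2500.0 hours
print(defect_density(observed_defects=120, size_kloc=250.0))    # 0.48 defects/KLOC
```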

Page 7: Product Quality Metrics

Defect density (cont.)
◦ How to count the lines of code metric? Common variants:
 - executable lines
 - executable lines plus data definitions
 - executable lines, data definitions, and comments
 - executable lines, data definitions, comments, and job control language
 - physical lines on an input screen
 - lines terminated by logical delimiters

Page 8: Product Quality Metrics

Defect density (cont.)
◦ Function points: A collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements.
 - The defect rate metric is indexed to the number of functions a software provides.
 - Often regarded as the ultimate measure of software productivity.
 - Measuring functions is theoretically promising but realistically very difficult.

Page 9: Product Quality Metrics

Defect density (cont.)
◦ At IBM Rochester, lines of code data are based on instruction statements (logical LOC) and include executable code and data definitions but exclude comments.
◦ Because the LOC count is based on source instructions, the two size metrics are called shipped source instructions (SSI) and new and changed source instructions (CSI), respectively.
◦ The relationship between SSI and CSI (in Kan's formulation):

SSI (current release) = SSI (previous release) + CSI (new and changed code for the current release) - deleted code (usually very small) - changed code (already counted in CSI; subtracted to avoid double counting)

Page 10: Product Quality Metrics

Defect density (cont.)
◦ Several postrelease defect rate metrics per thousand SSI (KSSI) or per thousand CSI (KCSI) can be defined:
 - Total defects per KSSI (a measure of code quality of the total product).
 - Field defects per KSSI (a measure of defect rate in the field).
 - Release-origin defects (field and internal) per KCSI (a measure of development quality).
 - Release-origin field defects per KCSI (a measure of development quality as seen through defects found by customers).

Page 11: Product Quality Metrics

Customer problems
◦ Measure the problems customers encounter when using the product.
◦ From the customers' standpoint, all problems they encounter while using the software product, not just the valid defects, are problems with the software. Examples: usability problems, unclear documentation or information, duplicates of valid defects, and user errors.

Page 12: Product Quality Metrics

Customer problems (cont.)

Problems per user month (PUM), as defined in Kan:

PUM = Total problems that customers reported for a time period / Total number of license-months of the software during the period

where

Number of license-months = Number of installed licenses of the software × Number of months in the calculation period
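A minimal sketch of the PUM calculation (the helper name and numbers are illustrative):

```python
# Problems per user month (PUM), as defined above.
# Sample values are illustrative only.

def pum(problems_reported: int, installed_licenses: int, months: int) -> float:
    """Total reported problems divided by license-months of usage."""
    license_months = installed_licenses * months
    return problems_reported / license_months

# e.g., 250 problems reported over 3 months across 10,000 licenses:
print(pum(problems_reported=250, installed_licenses=10_000, months=3))  # ~0.0083
```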

◦ Approaches to achieving a low PUM include:
 - Improve the development process and reduce the product defects.
 - Reduce the non-defect-oriented problems by improving all aspects of the product (such as usability and documentation), customer education, and support.
 - Increase the sales (the number of installed licenses) of the product.

Page 13: Product Quality Metrics

Customer problems (cont.)

The customer problems metric can be regarded as an intermediate measurement between defect measurement and customer satisfaction, as the following progression and comparison show:

Defects → Customer Problems → Customer Satisfaction

|                         | Defect Rate                                    | Problems per User-Month (PUM)                                            |
|-------------------------|------------------------------------------------|--------------------------------------------------------------------------|
| Numerator               | Valid and unique product defects               | All customer problems (defects and non-defects, first time and repeated) |
| Denominator             | Size of product (KLOC or function points)      | Customer usage of the product (user-months)                              |
| Measurement perspective | Producer (software development organization)   | Customer                                                                 |
| Scope                   | Intrinsic product quality                      | Intrinsic product quality plus other factors                             |

Page 14: Product Quality Metrics

Customer satisfaction
◦ Measured by customer survey data via the five-point scale: Very satisfied, Satisfied, Neutral, Dissatisfied, and Very dissatisfied.
◦ Specific parameters of customer satisfaction in software monitored by IBM include the CUPRIMDSO categories (capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall); those monitored by Hewlett-Packard include FURPS (functionality, usability, reliability, performance, and service).
◦ Based on the five-point-scale data, several metrics with slight variations can be constructed and used, for example:
 - Percent of completely satisfied customers.
 - Percent of satisfied customers (satisfied and completely satisfied).
 - Percent of dissatisfied customers (dissatisfied and completely dissatisfied).
 - Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied).

Page 15: Product Quality Metrics

Customer satisfaction (cont.)

Some companies use a net satisfaction index (NSI) to facilitate comparisons across products. The NSI has the following weighting factors:
 - Completely satisfied = 100%
 - Satisfied = 75%
 - Neutral = 50%
 - Dissatisfied = 25%
 - Completely dissatisfied = 0%

This weighting approach may mask the satisfaction profile of one's customer set, so it is inferior to the simple approach of calculating the percentages of specific categories. A weighted index is for data summary when multiple indicators are too cumbersome to be shown.
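A minimal sketch of the NSI calculation under these weights (survey counts are illustrative); the final comment illustrates the masking problem noted above:

```python
# Net satisfaction index (NSI) sketch, using the weighting factors above.
# Survey counts are illustrative only.

WEIGHTS = {
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}

responses = {
    "completely satisfied": 40,
    "satisfied": 30,
    "neutral": 15,
    "dissatisfied": 10,
    "completely dissatisfied": 5,
}

total = sum(responses.values())
nsi = 100.0 * sum(WEIGHTS[k] * n for k, n in responses.items()) / total
print(f"NSI = {nsi:.1f}%")  # 72.5%

# The masking problem: a 50/50 split between completely satisfied and
# completely dissatisfied customers also yields NSI = 50%, exactly the
# same as an all-neutral customer set.
```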

Page 16: In-Process Quality Metrics

In-process quality metrics are less formally defined than end-product metrics, and their practices vary greatly among software developers.
◦ They range from tracking defect arrival during formal machine testing to covering various parameters in each phase of the development cycle.

In-process quality metrics discussed here:
◦ Defect density during machine testing
◦ Defect arrival pattern during machine testing
◦ Phase-based defect removal pattern
◦ Defect removal effectiveness

Page 17: Defect Density During Machine Testing

A higher defect rate found during testing is an indicator that either:
◦ The software has experienced higher error injection during its development process, or
◦ Extraordinary testing effort has been exerted, for example additional testing or a new testing approach that was deemed more effective in detecting defects.

This simple metric of defects per KLOC or function point is a good indicator of quality while the software is still being tested.
◦ It is also useful for monitoring subsequent releases of a product in the same development organization.

Page 18: Defect Density During Machine Testing

The development team or the project manager can use the following scenarios to judge the release quality:
◦ If the defect rate during testing is the same as or lower than that of the previous release (or a similar product), then ask: Did the testing for the current release deteriorate?
 - If the answer is no, the quality perspective is positive; otherwise, more testing is needed (e.g., add test cases to increase coverage, customer testing, stress testing, etc.).
◦ If the defect rate during testing is substantially higher than that of the previous release (or a similar product), then ask: Did we plan for and actually improve testing effectiveness?
 - If the answer is no, the quality perspective is negative, implying the need for more testing (which can itself result in higher defect rates!). Otherwise, the quality perspective is the same or positive.

Page 19: Defect Arrival Pattern During Machine Testing

The pattern of defect arrivals (or, for that matter, times between failures) gives more information than defect density during testing.

The objective is always to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software to the field.

Page 20: Defect Arrival Pattern During Machine Testing

[Chart: defect arrivals during the testing phase over time; not reproduced in this transcript.]

Page 21: Defect Arrival Pattern During Machine Testing

Three different quality metrics need to be looked at simultaneously:
◦ The defect arrivals (defects reported) during the testing phase by time interval (e.g., week).
◦ The pattern of valid defect arrivals.
◦ The pattern of the defect backlog over time.
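A minimal sketch of tracking the three views week by week (all counts are invented for illustration, and the backlog is modeled simply as valid arrivals minus closures):

```python
# Sketch: the three defect-arrival views tracked per week.
# Weekly counts below are illustrative, not real project data.

weekly_reported = [30, 42, 38, 25, 14, 8, 5]   # all defects reported per week
weekly_valid    = [24, 35, 30, 20, 11, 6, 4]   # after removing duplicates/user errors
weekly_closed   = [10, 20, 28, 30, 22, 12, 7]  # fixes completed per week

backlog = 0
for week, (reported, valid, closed) in enumerate(
        zip(weekly_reported, weekly_valid, weekly_closed), start=1):
    backlog += valid - closed  # grows with valid arrivals, shrinks with closures
    print(f"week {week}: reported={reported} valid={valid} backlog={backlog}")
```

In this made-up run, arrivals decline and stabilize at a low level while the backlog is worked down, which is the release-readiness pattern the previous slide describes.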

Page 22: Phase-Based Defect Removal Pattern

An extension of the test defect density metric.

In addition to testing, it requires the tracking of defects at all phases of the development cycle, including design reviews, code inspections, and formal verifications before testing.

The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.

Quality metrics of this kind include defect rates, inspection coverage, and inspection effort.

Page 23: Phase-Based Defect Removal Pattern

[Chart: defect removal by phase; legend:]
 I0: high-level design review
 I1: low-level design review
 I2: code inspection
 UT: unit test
 CT: component test
 ST: system test

Page 24: Defect Removal Effectiveness

DRE = (Defects removed during a development phase / Defects latent in the product) × 100%

Because the total number of latent defects in the product at any given phase is not known, the denominator of the metric can only be approximated. It is usually estimated by:

Defects removed during the phase + Defects found later
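A minimal sketch using this approximation (counts are illustrative):

```python
# Defect removal effectiveness (DRE) sketch, using the approximation above:
# latent defects ~= defects removed during the phase + defects found later.
# Counts are illustrative only.

def dre(removed_in_phase: int, found_later: int) -> float:
    """Percent of latent defects removed by this phase's activities."""
    latent = removed_in_phase + found_later
    return 100.0 * removed_in_phase / latent

print(dre(removed_in_phase=80, found_later=20))  # 80.0 (%)
```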

Page 25: Defect Removal Effectiveness

[Chart: defect removal effectiveness by phase; legend:]
 I0: high-level design review
 I1: low-level design review
 I2: code inspection
 UT: unit test
 CT: component test
 ST: system test

Page 26: Metrics for Software Maintenance

During this phase, defect arrivals by time interval and customer problem calls by time interval are the de facto metrics.
◦ These are largely determined by the development process before the maintenance phase.
◦ Hence, not much can be done about the quality of the product during the maintenance phase.
◦ What is needed is a measure of how quickly and efficiently defects are fixed.

Metrics for software maintenance:
◦ Fix backlog and backlog management index
◦ Fix response time and fix responsiveness
◦ Percent delinquent fixes
◦ Fix quality

Page 27: Fix Backlog and Backlog Management Index

Fix backlog: a simple count of reported problems that remain open at the end of each month or each week.

Backlog management index (BMI):

BMI = (Number of problems closed during the month / Number of problem arrivals during the month) × 100%

◦ If BMI > 100%, the backlog is reduced; if BMI < 100%, the backlog grows.
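A minimal sketch of the BMI calculation (monthly counts are illustrative):

```python
# Backlog management index (BMI) sketch, per the formula above.
# Monthly counts are illustrative only.

def bmi(problems_closed: int, problem_arrivals: int) -> float:
    """BMI > 100% means the backlog shrank this month."""
    return 100.0 * problems_closed / problem_arrivals

print(bmi(problems_closed=120, problem_arrivals=100))  # 120.0 (%): backlog reduced
print(bmi(problems_closed=80,  problem_arrivals=100))  # 80.0  (%): backlog grew
```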

Page 28: Fix Backlog and Backlog Management Index

[Chart: fix backlog and BMI trend over time; not reproduced in this transcript.]

Page 29: Fix Response Time and Fix Responsiveness

Fix response time: the mean time of all problems from open to closed.

The acceptable response time usually depends on the severity of the problem:
◦ shorter for severe problems, longer for minor problems.

Page 30: Percent Delinquent Fixes

Captures the latency of fixes that took longer than the time allotted:

Percent delinquent fixes = (Number of fixes that exceeded the response time criteria by severity level / Number of fixes delivered in a specified time) × 100%

Accounts only for closed problems.
◦ What about still-open problems? The active backlog refers to all open problems for the week: the sum of the existing backlog at the beginning of the week and new problem arrivals during the week.
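A minimal sketch; the severity levels, response-time criteria, and fix records are hypothetical:

```python
# Percent delinquent fixes sketch, per the formula above.
# Each fix record holds (severity, days_to_deliver); the per-severity
# response-time criteria are illustrative assumptions.

response_time_criteria_days = {1: 7, 2: 14, 3: 30}  # severity -> allowed days

fixes = [(1, 5), (1, 10), (2, 12), (2, 20), (3, 25), (3, 40)]  # sample data

delinquent = sum(1 for sev, days in fixes
                 if days > response_time_criteria_days[sev])
percent_delinquent = 100.0 * delinquent / len(fixes)
print(f"{percent_delinquent:.1f}% delinquent")  # 50.0% delinquent
```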

Page 31: Fix Quality

Finding a defect by the customer is bad; however, receiving a defective fix, or a fix that introduced a defect in another component, is even worse.

The metric of percent defective fixes is the percentage of all fixes in a time interval (e.g., one month) that are defective.

Discussion question: why not use percentages for defective fixes?

Page 32: Examples of Metrics Programs

The book presents three sample programs:
◦ Motorola
◦ HP
◦ IBM Rochester

We will look at only one, viz., Motorola.

Page 33: Motorola's Software Metrics Program

Motorola followed the Goal/Question/Metric paradigm of Basili and Weiss as follows:
◦ goals were identified,
◦ questions were formulated in quantifiable terms, and
◦ metrics were established.

Page 34: Motorola's Software Metrics Program

Goal 1: Improve Project Planning
 Question 1.1: What was the accuracy of estimating the actual value of project schedule?
 Metric 1.1: Schedule Estimation Accuracy (SEA)
 Question 1.2: What was the accuracy of estimating the actual value of project effort?
 Metric 1.2: Effort Estimation Accuracy (EEA)
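The slide does not show the formulas. Assuming Kan's description of the Motorola program, in which SEA and EEA are ratios of actual to estimated values, a minimal sketch:

```python
# Sketch of Goal 1 metrics, assuming Kan's definitions:
# SEA = actual schedule duration / estimated schedule duration
# EEA = actual effort / estimated effort
# Values near 1.0 indicate accurate estimates.

def schedule_estimation_accuracy(actual_months: float,
                                 estimated_months: float) -> float:
    return actual_months / estimated_months

def effort_estimation_accuracy(actual_person_months: float,
                               estimated_person_months: float) -> float:
    return actual_person_months / estimated_person_months

print(schedule_estimation_accuracy(13.0, 12.0))  # ~1.08 (8% schedule overrun)
print(effort_estimation_accuracy(110.0, 100.0))  # 1.10 (10% effort overrun)
```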

Page 35: Motorola's Software Metrics Program

Goal 2: Increase Defect Containment
 Question 2.1: What is the currently known effectiveness of the defect detection process prior to release?
 Metric 2.1: Total Defect Containment Effectiveness (TDCE)
 Question 2.2: What is the currently known containment effectiveness of faults introduced during each constructive phase of software development for a particular software product?
 Metric 2.2: Phase Containment Effectiveness for phase i (PCEi)
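Again, the formulas are not shown on the slide. Assuming the definitions given in Kan's book (prerelease versus postrelease defects for TDCE; for PCEi, "errors" found during the phase that introduced them versus "defects" from that phase found later), a minimal sketch:

```python
# Sketch of Goal 2 metrics, assuming the definitions in Kan's book:
# TDCE  = prerelease defects / (prerelease + postrelease defects)
# PCE_i = phase-i errors / (phase-i errors + phase-i defects), where
# "errors" are problems found in the phase that introduced them and
# "defects" are problems from that phase found in later phases.

def tdce(prerelease_defects: int, postrelease_defects: int) -> float:
    return prerelease_defects / (prerelease_defects + postrelease_defects)

def pce(phase_errors: int, phase_defects_found_later: int) -> float:
    return phase_errors / (phase_errors + phase_defects_found_later)

print(tdce(prerelease_defects=450, postrelease_defects=50))  # 0.90
print(pce(phase_errors=40, phase_defects_found_later=10))    # 0.80
```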

Page 36: Motorola's Software Metrics Program

Goal 3: Increase Software Reliability
 Question 3.1: What is the rate of software failures, and how does it change over time?
 Metric 3.1: Failure Rate (FR)

Page 37: Motorola's Software Metrics Program

Goal 4: Decrease Software Defect Density
 Question 4.1: What is the normalized number of in-process faults, and how does it compare with the number of in-process defects?
 Metric 4.1a: In-process Faults (IPF)
 Metric 4.1b: In-process Defects (IPD)

Page 38: Motorola's Software Metrics Program

 Question 4.2: What is the currently known defect content of software delivered to customers, normalized by Assembly-equivalent size?
 Metric 4.2a: Total Released Defects (TRD) total
 Metric 4.2b: Total Released Defects (TRD) delta

Page 39: Motorola's Software Metrics Program

 Question 4.3: What is the currently known customer-found defect content of software delivered to customers, normalized by Assembly-equivalent source size?
 Metric 4.3a: Customer-Found Defects (CFD) total
 Metric 4.3b: Customer-Found Defects (CFD) delta

Page 40: Motorola's Software Metrics Program

Goal 5: Improve Customer Service
 Question 5.1: What is the number of new problems opened during the month?
 Metric 5.1: New Open Problems (NOP)
 Question 5.2: What is the total number of open problems at the end of the month?
 Metric 5.2: Total Open Problems (TOP)
 Question 5.3: What is the mean age of open problems at the end of the month?
 Metric 5.3: Mean Age of Open Problems (AOP)
 Question 5.4: What is the mean age of the problems that were closed during the month?
 Metric 5.4: Mean Age of Closed Problems (ACP)

Page 41: Motorola's Software Metrics Program

Goal 6: Reduce the Cost of Nonconformance
 Question 6.1: What was the cost to fix postrelease problems during the month?
 Metric 6.1: Cost of Fixing Problems (CFP)

Goal 7: Increase Software Productivity
 Question 7.1: What was the productivity of software development projects (based on source size)?
 Metric 7.1a: Software Productivity total (SP total)
 Metric 7.1b: Software Productivity delta (SP delta)

Page 42: Other In-Process Metrics

◦ Life-cycle phase and schedule tracking metric: track schedule based on life-cycle phase and compare actual to plan.
◦ Cost/earned value tracking metric: track the actual cumulative cost of the project versus the budgeted cost, with continuous updates throughout the project.
◦ Requirements tracking metric: track the number of requirements changes at the project level.
◦ Design tracking metric: track the number of requirements implemented in design versus the number of requirements written.
◦ Fault-type tracking metric: track causes of faults.
◦ Remaining defect metrics: track faults per month for the project and use a Rayleigh curve to project the number of faults in the months ahead during development (see the sketch after this list).
◦ Review effectiveness metric: track error density by stages of review and use control-chart methods to flag exceptionally high or low data points.
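A minimal sketch of the Rayleigh projection mentioned above, using one common parameterization of the model, F(t) = K * (1 - exp(-t^2 / (2 * t_m^2))), where K is the estimated total fault count and t_m is the month of peak fault discovery; the parameter values here stand in for a fit to real monthly fault counts:

```python
import math

# Rayleigh-model sketch for projecting remaining faults, assuming the
# parameterization F(t) = K * (1 - exp(-t^2 / (2 * t_m^2))).
# K and T_M are illustrative, as if already fitted to project data.

K = 500.0    # estimated total faults over the development cycle
T_M = 4.0    # month at which fault discovery peaks

def cumulative_faults(month: float) -> float:
    return K * (1.0 - math.exp(-(month ** 2) / (2.0 * T_M ** 2)))

def faults_in_month(month: int) -> float:
    return cumulative_faults(month) - cumulative_faults(month - 1)

for m in range(1, 9):
    print(f"month {m}: ~{faults_in_month(m):.0f} new faults, "
          f"~{K - cumulative_faults(m):.0f} projected remaining")
```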

Page 43: Key Points

Software quality metrics can be grouped according to the software life cycle into end-product, in-process, and maintenance quality metrics.

Product quality metrics:
◦ Mean time to failure
◦ Defect density
◦ Customer-reported problems
◦ Customer satisfaction

In-process quality metrics:
◦ Phase-based defect removal pattern
◦ Defect removal effectiveness
◦ Defect density during formal machine testing
◦ Defect arrival pattern during formal machine testing

Maintenance quality metrics:
◦ Fix backlog
◦ Backlog management index
◦ Fix response time and fix responsiveness
◦ Percent delinquent fixes
◦ Defective fixes