Software Testing and Quality Assurance: Software Quality Metrics

  • Software Testing and Quality Assurance

    Software Quality Metrics

  • Reading Assignment: Stephen H. Kan, "Metrics and Models in Software Quality Engineering", 2nd ed., Addison-Wesley, 2002. Chapter 4, Sections 1-4.

  • Objectives

    Software metrics classification
    Examples of metric programs

  • Software Metrics

    Software metrics can be classified into three categories:

    Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.
    Process metrics are used to improve software development and maintenance. Examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process.
    Project metrics describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity.

    Some metrics belong to multiple categories. For example, the in-process quality metrics of a project are both process metrics and project metrics.

  • Product Quality Metrics

    Mean time to failure
    Defect density
    Customer problems
    Customer satisfaction

  • Product Quality Metrics

    Mean time to failure (MTTF): most often used with safety-critical systems such as airline traffic control systems, avionics, and weapons. For example, the U.S. government mandates that its air traffic control system cannot be unavailable for more than three seconds per year.

    Defect density: related to MTTF, but different. It can be looked at from the development team's perspective (discussed here) or from the customer's perspective (discussed in the book). To define a rate, we first have to operationalize the numerator and the denominator, and specify the time frame. Observed failures can be used to approximate the number of defects in the software (the numerator). The denominator is the size of the software, usually expressed in thousand lines of code (KLOC) or in number of function points.

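As a concrete illustration, here is a minimal sketch of the defect density calculation; the function name and sample figures are hypothetical:

```python
def defect_density(observed_defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return observed_defects / size_kloc

# Hypothetical example: 120 observed defects in a 400 KLOC product.
print(defect_density(120, 400.0))  # 0.3 defects per KLOC
```
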
  • Product Quality Metrics

    Defect density (cont.): How to count the lines of code? Common variations include:

    executable lines
    executable lines plus data definitions
    executable lines, data definitions, and comments
    executable lines, data definitions, comments, and job control language
    physical lines on an input screen
    lines terminated by logical delimiters

  • Product Quality Metrics

    Defect density (cont.): Function points. A function is a collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements. The defect rate metric is indexed to the number of functions the software provides, arguably the ultimate measure of software productivity. Measuring functions, however, is theoretically promising but realistically very difficult.

  • Product Quality Metrics

    Defect density (cont.): At IBM Rochester, lines-of-code data is based on instruction statements (logical LOC) and includes executable code and data definitions but excludes comments. Because the LOC count is based on source instructions, the two size metrics are called shipped source instructions (SSI) and new and changed source instructions (CSI), respectively. The relationship between SSI and CSI (per Kan, Chapter 4):

    SSI (current release) = SSI (previous release)
                            + CSI (new and changed code for this release)
                            - deleted code
                            - changed code (counted in both SSI and CSI, subtracted to avoid double counting)

  • Product Quality Metrics

    Defect density (cont.): Several postrelease defect rate metrics per thousand SSI (KSSI) or per thousand CSI (KCSI) can be calculated:

    Total defects per KSSI (a measure of code quality of the total product)
    Field defects per KSSI (a measure of defect rate in the field)
    Release-origin defects (field and internal) per KCSI (a measure of development quality)
    Release-origin field defects per KCSI (a measure of development quality per defects found by customers)
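
A small sketch tying the SSI/CSI bookkeeping to these four rates; all input figures are hypothetical:

```python
# Hypothetical release figures, in thousands of source instructions.
prev_ssi = 300.0   # KSSI shipped in the previous release
csi = 60.0         # KCSI new and changed in this release
deleted = 5.0      # thousand instructions removed
changed = 10.0     # thousand instructions counted in both SSI and CSI

ssi = prev_ssi + csi - deleted - changed   # current-release KSSI

total_defects = 450           # all defects found in the current product
field_defects = 90            # defects found by customers
release_origin = 180          # defects traced to this release's new/changed code
release_origin_field = 40     # of those, found in the field

print(total_defects / ssi)           # total defects per KSSI
print(field_defects / ssi)           # field defects per KSSI
print(release_origin / csi)          # release-origin defects per KCSI
print(release_origin_field / csi)    # release-origin field defects per KCSI
```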

  • Product Quality Metrics

    Customer problems: measures the problems customers encounter when using the product. From the customers' standpoint, all problems they encounter while using the software product, not just the valid defects, are problems with the software. Examples: usability problems, unclear documentation or information, duplicates of valid defects, and user errors.

  • Product Quality Metrics

    Customer problems (cont.): Problems per user month (PUM):

    PUM = Total problems that customers reported for a time period
          / Total number of license-months of the software during the period

    where

    Number of license-months = Number of installed licenses of the software
                               x Number of months in the calculation period

    Approaches to achieving a low PUM include:

    Improve the development process and reduce the product defects.
    Reduce the non-defect-oriented problems by improving all aspects of the product (such as usability and documentation), customer education, and support.
    Increase the sales (the number of installed licenses) of the product.
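
A minimal sketch of the PUM computation under these definitions; the inputs are hypothetical:

```python
def problems_per_user_month(total_problems: int,
                            installed_licenses: int,
                            months: int) -> float:
    """PUM = reported problems / (installed licenses * months in the period)."""
    return total_problems / (installed_licenses * months)

# Hypothetical: 250 problems reported over a quarter by 5,000 licensees.
print(problems_per_user_month(250, 5000, 3))  # ~0.017 problems per user-month
```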

  • Product Quality Metrics

    Customer problems (cont.): The customer problems metric can be regarded as an intermediate measurement between defect measurement and customer satisfaction.

    |                         | Defect rate                                  | Problems per user-month (PUM)                                           |
    |-------------------------|----------------------------------------------|-------------------------------------------------------------------------|
    | Numerator               | Valid and unique product defects             | All customer problems (defects and nondefects, first time and repeated) |
    | Denominator             | Size of product (KLOC or function points)    | Customer usage of the product (user-months)                             |
    | Measurement perspective | Producer (software development organization) | Customer                                                                |
    | Scope                   | Intrinsic product quality                    | Intrinsic product quality plus other factors                            |

  • Product Quality Metrics

    Customer satisfaction: measured by customer survey data on a five-point scale: very satisfied, satisfied, neutral, dissatisfied, very dissatisfied.

    Specific parameters of customer satisfaction in software monitored by IBM include the CUPRIMDSO categories: capability (functionality), usability, performance, reliability, installability, maintainability, documentation/information, service, and overall. Hewlett-Packard uses FURPS: functionality, usability, reliability, performance, and service.

    Based on the five-point-scale data, several metrics with slight variations can be constructed and used:

    Percent of completely satisfied customers
    Percent of satisfied customers (satisfied and completely satisfied)
    Percent of dissatisfied customers (dissatisfied and completely dissatisfied)
    Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied)

  • Product Quality Metrics

    Customer satisfaction (cont.): Some companies use a net satisfaction index (NSI) to facilitate comparisons across products, with the following weighting factors:

    Completely satisfied = 100%
    Satisfied = 75%
    Neutral = 50%
    Dissatisfied = 25%
    Completely dissatisfied = 0%

    This weighting approach may mask the satisfaction profile of one's customer set and is inferior to the simple approach of calculating percentages of specific categories. A weighted index is best reserved for data summary when multiple indicators are too cumbersome to show.

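A short sketch of the NSI weighting that makes the masking problem concrete; the survey counts are hypothetical:

```python
# Weights from the slide, keyed by five-point-scale response.
WEIGHTS = {
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}

def net_satisfaction_index(counts: dict[str, int]) -> float:
    """Weighted average of responses, expressed as a percentage."""
    total = sum(counts.values())
    return 100 * sum(WEIGHTS[k] * n for k, n in counts.items()) / total

# Two hypothetical profiles with identical NSI but very different shapes:
uniform = {"completely satisfied": 0, "satisfied": 0, "neutral": 100,
           "dissatisfied": 0, "completely dissatisfied": 0}
polarized = {"completely satisfied": 50, "satisfied": 0, "neutral": 0,
             "dissatisfied": 0, "completely dissatisfied": 50}
print(net_satisfaction_index(uniform), net_satisfaction_index(polarized))  # 50.0 50.0
```
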
  • In-Process Quality Metrics

    In-process quality metrics are less formally defined than end-product metrics, and their practices vary greatly among software developers, ranging from tracking defect arrival during formal machine testing to covering various parameters in each phase of the development cycle. They include:

    Defect density during machine testing
    Defect arrival pattern during machine testing
    Phase-based defect removal pattern
    Defect removal effectiveness

  • Defect Density During Machine Testing

    A higher defect rate found during testing indicates that either:

    the software has experienced higher error injection during its development process, or
    an extraordinary testing effort has been exerted, e.g., additional testing or a new testing approach that was deemed more effective in detecting defects.

    This simple metric of defects per KLOC or function point is a good indicator of quality while the software is still being tested. It is also useful for monitoring subsequent releases of a product in the same development organization.

  • Defect Density During Machine Testing (cont.)

    The development team or the project manager can use the following scenarios to judge the release quality (a decision sketch follows below):

    If the defect rate during testing is the same as or lower than that of the previous release (or a similar product), ask: Did the testing for the current release deteriorate? If the answer is no, the quality perspective is positive; otherwise, more testing is needed (e.g., add test cases to increase coverage, customer testing, stress testing).

    If the defect rate during testing is substantially higher than that of the previous release (or a similar product), ask: Did we plan for and actually improve testing effectiveness? If the answer is no, the quality perspective is negative, implying the need for more testing (which can itself raise the measured defect rate). Otherwise, the quality perspective is the same or positive.
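
The two scenarios reduce to a small decision routine. A minimal sketch, assuming a hypothetical cutoff ratio for "substantially higher":

```python
def release_quality_outlook(current_rate: float,
                            previous_rate: float,
                            testing_deteriorated: bool,
                            testing_improved: bool,
                            substantially: float = 1.2) -> str:
    """Judge release quality from test defect rates, per the two scenarios above.

    `substantially` is a hypothetical cutoff ratio for 'substantially higher'.
    """
    if current_rate <= previous_rate:
        # Scenario 1: a same-or-lower rate is good news only if testing held up.
        return "positive" if not testing_deteriorated else "needs more testing"
    if current_rate >= substantially * previous_rate:
        # Scenario 2: a much higher rate is acceptable only if testing improved.
        return "same or positive" if testing_improved else "negative: needs more testing"
    return "in between: examine testing effort before judging"
```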

  • Defect Arrival Pattern During Machine Testing

    The pattern of defect arrivals (or, for that matter, times between failures) gives more information than defect density during testing. The objective is always to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software to the field.

  • Defect Arrival Pattern During Machine Testing (cont.)

    Three different quality metrics need to be looked at simultaneously (see the sketch below):

    The defect arrivals (defects reported) during the testing phase by time interval (e.g., week).
    The pattern of valid defect arrivals.
    The pattern of defect backlog over time.
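
A compact sketch computing all three curves from a problem log; the log records and field layout are hypothetical:

```python
from collections import Counter

# Hypothetical problem log: (week_reported, is_valid_defect, week_closed or None)
log = [(1, True, 2), (1, False, 1), (2, True, 4), (2, True, None),
       (3, False, 3), (3, True, 5), (4, True, None), (5, True, None)]

arrivals = Counter(week for week, _, _ in log)                  # all reports per week
valid_arrivals = Counter(week for week, valid, _ in log if valid)

# Backlog at the end of each week: reported so far minus closed so far.
weeks = range(1, 6)
backlog = [sum(1 for w, _, c in log if w <= wk and (c is None or c > wk))
           for wk in weeks]

for wk in weeks:
    print(wk, arrivals.get(wk, 0), valid_arrivals.get(wk, 0), backlog[wk - 1])
```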

  • Phase-Based Defect Removal Pattern

    An extension of the test defect density metric. In addition to testing, it requires the tracking of defects at all phases of the development cycle, including design reviews, code inspections, and formal verifications before testing. The pattern of phase-based defect removal reflects the overall defect removal ability of the development process. Related quality metrics include defect rates, inspection coverage, and inspection effort.

  • Phase-Based Defect Removal Pattern (cont.)

    Phase abbreviations used in the removal-pattern chart:
    I0: high-level design review
    I1: low-level design review
    I2: code inspection
    UT: unit test
    CT: component test
    ST: system test

  • Defect Removal Effectiveness

    Defect removal effectiveness for a phase can be expressed as:

    DRE = (Defects removed during the phase / Defects latent in the product at that phase) x 100%

    Because the total number of latent defects in the product at any given phase is not known, the denominator of the metric can only be approximated. It is usually estimated by:

    Defects removed during the phase + Defects found later

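A minimal sketch of the approximation; the phase counts are hypothetical:

```python
def defect_removal_effectiveness(removed_in_phase: int, found_later: int) -> float:
    """DRE with the latent-defect denominator approximated by
    defects removed during the phase + defects found later."""
    return 100 * removed_in_phase / (removed_in_phase + found_later)

# Hypothetical: code inspection removed 70 defects; 30 escaped to later phases.
print(defect_removal_effectiveness(70, 30))  # 70.0%
```
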
  • Defect Removal Effectiveness (cont.)

    (Chart legend as above: I0/I1/I2 for high-level design review, low-level design review, and code inspection; UT/CT/ST for unit, component, and system test.)

  • Metrics for Software Maintenance

    During this phase, defect arrivals by time interval and customer problem calls by time interval are the de facto metrics. These are largely determined by the development process before the maintenance phase; hence, not much can be done about the quality of the product during maintenance. What is needed is a measure of how quickly and efficiently defects are fixed. Maintenance metrics:

    Fix backlog and backlog management index
    Fix response time and fix responsiveness
    Percent delinquent fixes
    Fix quality

  • Fix Backlog and Backlog Management Index

    Fix backlog: a simple count of reported problems that remain open at the end of each month or each week.

    Backlog management index (BMI):

    BMI = (Number of problems closed during the month / Number of problem arrivals during the month) x 100%

    If BMI > 100%, the backlog is reduced; if BMI < 100%, the backlog increases.

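A one-liner sketch with hypothetical monthly counts:

```python
def backlog_management_index(closed: int, arrivals: int) -> float:
    """BMI = problems closed / problem arrivals, as a percentage."""
    return 100 * closed / arrivals

print(backlog_management_index(130, 100))  # 130.0 -> backlog shrank this month
```
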
  • Fix Response Time and Fix Responsiveness

    Fix response time: the mean time of all problems from open to closed. The target usually depends on the severity of the problem: shorter for severe problems, longer for minor ones.

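A small sketch computing mean fix response time by severity; the fix log below is hypothetical:

```python
from statistics import mean

# Hypothetical fix log: (severity, days from open to closed)
fixes = [(1, 2), (1, 3), (2, 7), (2, 10), (3, 21), (3, 30)]

for sev in sorted({s for s, _ in fixes}):
    days = [d for s, d in fixes if s == sev]
    print(f"severity {sev}: mean fix response time {mean(days):.1f} days")
```
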
  • Percent Delinquent Fixes

    Captures the latency of fixes that took longer than the time allotted:

    Percent delinquent fixes = (Number of fixes that exceeded the response time criteria by severity level / Total number of fixes delivered in the specified time) x 100%

    This accounts only for closed problems. What about problems that are still open? A real-time variant measures delinquency against the active backlog, which refers to all open problems for the week: the sum of the existing backlog at the beginning of the week and the new problem arrivals during the week.
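
A hedged sketch of the delinquency calculation; the per-severity response-time criteria and fix records are hypothetical:

```python
# Hypothetical response-time criteria (days allowed) by severity level.
CRITERIA = {1: 7, 2: 14, 3: 30}

# Hypothetical delivered fixes: (severity, actual days to fix)
delivered = [(1, 5), (1, 9), (2, 12), (2, 20), (3, 25), (3, 40)]

delinquent = sum(1 for sev, days in delivered if days > CRITERIA[sev])
print(100 * delinquent / len(delivered))  # 50.0 percent delinquent fixes
```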

  • Fix Quality

    A customer finding a defect is bad; receiving a defective fix, or a fix that introduces a defect into another component, is even worse. The metric percent defective fixes is the percentage of all fixes delivered in a time interval (e.g., one month) that are defective. Discussion question: why not use percentages for defective fixes?

  • Examples of Metric Programs

    The book presents three sample metric programs: Motorola, HP, and IBM Rochester. We will look at only one, viz., Motorola's.

  • Motorola's Software Metrics Program

    Motorola followed the Goal/Question/Metric (GQM) paradigm of Basili and Weiss: goals were identified, questions were formulated in quantifiable terms, and metrics were established.

  • Motorola's Software Metrics Program

    Goal 1: Improve project planning.

    Question 1.1: What was the accuracy of estimating the actual value of project schedule?
    Metric 1.1: Schedule Estimation Accuracy (SEA) = Actual project duration / Estimated project duration

    Question 1.2: What was the accuracy of estimating the actual value of project effort?
    Metric 1.2: Effort Estimation Accuracy (EEA) = Actual project effort / Estimated project effort

  • Motorola's Software Metrics Program (cont.)

    Goal 2: Increase defect containment.

    Question 2.1: What is the currently known effectiveness of the defect detection process prior to release?
    Metric 2.1: Total Defect Containment Effectiveness (TDCE) = Number of prerelease defects / (Number of prerelease defects + Number of postrelease defects)

    Question 2.2: What is the currently known containment effectiveness of faults introduced during each constructive phase of software development for a particular software product?
    Metric 2.2: Phase Containment Effectiveness for phase i (PCEi) = Number of phase i errors / (Number of phase i errors + Number of phase i defects), where phase i errors are problems found during the phase itself and phase i defects are problems introduced in phase i but found later.
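
A small sketch of both containment metrics under the definitions above; the counts are hypothetical:

```python
def tdce(prerelease: int, postrelease: int) -> float:
    """Total Defect Containment Effectiveness."""
    return prerelease / (prerelease + postrelease)

def pce(phase_errors: int, phase_defects: int) -> float:
    """Phase Containment Effectiveness: errors found in phase i versus
    defects introduced in phase i but found in later phases."""
    return phase_errors / (phase_errors + phase_defects)

print(tdce(950, 50))  # 0.95: 95% of defects were caught before release
print(pce(40, 10))    # 0.80: the phase contained 80% of its own faults
```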

  • Motorola's Software Metrics Program (cont.)

    Goal 3: Increase software reliability.

    Question 3.1: What is the rate of software failures, and how does it change over time?
    Metric 3.1: Failure Rate (FR) = Number of failures / Execution time

  • Motorola's Software Metrics Program (cont.)

    Goal 4: Decrease software defect density.

    Question 4.1: What is the normalized number of in-process faults, and how does it compare with the number of in-process defects?
    Metric 4.1a: In-process Faults (IPF) = In-process faults caused by incremental software development / Assembly-equivalent delta source size
    Metric 4.1b: In-process Defects (IPD) = In-process defects caused by incremental software development / Assembly-equivalent delta source size

  • Motorola's Software Metrics Program (cont.)

    Question 4.2: What is the currently known defect content of software delivered to customers, normalized by assembly-equivalent size?
    Metric 4.2a: Total Released Defects total (TRD total) = Number of released defects / Assembly-equivalent total source size
    Metric 4.2b: Total Released Defects delta (TRD delta) = Number of released defects caused by incremental development / Assembly-equivalent delta source size

  • Motorola's Software Metrics Program (cont.)

    Question 4.3: What is the currently known customer-found defect content of software delivered to customers, normalized by assembly-equivalent source size?
    Metric 4.3a: Customer-Found Defects total (CFD total) = Number of customer-found defects / Assembly-equivalent total source size
    Metric 4.3b: Customer-Found Defects delta (CFD delta) = Number of customer-found defects caused by incremental development / Assembly-equivalent delta source size

  • Motorola's Software Metrics Program (cont.)

    Goal 5: Improve customer service.

    Question 5.1: What is the number of new problems opened during the month?
    Metric 5.1: New Open Problems (NOP)
    Question 5.2: What is the total number of open problems at the end of the month?
    Metric 5.2: Total Open Problems (TOP)
    Question 5.3: What is the mean age of open problems at the end of the month?
    Metric 5.3: Mean Age of Open Problems (AOP)
    Question 5.4: What is the mean age of the problems that were closed during the month?
    Metric 5.4: Mean Age of Closed Problems (ACP)

  • Motorola's Software Metrics Program (cont.)

    Goal 6: Reduce the cost of nonconformance.

    Question 6.1: What was the cost to fix postrelease problems during the month?
    Metric 6.1: Cost of Fixing Problems (CFP)

    Goal 7: Increase software productivity.

    Question 7.1: What was the productivity of software development projects (based on source size)?
    Metric 7.1a: Software Productivity total (SP total)
    Metric 7.1b: Software Productivity delta (SP delta)

  • Other In-Process Metrics

    Life-cycle phase and schedule tracking metric: track schedule based on life-cycle phase and compare actual to plan.
    Cost/earned-value tracking metric: track actual cumulative cost of the project versus budgeted cost, with continuous updates throughout the project.
    Requirements tracking metric: track the number of requirements changes at the project level.
    Design tracking metric: track the number of requirements implemented in design versus the number of requirements written.
    Fault-type tracking metric: track causes of faults.
    Remaining defect metrics: track faults per month for the project and use a Rayleigh curve to project the number of faults in the months ahead during development (see the sketch below).
    Review effectiveness metric: track error density by stages of review and use control-chart methods to flag exceptionally high or low data points.
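
A hedged sketch of the Rayleigh-curve projection idea: assume the cumulative faults by month t follow F(t) = K(1 - exp(-t^2 / (2c^2))), where K (total expected faults) and c (month of the peak) would in practice be fitted to the faults observed so far; the parameter values below are hypothetical:

```python
import math

def rayleigh_cum(t: float, K: float, c: float) -> float:
    """Cumulative faults by time t under a Rayleigh model:
    K = total expected faults, c = location of the monthly peak."""
    return K * (1 - math.exp(-t * t / (2 * c * c)))

# Hypothetical fitted parameters: 200 total faults, peak in month 4.
K, c = 200.0, 4.0

# Projected faults arriving in each month = difference of cumulative values.
for month in range(1, 10):
    monthly = rayleigh_cum(month, K, c) - rayleigh_cum(month - 1, K, c)
    print(f"month {month}: ~{monthly:.0f} projected faults")
```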

  • Key Points

    Software quality metrics can be grouped according to the software life cycle into end-product, in-process, and maintenance quality metrics.

    Product quality metrics:
    Mean time to failure
    Defect density
    Customer-reported problems
    Customer satisfaction

    In-process quality metrics:
    Defect density during formal machine testing
    Defect arrival pattern during formal machine testing
    Phase-based defect removal pattern
    Defect removal effectiveness

    Maintenance quality metrics:
    Fix backlog and backlog management index
    Fix response time and fix responsiveness
    Percent delinquent fixes
    Defective fixes