
SOFTWARE QUALITY & TESTING (MSIT - 32)

Contributing Author:
Dr. B.N. Subraya
Infosys Technologies Ltd., Mysore


Contents

Chapter 1
INTRODUCTION TO SOFTWARE TESTING
1.1 Learning Objectives
1.2 Introduction
1.3 What is Testing?
1.4 Approaches to Testing
1.5 Importance of Testing
1.6 Hurdles in Testing
1.7 Testing Fundamentals

Chapter 2
SOFTWARE QUALITY ASSURANCE
2.1 Learning Objectives
2.2 Introduction
2.3 Quality Concepts
2.4 Quality of Design
2.5 Quality of Conformance
2.6 Quality Control (QC)
2.7 Quality Assurance (QA)
2.8 Software Quality Assurance (SQA)
2.9 Formal Technical Reviews (FTR)
2.10 Statistical Quality Assurance
2.11 Software Reliability
2.12 The SQA Plan

Chapter 3
PROGRAM INSPECTIONS, WALKTHROUGHS AND REVIEWS
3.1 Learning Objectives
3.2 Introduction
3.3 Inspections and Walkthroughs
3.4 Code Inspections
3.5 An Error Checklist for Inspections
3.6 Walkthroughs

Chapter 4
TEST CASE DESIGN
4.1 Learning Objectives
4.2 Introduction
4.3 White Box Testing
4.4 Basis Path Testing
4.5 Control Structure Testing
4.6 Black Box Testing
4.7 Static Program Analysis
4.8 Automated Testing Tools

Chapter 5
TESTING FOR SPECIALIZED ENVIRONMENTS
5.1 Learning Objectives
5.2 Introduction
5.3 Testing GUIs
5.4 Testing of Client/Server Architectures
5.5 Testing Documentation and Help Facilities

Chapter 6
SOFTWARE TESTING STRATEGIES
6.1 Learning Objectives
6.2 Introduction
6.3 A Strategic Approach to Software Testing
6.4 Verification and Validation
6.5 Organizing for Software Testing
6.6 A Software Testing Strategy
6.7 Strategic Issues
6.8 Unit Testing
6.9 Integration Testing
6.10 Validation Testing
6.11 System Testing
6.12 Summary

Chapter 7
TESTING OF WEB BASED APPLICATIONS
7.1 Introduction
7.2 Testing of Web Based Applications: Technical Peculiarities
7.3 Testing of Static Web Based Applications
7.4 Testing of Dynamic Web Based Applications
7.5 Future Challenges

Chapter 8
TEST PROCESS MODEL
8.0 Need for Test Process Model
8.1 Test Process Cluster

Chapter 9
TEST METRICS
9.0 Introduction
9.1 Overview of the Role and Use of Metrics
9.2 Primitive Metrics and Computed Metrics
9.3 Metrics Typically Used within the Testing Process
9.4 Defect Detection Effectiveness Percentage (DDE)
9.5 Setting Up and Administering a Metrics Program


    Chapter 1

    Introduction to Software Testing

    1.1 LEARNING OBJECTIVES

You will learn about:

• What is software testing?
• The need for software testing
• Various approaches to software testing
• Defect distribution
• Software testing fundamentals

1.2 INTRODUCTION

Software testing is a critical element of software quality assurance and represents the ultimate process for ensuring the correctness of the product. A quality product enhances customer confidence in using the product and thereby improves the business economics. In other words, a good quality product means zero defects, which derives from a better quality process in testing.

The definition of testing is not well understood. People often use an incorrect definition of the word "testing", and this is a primary cause of poor program testing. Examples of such definitions are statements like "Testing is the process of demonstrating that errors are not present", "The purpose of testing is to show that a program performs its intended functions correctly", and "Testing is the process of establishing confidence that a program does what it is supposed to do".

Testing a product means adding value to it, that is, raising the quality or reliability of the program. Raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many errors as possible. Thus a more appropriate definition is:

    Testing is the process of executing a program with the intent of finding errors.

    Purpose of Testing

To show the software works: known as demonstration-oriented testing.

To show the software doesn't work: known as destruction-oriented testing.

To minimize the risk of the software not working up to an acceptable level: known as evaluation-oriented testing.

    Need for Testing

Defects can exist in software, as it is developed by human beings who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and that the customers' day-to-day operations are not affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:

• Poor understanding and incomplete requirements
• Unrealistic schedules
• Fast changes in requirements
• Too many assumptions and complacency

Some of the major computer system failures listed below give ample evidence that testing is an important activity of the software quality process.

• In April of 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.

• On June 4, 1996, the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point value to a 16-bit signed integer.

• In January of 2001, newspapers reported that a major European railroad was hit by the after-effects of the Y2K bug. The company found that many of its newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.

• In April of 1998, a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.

• The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999, according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.

• In November of 1997, the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.

• Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.

All the above incidents reiterate the importance of thorough testing of software applications and products before they are put into production. They clearly demonstrate that the cost of rectifying a defect during development is much less than that of rectifying a defect in production.
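The Ariane 5 incident above stemmed from an unguarded conversion of a 64-bit floating-point value into a 16-bit signed integer. The failure mode can be sketched as follows; the sensor value is hypothetical and the fixed-width conversion is a Python stand-in (the original flight software was Ada):

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(x: float) -> int:
    # Mimics a raw fixed-width conversion: an out-of-range value
    # silently wraps into the 16-bit range instead of being rejected.
    return ((int(x) - INT16_MIN) % 65536) + INT16_MIN

def to_int16_checked(x: float) -> int:
    # Defensive version: validate the range before converting, so an
    # out-of-range value is caught instead of corrupting state.
    if not INT16_MIN <= x <= INT16_MAX:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return int(x)

horizontal_bias = 65000.0  # hypothetical reading, too large for 16 bits
print(to_int16_unchecked(horizontal_bias))  # wraps to -536, a nonsense value
```

A test suite that probed values beyond the 16-bit range would have forced either the checked conversion or an explicit exception handler before flight.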

1.3 WHAT IS TESTING?

• "Testing is an activity in which a system or component is executed under specified conditions; the results are observed and recorded, and an evaluation is made of some aspect of the system or component." - IEEE

• Executing a system or component is known as dynamic testing.

• Review, inspection and verification of documents (requirements, design documents, test plans, etc.), code and other work products of software is known as static testing.

• Static testing is found to be the most effective and efficient way of testing.


• Successful testing of software demands both dynamic and static testing.

• Measurements show that a defect discovered during design that costs $1 to rectify at that stage will cost $1,000 to repair in production. This clearly points out the advantage of early testing.

• Testing should start with small measurable units of code, gradually progress towards testing integrated components of the application, and finally be completed with testing at the application level.

• Testing verifies the system against its stated and implied requirements, i.e., is it doing what it is supposed to do? It should also check that the system is not doing what it is not supposed to do, that it takes care of boundary conditions, how the system performs in a production-like environment, and how fast and consistently the system responds when data volumes are high.
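Checking stated requirements and boundary conditions, as described above, can be sketched as follows; `classify_percentage` is a made-up example function, not something from this text:

```python
def classify_percentage(score: int) -> str:
    # Hypothetical stated requirement: scores run 0..100, and 50 or
    # above is a pass; anything outside the range is rejected.
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "pass" if score >= 50 else "fail"

# Boundary-condition tests probe the edges of every range,
# where defects tend to cluster.
assert classify_percentage(0) == "fail"      # lower boundary
assert classify_percentage(49) == "fail"     # just below the pass mark
assert classify_percentage(50) == "pass"     # exactly at the pass mark
assert classify_percentage(100) == "pass"    # upper boundary
for invalid in (-1, 101):                    # just outside the valid range
    try:
        classify_percentage(invalid)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Note that the tests also cover what the system is *not* supposed to do: out-of-range inputs must be rejected rather than silently classified.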

    Reasons for Software Bugs

Following are the reasons for software bugs:

• Miscommunication or no communication - as to the specifics of what an application should or shouldn't do (the application's requirements).

• Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

• Programming errors - programmers, like anyone else, can make mistakes.

• Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway: redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, and so on. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. The enthusiasm of the engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

• Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.


• Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ("if it was hard to write, it should be hard to read").

• Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

1.4 APPROACHES TO TESTING

Many approaches have been defined in the literature. The importance of any approach depends on the type of system you are testing. Some of the approaches are given below:

    Debugging-oriented:

This approach identifies errors while debugging the program. There is no difference between testing and debugging.

    Demonstration-oriented:

The purpose of testing is to show that the software works. Here, most of the time, the software is demonstrated in a normal sequence/flow; all the branches may not be tested. This approach mainly satisfies the customer and adds no value to the program.

    Destruction-oriented:

    The purpose of testing is to show the software doesn’t work.

It is a sadistic process, which explains why most people find it difficult. It is difficult to design test cases to test the program.

    Evaluation-oriented:

    The purpose of testing is to reduce the perceived risk of not working up to an acceptable value.

    Prevention-oriented:

Testing can be viewed as a mental discipline that results in low-risk software. It is always better to forecast possible errors and rectify them early.

In general, program testing is more properly viewed as the destructive process of trying to find the errors (whose presence is assumed) in a program. A successful test case is one that furthers progress in this direction by causing the program to fail. However, one wants to use program testing to establish some degree of confidence that a program does what it is supposed to do and does not do what it is not supposed to do; but this purpose is best achieved by a diligent exploration for errors.

1.5 IMPORTANCE OF TESTING

Testing cannot be eliminated from the life cycle, as the end product must be bug free and reliable. Testing is important because:

• Testing is a critical element of software quality assurance
• Post-release removal of defects is the most expensive
• A significant portion of life cycle effort is expended on testing

In a typical service-oriented project, about 20-40% of project effort is spent on testing. It is much more in the case of "human-rated" software.

For example, at Microsoft the tester-to-developer ratio is 1:1, whereas at the NASA shuttle development center (SEI Level 5) the ratio is 7:1. This shows how integral testing is to quality assurance.

1.6 HURDLES IN TESTING

As in many other development projects, testing is not free from hurdles. Some of the hurdles normally encountered are:

• Usually a late activity in the project life cycle
• No "concrete" output and therefore difficult to measure the value addition
• Lack of historical data
• Recognition of importance is relatively less
• Politically damaging, as you are challenging the developer
• Delivery commitments
• Too much optimism that the software always works correctly

Defect Distribution

In a typical project life cycle, testing is a late activity. When the product is tested, defects may be due to many reasons: programming errors, defects in design, or defects introduced at any other stage of the life cycle. The overall defect distribution is shown in Fig 1.1.


    Fig 1.1: Software Defect Distribution

1.7 TESTING FUNDAMENTALS

Before understanding the process of testing software, it is necessary to learn the basic principles of testing.

1.7.1 Testing Objectives

• "Testing is a process of executing a program with the intent of finding an error.

• A good test is one that has a high probability of finding an as yet undiscovered error.

• A successful test is one that uncovers an as yet undiscovered error."

The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort.

    Secondary benefits include:

• Demonstrating that software functions appear to be working according to specification.

• Showing that performance requirements appear to have been met.

• Data collected during testing provides a good indication of software reliability and some indication of software quality.

(Fig 1.1 data: Requirements 56%, Design 27%, Code 7%, Other 10%)


Testing cannot show the absence of defects; it can only show that defects are present.

1.7.2 Test Information Flow

A typical test information flow is shown in Fig 1.2.

    Fig 1.2: Test information flow in a typical software test life cycle

    In the above figure:

• Software configuration includes a Software Requirements Specification, a Design Specification, and source code.

• A test configuration includes a test plan and procedures, test cases, and testing tools.

• It is difficult to predict the time needed to debug the code; hence it is difficult to schedule.

1.7.3 Test Case Design

Some of the points to be noted during test case design are:

• It can be as difficult as the initial design.

• One can test whether a component conforms to its specification - black box testing.

• One can test whether a component conforms to its design - white box testing.


• Testing cannot prove correctness, as not all execution paths can be tested.

Consider the following example, shown in Fig 1.3.

Fig 1.3

A program with a structure as illustrated above (with fewer than 100 lines of Pascal code) has about 100,000,000,000,000 possible paths. If one attempted to test these at a rate of 1,000 tests per second, it would take about 3,170 years to test all paths. This shows that exhaustive testing of software is not possible.
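The arithmetic behind that estimate can be checked directly; this sketch uses the round path count quoted above:

```python
paths = 10**14            # ~100,000,000,000,000 possible execution paths
rate = 1000               # tests executed per second
seconds = paths / rate    # 10**11 seconds of nonstop testing
years = seconds / (60 * 60 * 24 * 365)
print(int(years))         # 3170 -- millennia, hence exhaustive testing is out
```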

    QUESTIONS

1. What is software testing? Explain the purpose of testing.

2. Explain the origin of the defect distribution in a typical software development life cycle.

    _________


    Chapter 2

    Software Quality Assurance

    2.1 LEARNING OBJECTIVES

You will learn about:

• Basic principles of software quality
• Software quality assurance and SQA activities
• Software reliability

    2.2 INTRODUCTION

Quality is defined as "a characteristic or attribute of something". As an attribute of an item, quality refers to measurable characteristics - things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is more challenging to characterize than physical objects.

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.

Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.


    Software Quality Assurance encompasses

• A quality management approach
• Effective software engineering technology
• Formal technical reviews
• A multi-tiered testing strategy
• Control of software documentation and the changes made to it
• A procedure to assure compliance with software development standards
• Measurement and reporting mechanisms

    Software quality is achieved as shown in figure 2.1:

    Figure 2.1: Achieving Software Quality

2.3 QUALITY CONCEPTS

What are quality concepts?

• Quality
• Quality control
• Quality assurance
• Cost of quality

    Quality



The American Heritage Dictionary defines quality as "a characteristic or attribute of something". As an attribute of an item, quality refers to measurable characteristics - things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is more challenging to characterize than a physical object.

Nevertheless, measures of a program's characteristics do exist. These properties include:

    1. Cyclomatic complexity

    2. Cohesion

    3. Number of function points

    4. Lines of code
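Of these, cyclomatic complexity is the most directly computable: for a control-flow graph with E edges, N nodes, and P connected components, V(G) = E - N + 2P. A sketch on a made-up graph (a loop whose body contains an if/else; the graph itself is not from this text):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P for a control-flow graph.
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph: a while loop containing an if/else.
nodes = {"entry", "loop_test", "if_test", "then", "else", "join", "exit"}
edges = [
    ("entry", "loop_test"),
    ("loop_test", "exit"),      # loop condition false
    ("loop_test", "if_test"),   # loop condition true
    ("if_test", "then"), ("if_test", "else"),
    ("then", "join"), ("else", "join"),
    ("join", "loop_test"),      # back edge of the loop
]
print(cyclomatic_complexity(edges, nodes))  # 8 - 7 + 2 = 3 independent paths
```

The two decision points (the loop condition and the if) give V(G) = 3, which is also the number of basis paths a test suite would need to cover.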

When we examine an item based on its measurable characteristics, two kinds of quality may be encountered:

• Quality of design
• Quality of conformance

    2.4 QUALITY OF DESIGN

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to specifications.

    2.5 QUALITY OF CONFORMANCE

Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.

In software development, quality of design encompasses the requirements, specifications, and design of the system.

Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.


2.6 QUALITY CONTROL (QC)

QC is the series of inspections, reviews, and tests used throughout the development cycle to ensure that each work product meets the requirements placed upon it. QC includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views QC as part of the manufacturing process. QC activities may be fully automated, manual, or a combination of automated tools and human interaction. An essential concept of QC is that all work products have defined and measurable specifications to which we may compare the output of each process; the feedback loop is essential to minimize the defects produced.

2.7 QUALITY ASSURANCE (QA)

QA consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through QA identify problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues.

2.7.1 Cost of Quality

Cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to provide a baseline for the current cost of quality, to identify opportunities for reducing the cost of quality, and to provide a normalized basis of comparison. The basis of normalization is usually money. Once we have normalized quality costs on a monetary basis, we have the necessary data to evaluate where the opportunities lie to improve our processes; furthermore, we can evaluate the effect of changes in monetary terms.

Quality costs may be divided into costs associated with:

• Prevention
• Appraisal
• Failure

Prevention costs include:

• Quality planning
• Formal technical reviews


• Test equipment
• Training

Appraisal costs include activities to gain insight into product condition the "first time through" each process.

Examples of appraisal costs include:

• In-process and inter-process inspection
• Equipment calibration and maintenance
• Testing

Failure costs are costs that would disappear if no defects appeared before shipping a product to customers. Failure costs may be subdivided into internal and external failure costs.

Internal failure costs are costs incurred when we detect an error in our product prior to shipment. Internal failure costs include:

• Rework
• Repair
• Failure mode analysis

External failure costs are the costs associated with defects found after the product has been shipped to the customer. Examples of external failure costs are:

    1. Complaint Resolution

    2. Product return and replacement

    3. Helpline support

    4. Warranty work
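Normalizing these categories to money, as the section describes, can be sketched as a simple breakdown; every figure below is hypothetical, not data from this text:

```python
# Hypothetical annual quality costs (currency units) per category.
costs = {
    "prevention":       {"quality planning": 20_000, "training": 15_000},
    "appraisal":        {"inspections": 30_000, "testing": 60_000},
    "internal failure": {"rework": 45_000, "repair": 10_000},
    "external failure": {"warranty work": 80_000, "helpline support": 25_000},
}

total = sum(sum(items.values()) for items in costs.values())
for category, items in costs.items():
    subtotal = sum(items.values())
    # Show each category as a share of the total cost of quality.
    print(f"{category:17s} {subtotal:8,d}  ({subtotal / total:.0%})")
```

Expressing each category as a share of the total makes it easy to see where failure costs dominate, which is exactly where additional prevention and appraisal spending pays off.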

2.8 SOFTWARE QUALITY ASSURANCE (SQA)

Quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.


The above definition emphasizes three important points:

1. Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.

2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.

3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire for good maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, its quality is questionable.

2.8.1 Background Issues

QA is an essential activity for any business that produces products to be used by others.

The SQA group serves as the customer's in-house representative. That is, the people who perform SQA must look at the software from the customer's point of view.

The SQA group attempts to answer the questions below and hence ensure the quality of the software. The questions are:

    1. Has software development been conducted according to pre-established standards?

    2. Have technical disciplines properly performed their role as part of the SQA activity?

SQA Activities

The SQA plan is interpreted as shown in Fig. 2.2.

SQA comprises a variety of tasks associated with two different constituencies:

1. The software engineers who do the technical work, by

• Performing quality assurance by applying technical methods

• Conducting formal technical reviews

• Performing well-planned software testing

2. The SQA group, which has responsibility for

• Quality assurance planning and oversight

• Record keeping

• Analysis and reporting


QA activities performed by the SE team and the SQA group are governed by a plan that specifies:

• Evaluations to be performed

• Audits and reviews to be performed

• Standards that are applicable to the project

• Procedures for error reporting and tracking

• Documents to be produced by the SQA group

• Amount of feedback provided to the software project team

    Figure 2.2: Software Quality Assurance Plan

What are the activities performed by the SQA group and the SE team? The SQA group:

• Prepares an SQA plan for the project

• Participates in the development of the project's software process description

• Reviews software engineering activities to verify compliance with the defined software process

• Audits designated software work products to verify compliance with those defined as part of the software process

• Ensures that deviations in software work and work products are documented and handled according to a documented procedure

• Records any noncompliance and reports it to senior management



    2.8.2 Software Reviews

Software reviews are a "filter" for the software engineering process. That is, reviews are applied at various points during software development and serve to uncover errors that can then be removed. Software reviews serve to "purify" the software work products that occur as a result of analysis, design, and coding.

Any review is a way of using the diversity of a group of people to:

1. Point out needed improvements in the product of a single person or a team;

2. Confirm those parts of a product in which improvement is either not desired or not needed;

3. Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, in order to make technical work more manageable.

There are many different types of reviews that can be conducted as part of software engineering:

1. An informal meeting in which technical problems are discussed.

2. A formal presentation of a software design to an audience of customers, management, and technical staff.

3. A formal technical review (FTR), the most effective filter from a quality assurance standpoint. Conducted by software engineers for software engineers, the FTR is an effective means of improving software quality.

2.8.3 Cost Impact of Software Defects

To illustrate the cost impact of early error detection, we consider a series of relative costs based on actual cost data collected for large software projects.

Assume that an error uncovered during design will cost 1.0 monetary unit to correct. Relative to this cost, the same error uncovered just before testing commences will cost 6.5 units; during testing, 15 units; and after release, between 60 and 100 units.
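These relative costs can be applied to a hypothetical defect profile. The 1.0 / 6.5 / 15 ratios are the figures quoted above; the 75-unit value for post-release errors (the midpoint of the 60-100 range) and the defect counts are assumptions for illustration.

```python
# Relative cost to correct an error, by the phase in which it is found.
# 75.0 is an assumed midpoint of the 60-100 range quoted in the text.
RELATIVE_COST = {
    "design": 1.0,
    "before testing": 6.5,
    "during testing": 15.0,
    "after release": 75.0,
}

def total_correction_cost(defects_by_phase):
    """Total correction cost (in monetary units) for a defect profile."""
    return sum(RELATIVE_COST[phase] * count
               for phase, count in defects_by_phase.items())

# Two hypothetical projects finding the same 66 defects at different times:
early = {"design": 50, "before testing": 10, "during testing": 5, "after release": 1}
late = {"design": 5, "before testing": 10, "during testing": 25, "after release": 26}
print(total_correction_cost(early))  # 265.0
print(total_correction_cost(late))   # 2395.0
```

The same number of defects costs roughly nine times as much to correct when most of them survive until testing and release.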

    2.8.4 Defect Amplification and Removal

A defect amplification model can be used to illustrate the generation and detection of errors during the preliminary design, detail design, and coding steps of the software engineering process. The model is illustrated schematically in Figure 2.3.

A box represents a software development step. During the step, errors may be inadvertently generated. A review may fail to uncover newly generated errors and errors from previous steps, resulting in some number of errors that are passed through. In some cases, errors passed through from previous steps are amplified (amplification factor, x) by current work. The box subdivisions represent each of these characteristics and the percent efficiency for detecting errors, a function of the thoroughness of the review.

Figure 2.3: Defect Amplification Model

Figure 2.4 illustrates a hypothetical example of defect amplification for a software development process in which no reviews are conducted. As shown in the figure, each test step is assumed to uncover and correct fifty percent of all incoming errors without introducing new errors (an optimistic assumption). Ten preliminary design errors are amplified to 94 errors before testing commences, and twelve latent defects are released to the field. Figure 2.5 considers the same conditions except that design and code reviews are conducted as part of each development step. In this case, ten initial preliminary design errors are amplified to only 24 errors before testing commences, and only three latent defects exist.

By recalling the relative costs associated with the discovery and correction of errors, overall costs (with and without review for our hypothetical example) can be established.
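The mechanics of the model can be sketched as a small simulation. The amplification factors, new-error counts, and detection efficiencies below are assumptions chosen for illustration; they do not reproduce the exact figures of Figures 2.4 and 2.5.

```python
def run_steps(incoming_errors, steps):
    """Propagate errors through a sequence of development steps.

    Each step is a tuple (frac_amplified, amplification_factor,
    newly_generated, detection_efficiency): a fraction of incoming
    errors is amplified by current work, new errors are generated,
    and the step's review removes `detection_efficiency` of the total.
    """
    errors = float(incoming_errors)
    for frac_amplified, factor, new, efficiency in steps:
        amplified = errors * frac_amplified * factor
        passed_through = errors * (1 - frac_amplified)
        total = passed_through + amplified + new
        errors = total * (1 - efficiency)  # errors passed to the next step
    return errors

# Three steps (preliminary design, detail design, code) with no reviews
# versus with reviews -- illustrative numbers only:
no_reviews = [(0.0, 1.0, 10, 0.0), (0.5, 1.5, 25, 0.0), (0.5, 3.0, 25, 0.2)]
with_reviews = [(0.0, 1.0, 10, 0.7), (0.5, 1.5, 25, 0.5), (0.5, 3.0, 25, 0.6)]
print(round(run_steps(0, no_reviews), 2))    # 80.0 errors reach testing
print(round(run_steps(0, with_reviews), 2))  # 21.5 errors reach testing
```

With no filtering before testing, far more defects survive; even moderately effective reviews at each step sharply reduce the errors passed to testing, mirroring the contrast between Figures 2.4 and 2.5.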

To conduct reviews, a developer must expend time and effort, and the development organization must spend money. However, the results of the preceding example leave little doubt that we have encountered a "pay now or pay much more later" syndrome. Formal technical reviews (for design and other technical activities) provide a demonstrable cost benefit, and they should be conducted.


Figure 2.4: Defect Amplification - No Reviews

Figure 2.5: Defect Amplification - Reviews Conducted


2.9 FORMAL TECHNICAL REVIEWS (FTR)

An FTR is an SQA activity that is performed by software engineers. The objectives of the FTR are:

• To uncover errors in function, logic, or implementation for any representation of the software

• To verify that the software under review meets its requirements

• To ensure that the software has been represented according to predefined standards

• To achieve software that is developed in a uniform manner

• To make projects more manageable

In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to software analysis, design, and implementation. The FTR also serves to promote backup and continuity, because a number of people become familiar with parts of the software that they may not have otherwise seen.

The FTR is actually a class of reviews that includes walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. Each FTR is conducted as a meeting and will be successful only if it is properly planned, controlled, and attended.

    Types of Formal Technical Review

While the focus of this research is on the individual evaluation aspects of reviews, for context several other FTR techniques are discussed as well. Among the most common forms of FTR are the following:

1. Desk Checking, or reading over a program by hand while sitting at one's desk, is the oldest software review technique [Adrion et al. 1982]. Strictly speaking, desk checking is not a form of FTR since it does not involve a formal process or a group. Moreover, desk checking is generally perceived as ineffective and unproductive due to (a) its lack of discipline and (b) the general ineffectiveness of people in detecting their own errors. To correct for the second problem, programmers often swap programs and check each other's work. Since desk checking is an individual process not involving group dynamics, research in this area would be relevant, but none applicable to the current research was found.

It should be noted that Humphrey [1995] has developed a review method, called Personal Review (PR), which is similar to desk checking. In PR, each programmer examines his own products to find as many defects as possible, utilizing a disciplined process in conjunction with Humphrey's Personal Software Process (PSP) to improve his own work. The review strategy includes the use of checklists to guide the review process, review metrics to improve the process, and defect causal analysis to prevent the same defects from recurring in the future. The approach taken in developing the Personal Review process is an engineering one; no reference is made in Humphrey [1995] to cognitive theory.

2. Peer Rating is a technique in which anonymous programs are evaluated in terms of their overall quality, maintainability, extensibility, usability, and clarity by selected programmers who have similar backgrounds [Myers 1979]. Shneiderman [1980] suggests that peer ratings of programs are productive, enjoyable, and non-threatening experiences. The technique is often referred to as Peer Reviews [Shneiderman 1980], but some authors use the term peer review for generic review methods involving peers [Paulk et al. 1993; Humphrey 1989].

3. Walkthroughs are presentation reviews in which a review participant, usually the software author, narrates a description of the software while the other members of the review group provide feedback throughout the presentation [Freedman and Weinberg 1990; Gilb and Graham 1993]. It should be noted that the term "walkthrough" has been used variously in the literature. Some authors unite it with "structured" and treat it as a disciplined, formal review process [Myers 1979; Yourdon 1989; Adrion et al. 1982]. However, the literature generally describes a walkthrough as an undisciplined process without advance preparation on the part of reviewers and with the meeting focused on the education of participants [Fagan 1976].

4. Round-robin Review is an evaluation process in which a copy of the review materials is made available and routed to each participant; the reviewers then write their comments and questions concerning the materials and pass the materials with comments to another reviewer, and eventually to the moderator or author [Hart 1982].

5. Inspection was developed by Fagan [1976, 1986] as a well-planned and well-defined group review process to detect software defects; defect repair occurs outside the scope of the process. The original Fagan Inspection (FI) is the most cited review method in the literature and is the source for a variety of similar inspection techniques [Tjahjono 1996]. Among the FI-derived techniques are Active Design Review [Parnas and Weiss 1987], Phased Inspection [Knight and Myers 1993], N-Fold Inspection [Schneider et al. 1992], and FTArm [Tjahjono 1996]. Unlike the review techniques previously discussed, inspection is often used to control the quality and productivity of the development process.

    A Fagan Inspection consists of six well-defined phases:

i. Planning. Participants are selected and the materials to be reviewed are prepared and checked for review suitability.

ii. Overview. The author educates the participants about the review materials through a presentation.

iii. Preparation. The participants learn the materials individually.

iv. Meeting. The reader (a participant other than the author) narrates or paraphrases the review materials statement by statement, and the other participants raise issues and questions. Questions continue on a point only until an error is recognized or the item is deemed correct.


    v. Rework. The author fixes the defects identified in the meeting.

    vi. Follow-up. The “corrected” products are reinspected.

    Practitioner Evaluation is primarily associated with the Preparation phase.

In addition to classification by technique type, FTR may also be classified on other dimensions, including the following:

A. Small vs. Large Team Reviews. Siy [1996] classifies reviews into those conducted by small (1-4 reviewers) [Bisant and Lyle 1996] and large (more than 4 reviewers) [Fagan 1976, 1986] teams. If each reviewer contributes different expertise and experience, a large team should allow a wider variety of defects to be detected and thus better coverage. However, a large team requires more effort due to more individuals inspecting the artifact, generally involves greater scheduling problems [Ballman and Votta 1994], and may make it more difficult for all participants to participate fully.

B. No vs. Single vs. Multiple Session Reviews. The traditional Fagan Inspection provided for one session to inspect the software artifact, with the possibility of a follow-up session to inspect corrections. However, variants have been suggested.

Humphrey [1989] comments that three-quarters of the errors found in well-run inspections are found during preparation. Based on an economic analysis of a series of inspections at AT&T, Votta [1993] argues that inspection meetings are generally not economic and should be replaced with depositions, in which the author and (optionally) the moderator meet separately with inspectors to collect their results.

On the other hand, some authors [Knight and Myers 1993; Schneider et al. 1992] have argued for multiple sessions, conducted either in series or in parallel. Gilb and Graham [1993] do not use multiple inspection sessions but add a root cause analysis session immediately after the inspection meeting.

C. Nonsystematic vs. Systematic Defect-Detection Technique Reviews. The most frequently used detection methods (ad hoc and checklist) rely on nonsystematic techniques, and reviewer responsibilities are general and not differentiated for single-session reviews [Siy 1996]. However, some methods employ more prescriptive techniques, such as questionnaires [Parnas and Weiss 1987] and correctness proofs [Britcher 1988].

D. Single Site vs. Multiple Site Reviews. The traditional FTR techniques have assumed that the group-meeting component would occur face-to-face at a single site. However, with improved telecommunications, and especially with computer support (see item F below), it has become increasingly feasible to conduct even the group meeting from multiple sites.

E. Synchronous vs. Asynchronous Reviews. The traditional FTR techniques have also assumed that the group-meeting component would occur in real time, i.e., synchronously. However, some newer techniques that eliminate the group meeting or are based on computer support utilize asynchronous reviews.

F. Manual vs. Computer-supported Reviews. In recent years, several computer-supported review systems have been developed [Brothers et al. 1990; Johnson and Tjahjono 1993; Gintell et al. 1993; Mashayekhi et al. 1994]. The type of support varies from simple augmentation of manual practices [Brothers et al. 1990; Gintell et al. 1993] to totally new review methods [Johnson and Tjahjono 1993].

Economic Analyses of Formal Technical Review

Wheeler et al. [1996], after reviewing a number of studies that support the economic benefit of FTR, conclude that inspections reduce the number of defects throughout development, cause defects to be found earlier in the development process where they are less expensive to correct, and uncover defects that would be difficult or impossible to discover by testing. They also note that "these benefits are not without their costs, however. Inspections require an investment of approximately 15 percent of the total development cost early in the process [p. 11]."

In discussing overall economic effects, Wheeler et al. cite Fagan [1986] to the effect that investment in inspections has been reported to yield a 25-to-35 percent overall increase in productivity. They also reproduce a graphical analysis from Boehm [1987] indicating that inspections reduce total development cost by approximately 30%.

The Wheeler et al. [1996] analysis does not specify the relative value of Practitioner Evaluation (PE) to FTR, but two recent economic analyses provide indications.

• Votta [1993]. After analyzing data collected from 13 traditional inspections conducted at AT&T, Votta reports that the approximately 4% increase in faults found at collection meetings (synergy) does not economically justify the development delays caused by the need to schedule meetings and the additional developer time associated with the actual meetings. He also argues that it is not cost-effective to use the collection meeting to reduce the number of items incorrectly identified as defective prior to the meeting ("false positives"). Based on these findings, he concludes that almost all inspection meetings requiring all reviewers to be present should be replaced with depositions, which are three-person meetings with only the author, moderator, and one reviewer present.

• Siy [1996]. In his analysis of the factors driving inspection costs and benefits, Siy reports that changes in FTR structural elements, such as group size, number of sessions, and coordination of multiple sessions, were largely ineffective in improving the effectiveness of inspections. Instead, inputs into the process (reviewers and code units) accounted for more outcome variation than structural factors. He concludes by stating that "better techniques by which reviewers detect defects, not better process structures, are the key to improving inspection effectiveness [Abstract, p. 2]" (emphasis added).

Votta's analysis effectively attributes most of the economic benefit of FTR to PE, and Siy's explicitly states that better PE techniques "are the key to improving inspection effectiveness." These findings, if supported by additional research, would further support the contention that a better understanding of Practitioner Evaluation is necessary.

Psychological Aspects of FTR

Work on the psychological aspects of FTR can be categorized into four groups.

1. Egoless Programming. Gerald Weinberg [1971] began the examination of psychological issues associated with software review in his work on egoless programming. According to Weinberg, programmers are often reluctant to allow their programs to be read by other programmers, because the programs are often considered an extension of the self and errors discovered in the programs a challenge to one's self-image. Two implications of this theory are as follows:

i. The ability of a programmer to find errors in his own work tends to be impaired, since he tends to justify his own actions; it is therefore more effective to have other people check his work.

ii. Each programmer should detach himself from his own work. The work should be considered public property that other people can freely criticize, and thus improve; otherwise, one tends to become defensive and reluctant to expose one's own failures.

These two concepts have led to the justification of FTR groups, as well as the establishment of independent quality assurance groups that specialize in finding software defects, in many software organizations [Humphrey 1989].

2. Role of Management. Another psychological aspect of FTR that has been examined is the recording of data and its dissemination to management. According to Dobbins [1987], this must be done in such a way that individual programmers do not feel intimidated or threatened.

3. Positive Psychological Impacts. Hart [1982] observes that reviews can make one more careful in writing programs (e.g., double-checking code) in anticipation of having to present or share the programs with other participants. Thus, errors are often eliminated even before the actual review sessions.

4. Group Process. Most FTR methods are implemented using small groups. Therefore, several key issues from small-group theory apply to FTR, such as groupthink (the tendency to suppress dissent in the interests of group harmony), group deviants (influence by a minority), and domination of the group by a single member. Other key issues include social facilitation (the presence of others boosts one's performance) and social loafing (one member free-rides on the group's effort) [Myers 1990]. The issue of moderator domination in inspections is also documented in the literature [Tjahjono 1996].

Perhaps the most interesting research from the perspective of the current study is that of Sauer et al. [2000]. This research is unusual in that it has an explicit theoretical basis and outlines a behaviorally motivated program of research into the effectiveness of software development technical reviews. The finding that most of the variation in the effectiveness of software development technical reviews is the result of variations in expertise among the participants provides additional motivation for developing a solid understanding of Formal Technical Review at the individual level.

It should be noted that all of this work, while based on psychological theory, does not address the issue of how practitioners actually evaluate software artifacts.

2.9.1 The Review Meeting

The focus of the FTR is on a work product - a component of the software. At the end of the review, all attendees of the FTR must decide whether to:

1. Accept the work product without further modification;

2. Reject the work product due to severe errors (once corrected, another review must be performed); or

3. Accept the work product provisionally (minor errors have been encountered and must be corrected, but no additional review will be required).

Once the decision is made, all FTR attendees complete a sign-off, indicating their participation in the review and their concurrence with the review team's findings.

2.9.2 Review Reporting and Record Keeping

The review summary report is typically a single-page form. It becomes part of the project historical record and may be distributed to the project leader and other interested parties. The review issues list serves two purposes:

    1. To identify problem areas within the product

    PDF created with pdfFactory Pro trial version www.pdffactory.com

    http://www.pdffactory.com

  • 33MSIT 32 Software Quality and Testing

2. To serve as an action-item checklist that guides the producer as corrections are made. An issues list is normally attached to the summary report.

It is important to establish a follow-up procedure to ensure that items on the issues list have been properly corrected. Unless this is done, it is possible that issues raised can "fall between the cracks." One approach is to assign responsibility for follow-up to the review leader. A more formal approach assigns responsibility to an independent SQA group.

2.9.3 Review Guidelines

The following represents a minimum set of guidelines for formal technical reviews:

• Review the product, not the producer

• Set an agenda and maintain it

• Limit debate and rebuttal

• Enunciate problem areas, but don't attempt to solve every problem noted

• Take written notes

• Limit the number of participants and insist upon advance preparation

• Develop a checklist for each work product that is likely to be reviewed

• Allocate resources and time in the schedule for FTRs

• Conduct meaningful training for all reviewers

• Review your earlier reviews

2.10 STATISTICAL QUALITY ASSURANCE

Statistical quality assurance reflects a growing trend throughout industry to become more quantitative about quality. For software, statistical quality assurance implies the following steps:

• Information about software defects is collected and categorized.

• An attempt is made to trace each defect to its underlying cause.

• Using the Pareto principle (80% of the defects can be traced to 20% of all possible causes), isolate the 20% (the "vital few").


• Once the vital few causes have been identified, move to correct the problems that have caused the defects.

This relatively simple concept represents an important step toward the creation of an adaptive software engineering process in which changes are made to improve those elements of the process that introduce errors. To illustrate the process, assume that a software development organization collects information on defects for a period of one year. Some errors are uncovered as the software is being developed; other defects are encountered after the software has been released to its end users.

Although hundreds of errors are uncovered, all can be traced to one of the following causes:

• Incomplete or Erroneous Specification (IES)

• Misinterpretation of Customer Communication (MCC)

• Intentional Deviation from Specification (IDS)

• Violation of Programming Standards (VPS)

• Error in Data Representation (EDR)

• Inconsistent Module Interface (IMI)

• Error in Design Logic (EDL)

• Incomplete or Erroneous Testing (IET)

• Inaccurate or Incomplete Documentation (IID)

• Error in Programming Language Translation of design (PLT)

• Ambiguous or Inconsistent Human-Computer Interface (HCI)

• Miscellaneous (MIS)

To apply statistical SQA, Table 2.1 is built. Once the vital few causes are determined, the software development organization can begin corrective action.

    After analysis, design, coding, testing, and release, the following data are gathered.

Ei = the total number of errors uncovered during the ith step in the software engineering process

Si = the number of serious errors

Mi = the number of moderate errors

Ti = the number of minor errors

PS = size of the product (LOC, design statements, pages of documentation) at the ith step


Ws, Wm, Wt = weighting factors for serious, moderate, and trivial errors, where the recommended values are Ws = 10, Wm = 3, Wt = 1.

The weighting factors for each phase should become larger as development progresses. This rewards an organization that finds errors early.

At each step in the software engineering process, a phase index, PIi, is computed:

PIi = Ws(Si/Ei) + Wm(Mi/Ei) + Wt(Ti/Ei)

The error index EI is computed by calculating the cumulative effect of each PIi, weighting errors encountered later in the software engineering process more heavily than those encountered earlier:

EI = Σ(i x PIi)/PS = (PI1 + 2PI2 + 3PI3 + ... + iPIi)/PS

The error index can be used in conjunction with the information collected in Table 2.1 to develop an overall indication of improvement in software quality.
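The phase index and error index computations above can be expressed directly. This is a minimal sketch, assuming Ei = Si + Mi + Ti for each step; the sample counts are invented for illustration.

```python
WS, WM, WT = 10, 3, 1  # recommended weights for serious, moderate, trivial errors

def phase_index(serious, moderate, minor):
    """PIi = Ws(Si/Ei) + Wm(Mi/Ei) + Wt(Ti/Ei), taking Ei = Si + Mi + Ti."""
    total = serious + moderate + minor
    if total == 0:
        return 0.0
    return (WS * serious + WM * moderate + WT * minor) / total

def error_index(phase_counts, product_size):
    """EI = sum(i * PIi) / PS, weighting later phases more heavily."""
    return sum(i * phase_index(*counts)
               for i, counts in enumerate(phase_counts, start=1)) / product_size

# (serious, moderate, minor) counts for three successive steps -- invented data:
phases = [(2, 3, 5), (4, 6, 10), (1, 2, 7)]
print(round(error_index(phases, product_size=10.0), 2))  # 1.71
```

Because each phase's PIi is multiplied by its step number i, the same mix of errors contributes more to EI the later it is found, which is exactly the incentive the weighting scheme is meant to create.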

Table 2.1: Data Collection for Statistical SQA

Error     Total        Serious      Moderate     Minor
          No.    %     No.    %     No.    %     No.    %
IES       205   22      34   27      68   18     103   24
MCC       156   17      12    9      68   18      76   17
IDS        48    5       1    1      24    6      23    5
VPS        25    3       0    0      15    4      10    2
EDR       130   14      26   20      68   18      36    8
IMI        58    6       9    7      18    5      31    7
EDL        45    5      14   11      12    3      19    4
IET        95   10      12    9      35    9      48   11
IID        36    4       2    2      20    5      14    3
PLT        60    6      15   12      19    5      26    6
HCI        28    3       3    2      17    4       8    2
MIS        56    6       0    0      15    4      41    9
TOTALS    942  100     128  100     379  100     435  100
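Using the total defect counts from the table, the Pareto step of isolating the "vital few" causes can be sketched as:

```python
# Total defect counts per cause, taken from the table above.
DEFECTS = {
    "IES": 205, "MCC": 156, "IDS": 48, "VPS": 25, "EDR": 130, "IMI": 58,
    "EDL": 45, "IET": 95, "IID": 36, "PLT": 60, "HCI": 28, "MIS": 56,
}

def vital_few(counts, threshold=0.5):
    """Return the smallest set of top causes covering `threshold` of all defects."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    selected, covered = [], 0
    for cause, n in ranked:
        selected.append(cause)
        covered += n
        if covered / total >= threshold:
            break
    return selected

print(vital_few(DEFECTS))  # ['IES', 'MCC', 'EDR'] -- about 52% of all 942 defects
```

Here just three of the twelve causes account for over half of all defects recorded, so corrective action would focus there first.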


2.11 SOFTWARE RELIABILITY

Software reliability, unlike many other quality factors, can be measured, directed, and estimated using historical and developmental data. Software reliability is defined in statistical terms as the "probability of failure-free operation of a computer program in a specified environment for a specified time." To illustrate, suppose program X is estimated to have a reliability of 0.96 over 8 elapsed processing hours. In other words, if program X were to be executed 100 times, each run requiring 8 hours of elapsed processing time, it would be likely to operate correctly 96 times out of 100.

2.11.1 Measures of Reliability and Availability

In a computer-based system, a simple measure of reliability is Mean Time Between Failures (MTBF), where

MTBF = MTTF + MTTR

The acronyms MTTF and MTTR stand for Mean Time To Failure and Mean Time To Repair, respectively.

In addition to a reliability measure, we must develop a measure of availability. Software availability is the probability that a program is operating according to requirements at a given point in time, and is defined as:

Availability = MTTF / (MTTF + MTTR) x 100%

The MTBF reliability measure is equally sensitive to MTTF and MTTR. The availability measure is somewhat more sensitive to MTTR, an indirect measure of the maintainability of the software.
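The two measures above can be sketched directly in Python. The failure data here are hypothetical, chosen so the arithmetic is easy to follow: a system that runs 98 hours between failures on average and takes 2 hours to repair.

```python
def mtbf(mttf, mttr):
    """Mean Time Between Failure = MTTF + MTTR (hours)."""
    return mttf + mttr

def availability(mttf, mttr):
    """Availability = MTTF / (MTTF + MTTR) x 100%."""
    return mttf / (mttf + mttr) * 100.0

# Hypothetical values in hours.
print(mtbf(98.0, 2.0))                    # prints 100.0
print(round(availability(98.0, 2.0), 1))  # prints 98.0
```

Shrinking MTTR from 2 hours to 1 hour barely changes MTBF (100 to 99) but lifts availability from 98% to roughly 99%, which is the sensitivity to maintainability noted above.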

2.11.2 Software Safety and Hazard Analysis

Software safety and hazard analysis are SQA activities that focus on the identification and assessment of potential hazards that may impact software negatively and cause the entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control the potential hazards.

A modeling and analysis process is conducted as part of safety analysis. Initially, hazards are identified and categorized by criticality and risk.

Once hazards are identified and analyzed, safety-related requirements can be specified for the software; that is, the specification can contain a list of undesirable events and the desired system responses to these events. The role of software in managing undesirable events is then indicated.

    Although software reliability and software safety are closely related to one another, it is important to


understand the subtle difference between them. Software reliability uses statistical analysis to determine the likelihood that a software failure will occur; however, the occurrence of a failure does not necessarily result in a hazard or mishap. Software safety examines the ways in which failures result in conditions that can lead to a mishap. That is, failures are not considered in a vacuum, but are evaluated in the context of an entire computer-based system.

2.12 THE SQA PLAN

The SQA plan provides a road map for instituting software quality assurance. Developed by the SQA group and the project team, the plan serves as a template for SQA activities that are instituted for each software project.

A structure for SQA plans, as defined by ANSI/IEEE Standards 730-1984 and 983-1986, is shown below.

    I. Purpose of Plan

    II. References

III. Management

    1. Organization

    2. Tasks

    3. Responsibilities

    IV. Documentation

    1. Purpose

    2. Required software engineering documents

    3. Other Documents

V. Standards, Practices, and Conventions

    1. Purpose

    2. Conventions

    VI. Reviews and Audits

    1. Purpose

    2. Review requirements


    a. Software requirements

b. Design reviews

    c. Software V & V reviews

    d. Functional Audits

e. Physical audits

    f. In-process Audits

    g. Management reviews

    VII. Test

    VIII. Problem reporting and corrective action

    IX. Tools, techniques and methodologies

    X. Code Control

    XI. Media Control

    XII. Supplier Control

XIII. Record Collection, Maintenance, and Retention

    XIV. Training

    XV. Risk Management.

2.12.1 The ISO Approach to Quality Assurance Systems

ISO 9000 describes the elements of a quality assurance system in general terms. These elements include the organizational structure, procedures, processes, and resources needed to implement quality planning, quality control, quality assurance, and quality improvement. However, ISO 9000 does not describe how an organization should implement these quality system elements.

Consequently, the challenge lies in designing and implementing a quality assurance system that meets the standard and fits the company's products, services, and culture.

2.12.2 The ISO 9001 Standard

ISO 9001 is the quality assurance standard that applies to software engineering. The standard contains


20 requirements that must be present for an effective quality assurance system. Because the ISO 9001 standard is applicable in all engineering disciplines, a special set of ISO guidelines has been developed to help interpret the standard for use in the software process.

The 20 requirements delineated by ISO 9001 address the following topics:

    1. Management responsibility

    2. Quality system

    3. Contract review

    4. Design control

    5. Document and data control

    6. Purchasing

    7. Control of customer supplied product

8. Product identification and traceability

    9. Process control

10. Inspection and testing

11. Control of inspection, measuring, and test equipment

12. Inspection and test status

13. Control of nonconforming product

14. Corrective and preventive action

15. Handling, storage, packing, preservation, and delivery

16. Control of quality records

17. Internal quality audits

18. Training

19. Servicing

20. Statistical techniques

In order for a software organization to become registered to ISO 9001, it must establish policies and procedures to address each of the requirements noted above, and then be able to demonstrate that these policies and procedures are being followed.


2.12.3 Capability Maturity Model (CMM)

The Capability Maturity Model for Software (also known as the CMM and SW-CMM) has been a model used by many organizations to identify best practices useful in helping them increase the maturity of their processes. It was developed by the software development community along with the Software Engineering Institute at Carnegie Mellon University, under the direction of the U.S. Department of Defense.

It is applicable to a software company of any size. Its five levels, as shown in Figure 2.6, provide a simple means to assess a company's software development maturity and determine the key practices it could adopt to move up to the next level of maturity.

Fig 2.6: The software capability maturity model is used to assess a software company's maturity at software development

Level 1: Initial. The software development processes at this level are ad hoc and often chaotic. There are no general practices for planning, monitoring, or controlling the process. The test process is just as ad hoc as the rest of the process.

Level 2: Repeatable. This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the project. Basic software testing disciplines, such as test plans and test cases, are used.

Level 3: Defined. Organizational, not just project-specific, thinking comes into play at this level. Common management and engineering activities are standardized and documented. These standards are adapted and approved for use in different projects. Test documents and plans are reviewed and approved before testing begins.

Level 4: Managed. At this maturity level, the organization's process is under statistical control. Product quality is specified quantitatively beforehand, and the software isn't released until that goal is met.

Level 5: Optimizing. This level represents continuous improvement over Level 4. New technologies and processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels.

    Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those,



27% were rated at Level 1, 39% at Level 2, 23% at Level 3, 6% at Level 4, and 5% at Level 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

    QUESTIONS

    1. Quality and reliability are related concepts, but are fundamentally different in a number of ways. Discuss them.

    2. Can a program be correct and still not be reliable? Explain.

    3. Can a program be correct and still not exhibit good quality? Explain.

    4. Explain in more detail, the review technique adopted in Quality Assurance.


    Chapter 3

    Program Inspections, Walkthroughsand Reviews

    3.1 LEARNING OBJECTIVES

You will learn about:

- What is static testing and its importance in Software Testing

- Guidelines to be followed during static testing

- The process involved in inspections and walkthroughs

- Various checklists to be followed while handling errors in Software Testing

- Review techniques

3.2 INTRODUCTION

The majority of the programming community worked under the assumption that programs are written solely for machine execution and are not intended to be read by people, and that the only way to test a program is by executing it on a machine. Weinberg built a convincing argument for why programs should be read by people, and indicated that this could be an effective error-detection process.

Experience has shown that "human testing" techniques are quite effective in finding errors, so much so that one or more of these should be employed in every programming project. The methods discussed in this chapter are intended to be applied between the time that the program is coded and the time that computer-based testing begins. We discuss this based on two observations:


  • 43MSIT 32 Software Quality and Testing

- It is generally recognized that the earlier errors are found, the lower are the costs of correcting the errors and the higher is the probability of correcting the errors correctly.

- Programmers seem to experience a psychological change when computer-based testing commences.

3.3 INSPECTIONS AND WALKTHROUGHS

Code inspections and walkthroughs are the two primary "human testing" methods. They involve the reading or visual inspection of a program by a team of people. Both methods involve some preparatory work by the participants. Normally this is done through a meeting, typically known as a "meeting of the minds," a conference held by the participants. The objective of the meeting is to find errors, but not to find solutions to the errors (i.e., to test, but not to debug).

What is the process involved in inspections and walkthroughs?

The process is performed by a group of people (three or four), only one of whom is the author of the program. Hence the program is essentially being tested by people other than the author, which is in consonance with the testing principle stating that an individual is usually ineffective in testing his or her own program. Inspections and walkthroughs are far more effective than desk checking (the process of a programmer reading his or her own program before testing it) because people other than the program's author are involved in the process. These processes also appear to result in lower debugging (error-correction) costs, since, when they find an error, the precise nature of the error is usually located. Also, they expose a batch of errors, thus allowing the errors to be corrected later en masse. Computer-based testing, on the other hand, normally exposes only a symptom of the error, and errors are usually detected and corrected one by one.

    Some Observations:

- Experience with these methods has found them to be effective in finding from 30% to 70% of the logic-design and coding errors in typical programs. They are not, however, effective in detecting "high-level" design errors, such as errors made in the requirements analysis process.

- Human processes find only the "easy" errors (those that would be trivial to find with computer-based testing), while the difficult, obscure, or tricky errors can only be found by computer-based testing.

- Inspections/walkthroughs and computer-based testing are complementary; error-detection efficiency will suffer if one or the other is not present.

- These processes are invaluable for testing modifications to programs, because modifying an existing program is a more error-prone process (in terms of errors per statement written) than writing a new program.


3.4 CODE INSPECTIONS

An inspection team usually consists of four people. One of the four people plays the role of a moderator. The moderator is expected to be a competent programmer, but he/she is not the author of the program and need not be acquainted with the details of the program. The duties of the moderator include:

- Distributing materials and scheduling inspections,

- Leading the session,

- Recording all errors found, and

- Ensuring that the errors are subsequently corrected.

Hence the moderator may be called a quality-control engineer. The remaining members usually consist of the program's designer and a test specialist.

The general procedure is that the moderator distributes the program's listing and design specification to the other participants well in advance of the inspection session. The participants are expected to familiarize themselves with the material prior to the session. During the inspection session, two main activities occur:

1. The programmer is requested to narrate, statement by statement, the logic of the program. During the discourse, questions are raised and pursued to determine if errors exist. Experience has shown that many of the errors discovered are actually found by the programmer, rather than the other team members, during the narration. In other words, the simple act of reading aloud one's program to an audience seems to be a remarkably effective error-detection technique.

2. The program is analyzed with respect to a checklist of historically common programming errors (such a checklist is discussed in the next section).

It is the moderator's responsibility to ensure the smooth conduct of the proceedings and that the participants focus their attention on finding errors, not correcting them.

After the session, the programmer is given a list of the errors found. The list of errors is also analyzed, categorized, and used to refine the error checklist to improve the effectiveness of future inspections.

The main benefits of this method are:

- Identifying errors early,

- The programmer usually receives feedback concerning his or her programming style and choice of algorithms and programming techniques.

- Other participants also gain in a similar way by being exposed to another programmer's errors and programming style.


- The inspection process is a way of identifying early the most error-prone sections of the program, thus allowing one to focus more attention on these sections during the computer-based testing processes.

3.5 AN ERROR CHECKLIST FOR INSPECTIONS

An important part of the inspection process is the use of a checklist to examine the program for common errors. The checklist is largely language-independent, as most of the errors can occur with any programming language.

    Data-Reference Errors

1. Is a variable referenced whose value is unset or uninitialized? This is probably the most frequent programming error; it occurs in a wide variety of circumstances.

2. For all array references, is each subscript value within the defined bounds of the corresponding dimension?

3. For all array references, does each subscript have an integer value? This is not necessarily an error in all languages, but it is a dangerous practice.

4. For all references through pointer or reference variables, is the referenced storage currently allocated? This is known as the "dangling reference" problem. It occurs in situations where the lifetime of a pointer is greater than the lifetime of the referenced storage.

5. Are there any explicit or implicit addressing problems if, on the machine being used, the units of storage allocation are smaller than the units of storage addressability?

6. If a data structure is referenced in multiple procedures or subroutines, is the structure defined identically in each procedure?

    7. When indexing into a string, are the limits of the string exceeded?
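Items 2 and 7 can be made concrete with a short sketch. Python raises an IndexError for an out-of-bounds subscript, whereas languages such as C silently read adjacent memory, which is exactly why the checklist matters; `safe_get` is a hypothetical helper illustrating a bounds guard.

```python
values = [10, 20, 30]

def safe_get(seq, index):
    """Return seq[index] only when the subscript is within bounds."""
    if 0 <= index < len(seq):
        return seq[index]
    return None  # out of bounds: the caller must handle the missing value

print(safe_get(values, 2))  # prints 30
print(safe_get(values, 3))  # prints None -- one past the last element
```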

Data-Declaration Errors

1. Have all variables been explicitly declared? A failure to do so is not necessarily an error, but it is a common source of trouble.

2. If all attributes of a variable are not explicitly stated in the declaration, are the defaults well understood?

    3. Where a variable is initialized in a declarative statement, is it properly initialized?

    4. Is each variable assigned the correct length, type, and storage class?


    5. Is the initialization of a variable consistent with its storage type?

    Computation Errors

    1. Are there any computations using variables having inconsistent (e.g. Nonarithmetic) data types?

    2. Are there any mixed mode computations?

    3. Are there any computations using variables having the same data type but different lengths?

    4. Is the target variable of an assignment smaller than the right-hand expression?

5. Is an overflow or underflow exception possible during the computation of an expression? That is, the end result may appear to have a valid value, but an intermediate result might be too big or too small for the machine's data representations.

    6. Is it possible for the divisor in a division operation to be zero?

    7. Where applicable, can the value of a variable go outside its meaningful range?

8. Are there any invalid uses of integer arithmetic, particularly division? For example, if I is an integer variable, whether the expression 2*I/2 is equal to I depends on whether I has an odd or an even value and whether the multiplication or division is performed first.
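The 2*I/2 pitfall in item 8 can be demonstrated with Python's truncating integer division (`//`), which behaves like Fortran's integer arithmetic:

```python
i = 7  # an odd value

multiply_first = (2 * i) // 2  # (14) // 2 == 7: equal to i
divide_first = 2 * (i // 2)    # 2 * 3 == 6:  not equal to i

print(multiply_first, divide_first)  # prints 7 6
```

For even values of i both orderings agree, which is why the bug can hide in testing until an odd value appears.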

    Comparison Errors

1. Are there any comparisons between variables having inconsistent data types (e.g. comparing a character string to an address)?

2. Are there any mixed-mode comparisons or comparisons between variables of different lengths? If so, ensure that the conversion rules are well understood.

3. Does each Boolean expression state what it is supposed to state? Programmers often make mistakes when writing logical expressions involving "and", "or", and "not".

4. Are the operands of a Boolean operator Boolean? Have comparison and Boolean operators been erroneously mixed together?
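A classic instance of items 3 and 4 is a Boolean expression that does not state what the author intended. The intent below is "x equals 1 or x equals 2", but the first form actually tests "x == 1, or else the constant 2", and 2 is always truthy:

```python
x = 5

wrong = (x == 1 or 2)       # evaluates to 2 (truthy) whenever x != 1
right = (x == 1 or x == 2)  # False for x == 5, as intended

print(bool(wrong), right)   # prints True False
```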

    Control-Flow Errors

1. If the program contains a multiway branch (e.g. a computed GO TO in Fortran), can the index variable ever exceed the number of branch possibilities? For example, in the Fortran statement,

    GOTO(200,300,400), I

    Will I always have the value 1,2, or 3?


2. Will every loop eventually terminate? Devise an informal proof or argument showing that each loop will terminate.

    3. Will the program, module, or subroutine eventually terminate?

4. Is it possible that, because of the conditions upon entry, a loop will never execute? If so, does this represent an oversight? For instance, for loops headed by the following statements:

    DO WHILE (NOTFOUND)

    DO I=X TO Z

    What happens if NOTFOUND is initially false or if X is greater than Z?

5. Are there any non-exhaustive decisions? For instance, if an input parameter's expected values are 1, 2, or 3, does the logic assume that it must be 3 if it is not 1 or 2? If so, is the assumption valid?
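The never-executing loop in item 4 has a direct Python analogue: like "DO I = X TO Z" with X greater than Z, `range(x, z + 1)` produces no iterations when x > z, so the loop body is silently skipped.

```python
def loop_count(x, z):
    """Count how many times the body of 'for i in range(x, z+1)' runs."""
    count = 0
    for _ in range(x, z + 1):
        count += 1
    return count

print(loop_count(1, 5))  # prints 5
print(loop_count(5, 1))  # prints 0 -- the body never runs
```

Whether zero iterations is correct or an oversight depends on the program's intent, which is precisely what the reviewer must ask.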

    Interface Errors

1. Does the number of parameters received by this module equal the number of arguments sent by each of the calling modules? Also, is the order correct?

2. Do the attributes (e.g. type and size) of each parameter match the attributes of each corresponding argument?

3. Does the number of arguments transmitted by this module to another module equal the number of parameters expected by that module?

    4. Do the attributes of each argument transmitted to another module match the attributes of thecorresponding parameter in that module?

    5. If built-in functions are invoked, are the number, attributes, and order of the arguments correct?

    6. Does the subroutine alter a parameter that is intended to be only an input value?
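A sketch of items 1 and 6 in Python terms: the argument count is checked at call time (raising TypeError), but a mutable argument can still be altered by the called routine even when the caller intends it to be input-only. `append_total` is a hypothetical helper, not part of any real API.

```python
def append_total(report):
    report.append("total")  # side effect: alters the caller's list (item 6)
    return len(report)

lines = ["a", "b"]
append_total(lines)
print(lines)                # prints ['a', 'b', 'total'] -- the "input" changed

try:
    append_total()          # wrong number of arguments (item 1)
except TypeError as err:
    print("caught:", type(err).__name__)  # prints caught: TypeError
```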

    Input/Output Errors

    1. If files are explicitly declared, are their attributes correct?

    2. Are the attributes on the OPEN statement correct?

    3. Is the size of the I/O area in storage equal to the record size?

    4. Have all files been opened before use?

    5. Are end-of-file conditions detected and handled correctly?


    6. Are there spelling or grammatical errors in any text that is printed or displayed by the program?
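Items 4 and 5 of the input/output checklist can be sketched in Python: open the file before use, and detect the end-of-file condition (`readline()` returning an empty string) rather than looping forever. A temporary file is created first so the sketch is self-contained.

```python
import os
import tempfile

# Create a small input file, so the example stands alone.
path = os.path.join(tempfile.mkdtemp(), "records.txt")
with open(path, "w") as out:        # file opened before use (item 4)
    out.write("alpha\nbeta\n")

records = []
with open(path) as src:
    while True:
        line = src.readline()
        if line == "":              # end-of-file condition reached (item 5)
            break
        records.append(line.rstrip("\n"))

print(records)  # prints ['alpha', 'beta']
```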

3.6 WALKTHROUGHS

The code walkthrough, like the inspection, is a set of procedures and error-detection techniques for group code reading. It shares much in common with the inspection process, but the procedures are slightly different, and a different error-detection technique is employed.

The walkthrough is an uninterrupted meeting of one to two hours in duration. The walkthrough team consists of three to five people who play the roles of moderator, secretary (a person who records all errors found), tester, and programmer. It is suggested to have other participants such as:

- A highly experienced programmer,

- A programming-language expert,

- A new programmer (to give a fresh, unbiased outlook),

- The person who will eventually maintain the program,

- Someone from a different project, and

- Someone from the same programming team as the programmer.

The initial procedure is identical to that of the inspection process: the participants are given the materials several days in advance to allow them to study the program. However, the procedure in the meeting is different. Rather than simply reading the program or using error checklists, the participants "play computer". The person designated as the tester comes to the meeting armed with a small set of paper test cases: representative sets of inputs (and expected outputs) for the program or module. During the meeting, each test case is mentally executed. That is, the test data are walked through the logic of the program. The state of the program (i.e. the values of the variables) is monitored on paper or a blackboard.

The test cases must be simple and few in number, because people execute programs at a rate that is very slow compared to machines. In most walkthroughs, more errors