
    Chapter 1

    Introduction to Software Testing

    1.1 LEARNING OBJECTIVES

    You will learn about:

What is software testing?

The need for software testing

Various approaches to software testing

Defect distribution

Software testing fundamentals

    1.2 INTRODUCTION

Software testing is a critical element of software quality assurance and represents the final check on the correctness of the product. The quality of the product enhances customer confidence in using the product, thereby improving the business. In other words, a good-quality product means zero defects, which derives from a better-quality testing process.

The definition of testing is not well understood. Many testers use an incorrect definition of the word testing, and this is a primary cause of poor program testing. Examples of such definitions are statements like:

    Testing is the process of demonstrating that errors are not present,


    The purpose of testing is to show that a program performs its intended functions correctly, and

    Testing is the process of establishing confidence that a program does what it is supposed to do.

    Testing the product means adding value to it which means raising the quality or reliability of the

    program. Raising the reliability of the product means finding and removing errors. Hence one should not

    test a product to show that it works; rather, one should start with the assumption that the program contains

    errors and then test the program to find as many errors as possible. Thus a more appropriate definition

    could be:

    Testing is the process of executing a program with the intention of finding errors

Here, identifying errors is more important than verifying the functioning of the program.

    What is the purpose of Testing?

To show that the software works: This is known as demonstration-oriented testing.

All functions that are used to run the software and produce results are tested. These functions are directly related to the business functions. Here, not all paths of the software system are tested. For instance, there may be an error routine which traps errors during execution; it may not be necessary to test it, as it is not relevant to the daily running of the software.

To show that the software doesn't work: This is known as destruction-oriented testing.

    In this case, all functions including non-business functions are tested to discover the possible

    errors. Even non-routine functions are also tested. Exceptions are also tested. However, once a

    single error is detected, the testing process stops.

    To minimize the risk of not working up to an acceptable level: It is known as evaluation-

    oriented.

Sometimes the entire software will not be tested, for reasons such as inadequate time. However, all functions that are critical to the customer from the business point of view will be tested. The acceptable level is a mutually agreed term between the customer and the developer.

    Why do we need to Test the application?

Defects can exist in the software, as it is developed by human beings who can make mistakes during the development of software. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and the customer's day-to-day operations do not get affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:

    Poor understanding and incomplete requirements


    Unrealistic schedule

    Fast changes in requirements

    Too many assumptions and complacency

Some of the major computer system failures listed below give ample evidence that testing is an important activity of the software quality process.

    1. In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the

    costliest unmanned accident in the history of Cape Canaveral launches. The failure was the

    latest in a string of launch failures, triggering a complete military and industry review of U.S.

    space launch programs, including software integration and testing processes. Congressional

    oversight hearings were requested.

2. On June 4, 1996, the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launch, resulting in an estimated uninsured loss of half a billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point value to a 16-bit signed integer.

    3. The computer system of a major online U.S. stock trading service failed during trading hours

    several times over a period of days in February of 1999 according to nationwide news reports.

    The problem was reportedly due to bugs in a software upgrade intended to speed online trade

    confirmations.

4. In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.

    5. Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited

    with $924,844,208.32 each in May of 1996, according to newspaper reports. The American

    Bankers Association claimed it was the largest such error in banking history. A bank spokesman

    said the programming errors were corrected and all funds were recovered.

All the above incidents only reiterate the importance of thorough testing of software applications and products before they are put into production. They clearly demonstrate that the cost of rectifying a defect during development is much less than that of rectifying a defect in production.

    1.3 WHAT IS TESTING?

Before testing any system, it is necessary to know what testing is, how testing is conducted, what the different types of testing are, and what their impact on the system is. According to IEEE:


Testing is an activity in which a system or component is executed under specified conditions; the results are observed and recorded, and an evaluation is made of some aspect of the system or component.

    Some of the points to be noted regarding testing are:

    Executing a system or component is known as dynamic testing.

Review, inspection and verification of documents (requirements, design documents, test plans, etc.), code and other work products of software is known as static testing.

    Static testing is found to be the most effective and efficient way of testing.

    Successful testing of software demands both dynamic and static testing.

    Measurements show that a defect discovered during design that costs $1 to rectify at that stage

    may cost $1,000 to repair in production. This clearly points out the advantage of early testing.

    Testing should start with small measurable units of code, gradually progress towards testing

    integrated components of the applications and finally be completed with testing at the application

    level.

    Testing verifies the system against its stated and implied requirements, i.e., is it doing what it is

    supposed to do? It should also check if the system is not doing what it is not supposed to do, if

it takes care of boundary conditions, how the system performs in a production-like environment

    and how fast and consistently the system responds when the data volumes are high.

The above points address testing and its importance. Once the definition of testing is known, it is necessary to know the approaches to testing.

    1.4 APPROACHES TO TESTING

Many approaches have been defined in the literature. The suitability of any approach depends on the type of system you are testing. Some of the approaches are given below:

Debugging-oriented

This approach identifies errors while debugging the program. There is no difference between testing and debugging.

Normally, programmers also test the program for its intended objectives while developing it. When a program does not work as specified, they start verifying the program and identifying the possible bottlenecks.


This process is known as debugging. Debugging reveals most of the errors specific to the program but not to the system.

    Demonstration-oriented

    The purpose of testing is to show that the software works

Here, most of the time, the software is demonstrated in a normal sequence/flow. All the branches may not be tested. This approach mainly satisfies the customer and adds no value to the program.

    Destruction-oriented

The purpose of testing is to show that the software doesn't work. The program is tested in all possible ways such that it breaks down at some point of execution. This may involve static and dynamic testing, testing on different platforms, or an unstructured approach.

    Evaluation-oriented

The purpose of testing is to reduce the perceived risk of the software not working up to an acceptable level. A minimum acceptance criterion is fixed for the software and the system is tested against it. The acceptance criterion may be defined by the customer or the developer.

    Prevention-oriented

Testing can be viewed as a mental discipline that results in low-risk software.

It is always better to forecast possible errors and rectify them early. A set of pre-conditions is defined by the system designers at each level, i.e., system design, coding, deployment and maintenance. These conditions must be passed before the system can be released. This ensures a minimum quality standard for the software before its release.

    In general, program testing is more properly viewed as the destructive process of trying to find the

    errors (whose presence is presumed) in a program. A successful test case is one that furthers progress in

    this direction by causing the program to fail. However, one wants to use program testing to establish some

degree of confidence that a program does what it is supposed to do and does not do what it is not supposed

    to do, but this purpose is best achieved by a diligent exploration for errors.

    1.5 IMPORTANCE OF TESTING

The testing activity cannot be eliminated from the life cycle, as the end product must be bug-free and reliable. Imagine a situation wherein a person's account balance reduces without any withdrawal, or a wrong marks card reaches a student, which may spoil the student's career. Testing must be an integral part of system development, and this process cannot be eliminated.


    Testing is important because:

    Testing is a critical element of software Quality Assurance

    Post-release removal of defects is the most expensive

A significant portion of life-cycle effort is expended on testing

In a typical service-oriented project, about 20-40% of project effort is spent on testing. It is much more in the case of human-rated software.

For example, at Microsoft the tester-to-developer ratio is 1:1, whereas at the NASA shuttle development center (SEI Level 5) the ratio is 7:1. This shows how testing is an integral part of quality assurance.

    1.6 HURDLES IN TESTING

As in many other development projects, testing is not free from hurdles. Some of the hurdles normally encountered are:

    Usually late activity in the project life cycle

    No concrete output and therefore difficult to measure the value addition

    Lack of historical data

    Recognition of importance is relatively less

    Politically damaging as you are challenging the developer

    Delivery commitments

Excessive optimism that the software always works correctly

    Based on the project and delivery schedule these hurdles have to be addressed.

    1.7 DEFECT DISTRIBUTION

In a typical project life cycle, testing is a late activity. When the product is tested, the defects may be due to many reasons: programming errors, defects in the design, or defects introduced at any other stage of the life cycle. The overall defect distribution is shown in Figure 1.1. It is observed that most of the defects originate at the requirements stage itself. This is because the requirements may not be understood properly, or they keep changing even after coding, or frequent changes in the requirements during development in turn change the design. Defects at the coding level are minimal because the coding process is well understood and well defined.


Figure 1.1: Software defect distribution (Requirements 56%, Design 27%, Other 10%, Code 7%)

    1.8 TESTING FUNDAMENTALS

Any software program that is to be of high quality must be rigorously tested against a predefined set of objectives. These objectives vary from project to project. Some projects may set only a few objectives whereas others could set a complete set of objectives. It is necessary to know the objectives before testing any software system.

    1.8.1 Testing Objectives

    Some of the objectives of testing are

    Testing is a process of executing a program with the intent of finding an error.

    A good test is one that has a high probability of finding an as yet undiscovered error.

A successful test is one that uncovers an as yet undiscovered error.

The objective is to design tests that systematically uncover different classes of errors and do so with

    a minimum amount of time and effort.

    Secondary benefits include

    Testing demonstrates that software functions appear to be working according to specification.


That performance requirements appear to have been met.

    Data collected during testing provides a good indication of software reliability and some indication

    of software quality.

Finally, even when the testing process is followed, testing cannot show the absence of defects; it can only show that software defects are present. Once the objectives are set, the testing process needs to be understood properly.

    1.8.2 Test Information Flow

Testing is a complex process and requires effort similar to that of software development. A typical test information flow is shown in Figure 1.2.

    Figure 1.2: Test information flow in a typical software test life cycle

In Figure 1.2,

    Software Configuration includes a Software Requirements Specification, a Design Specification,

    and source code.

    A test configuration includes a Test Plan and Procedures, test cases, and testing tools.

    It is difficult to predict the time to debug the code, hence it is difficult to schedule.

Once the right software is available for testing, a proper test plan and test cases are developed. Then the


software is subjected to testing with simulated test data. After the test execution, the test results are examined. The software may have defects, or it may pass without any defect. Software with defects is subjected to debugging and is tested again for correctness. This process continues until testing reports zero defects or the time allotted for testing runs out.

    1.8.3 Test Case Design

During testing, the test data and the conditions under which these data must be used are to be determined. This process is known as test case design. We need to understand the scope of the testing and design the test cases accordingly.

    Some of the points to be noted during the test case design are:

Test case design can be as difficult as the initial design.

Testing whether a component conforms to its specification is known as black box testing.

Testing whether a component conforms to its design is known as white box testing.

In test case design we cannot prove the complete correctness of the system, as not all execution paths can be tested during test execution.

    Consider the flow chart example shown in Figure 1.3 which represents multiple control structures and

    multiple paths of program execution.

    Figure 1.3: Flow chart of a typical program execution with multiple paths.


A program with a structure as illustrated above (with less than 100 lines of Pascal code) has about 100,000,000,000,000 possible combinations of paths. If we attempted to test these at the rate of 1,000 tests per second, it would take about 3,170 years to test all paths. This shows that exhaustive testing of software is not possible.
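The arithmetic behind this estimate can be checked in a few lines (the path count and test rate below are simply the figures quoted above):

    # Back-of-the-envelope check of the exhaustive-testing estimate quoted above.
    paths = 100_000_000_000_000      # ~10**14 possible execution paths
    tests_per_second = 1000          # assumed testing rate

    seconds = paths / tests_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{years:,.0f} years")     # prints roughly 3,171 years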

Testing is a mandatory activity in the software development cycle. This activity must begin with the requirements specification and should be addressed at each level of the development cycle. Before test execution begins, one should know the importance, objectives and process of testing.

    QUESTIONS

    1. What is testing? Describe briefly.

    2. What is the purpose of software testing?

3. What are the reasons for software bugs? Explain.

4. What are the approaches to testing? Explain.

    5. Describe the hurdles in testing.

    6. Explain the origin of the defect distribution in a typical software development life cycle.

    7. What are the objectives of testing? Explain.

8. With a diagram, explain test information flow in a typical software test life cycle.


    Chapter 2

    Types of Software Testing

    In God, We Trust

    All else, We Test

    - Anon

    2.1 LEARNING OBJECTIVES

    You will learn about:

    Different types of Software Testing

    Brief definition of each of Software Testing types

    2.2 INTRODUCTION

    Testing is a yardstick to ensure the appropriateness of the understanding of needs and the correctness

    of the corresponding deliverable (which could be a system, a product or a service). Understanding is an

outcome of the learning process, whereas delivery of an output is an outcome of the process of understanding. There are multiple ways of testing to ensure appropriateness/correctness. In the following sections, different types of testing are explained briefly to build knowledge in the area of software

    testing.


    2.3 TYPES OF SOFTWARE TESTING

    1. Black Box Testing

Black box testing ensures the validity of the software application without having its internal structure exposed. During the testing, the program code is not available; only the executable module is used. A detailed test plan and a proper test environment are required. The black box testing method is used in most applications to ensure that the business functions work accurately.

    2. White Box Testing

    It is a method to test the software utilizing its internal structure. The complete code structure is

available during the test. Normally, programmers use this method to find errors at the program level, which helps in debugging. It also helps in understanding the behavior of the program, code review, data flow, and code-level test case design.

    3. Grey Box Testing

It is at a higher level of abstraction and is less comprehensive compared to white box testing. It involves business case testing with parts of the internal structure exposed. It is used to test the software with some understanding of its internals and is like looking under the hood.

    4. Alpha Testing

    A person other than the developer carries out testing, in-house, and at various project milestones.

    Since a third person tests the software, a detailed test plan must be provided.

    5. Beta Testing

    This type of testing is conducted by end-users either after or in parallel with system testing. Users who

    are the ultimate owner of the software will test the system before deployment and the errors will be

reported back to the developer. For instance, Microsoft Windows 2000 was released to a set of users worldwide for a limited duration for beta testing. All these users were using earlier versions of the Microsoft operating system. The goal is to have the product tested by the end user in all respects.

    6. Unit Testing

    Unit testing uses both black box and white box methods to test a module against its design specification

immediately after its design. This is normally done by the developer or another programmer. Unit testing is important and mandatory for all software modules. To illustrate, if you develop a program which computes simple interest, it has to be tested for all different types of inputs, as sketched below. This is known as unit testing.
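As a minimal sketch of such a unit test, assuming a hypothetical simple_interest function (the function and the chosen inputs are illustrative, not taken from the text):

    import unittest

    def simple_interest(principal, rate, years):
        """Compute simple interest: principal * rate * time / 100."""
        if principal < 0 or rate < 0 or years < 0:
            raise ValueError("inputs must be non-negative")
        return principal * rate * years / 100.0

    class SimpleInterestTest(unittest.TestCase):
        def test_typical_values(self):
            self.assertAlmostEqual(simple_interest(1000, 5, 2), 100.0)

        def test_zero_period(self):
            self.assertEqual(simple_interest(1000, 5, 0), 0.0)

        def test_negative_input_rejected(self):
            with self.assertRaises(ValueError):
                simple_interest(-1000, 5, 2)

    if __name__ == "__main__":
        unittest.main()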

    7. Integration Testing

    Here an independent tester in association with developers tests the system after the integration. In


large-scale business systems, it is necessary to integrate many modules developed by different people. Once they are integrated, the whole has to be tested; this is known as integration testing.

    8. System Testing

This is pre-deployment testing to verify whether the developed system meets the requirement specifications by simulating the target operational environment. It verifies whether the system is production ready.

    9. Acceptance Testing

Here the customer carries out system testing before finally accepting the system as meeting the stated requirements. This is normally done by the customer in association with the developer. Acceptance testing is a must for all systems. The customer tests the software with real data and confirms the correctness of the software system.

    10. Configuration/ Platform Testing

Sometimes the system may have to work on different platforms like Windows 2000, Windows NT, Unix, etc. In such cases, the system has to be tested on all platforms on which it will ultimately be used. This is to ensure system functionality across:

    Different Hardware/ Software configurations,

    Multiple versions of Operating Systems,

    Different versions of browsers,

    Various plug-ins or external components

    Different local conditions.

This is mandatory testing for all applications before deployment.

    11. Localization Testing

Not all systems need be interfaced or mapped onto the English language. In some countries, English may be used along with the local language; in such cases, the system has to be developed both in English and customized to the local language. This type of testing ensures that localization does not affect the software functionality. It ensures the appropriateness of linguistic locale changes.

    12. Internationalization Testing

To ensure that the software is able to operate with local languages and data representations, including those using multi-byte character sets. Currency symbols used in the application are also tested.


    13. Usability Testing

To test the effectiveness of the user interface of the system by considering human factors; this is done by ergonomics experts. Normally, this kind of test is performed using a checklist approach. For modern software systems, this type of testing is mandatory.

    14. Performance Testing

    Applications are tested to ensure expected efficiency, which includes parameters such as:

    Response time,

    Resource utilization, and

    Reliability

Performance testing helps in acquiring the customer's confidence in the system.

    15. Load Testing

System utilization normally depends on the number of users: the more users on the system, the more pressure or load on the system. But how do we ensure that the developed system is capable of handling higher loads? Load testing tests the server and hardware by applying a load, defined by the workload pattern, to measure the required performance characteristics such as response time, throughput and resource utilization.
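A very small sketch of the idea, assuming a placeholder check_response() function standing in for one user request against the system under test (the function, user counts and figures are illustrative):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def check_response():
        """Placeholder for one simulated user request against the system under test."""
        time.sleep(0.01)               # stands in for real request latency
        return 0.01

    def load_test(concurrent_users, requests_per_user):
        """Apply a workload and report throughput and average response time."""
        start = time.time()
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            futures = [pool.submit(check_response)
                       for _ in range(concurrent_users * requests_per_user)]
            latencies = [f.result() for f in futures]
        elapsed = time.time() - start
        print(f"{len(latencies)} requests in {elapsed:.2f}s, "
              f"throughput {len(latencies) / elapsed:.1f} req/s, "
              f"avg response {sum(latencies) / len(latencies):.3f}s")

    load_test(concurrent_users=20, requests_per_user=10)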

    16. Stress Testing

It is another form of load testing. Similar to load testing in a constrained environment, it involves stressing the server beyond the expected load and verifying the behavior of the operational characteristics of the application.

    17. Benchmark Testing

    Benchmark is a standard mechanism to compare with other similar products. This type of testing is to

    compare the product with the standard benchmark available.

    18. Functional Testing

Here the system under test is subjected to verification against its functional behavior as laid down in the requirements document. Functional behavior is normally captured at the requirements gathering stage, and the same is verified during functional testing. However, the methods used for functional testing may differ, e.g., white box or black box methods.

    19. Formal Inspections

It is a structured method to identify defects in various artifacts used in the development process as well


    as in the deliverables. The artifacts include requirements document, design document, coding standards,

    program codes, reports and others. Code walk through is a standard method used in formal inspections.

Usually, a team of people performs this activity and logs defects in a formal fashion.

20. Consistency Testing

    It is a type of testing method to verify the consistency of data in an application. This is important

    because the same data stored in two different formats may cause problems. To illustrate, a name field

    may use twenty characters as field length in one place and different field length at other places may cause

    consistency problem.

    21. Regression Testing

    A mechanism to conduct repeated tests on every new version of the software by using the same test

    scripts and test plan. This would ensure that coupling effects (wherein fixing defects in one part of the

    application could potentially create defects in previously correct segments) are taken care of in the

    development cycle. This is helpful when you are releasing multiple versions of the software.
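A minimal sketch of the idea: the same test script is re-run unchanged against each new version of a function (both discount implementations and the expected values are invented for illustration):

    def discount_v1(amount):
        """Version 1: flat 10% discount."""
        return round(amount * 0.90, 2)

    def discount_v2(amount):
        """Version 2: same rule, reimplemented; must not break old behaviour."""
        return round(amount - amount * 0.10, 2)

    def run_regression_suite(discount):
        """The unchanged test script, applied to every release."""
        cases = [(100.0, 90.0), (59.99, 53.99), (0.0, 0.0)]
        for amount, expected in cases:
            assert discount(amount) == expected, (amount, discount(amount), expected)
        print(f"{discount.__name__}: all regression cases passed")

    run_regression_suite(discount_v1)
    run_regression_suite(discount_v2)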

    22. Smoke Testing

    Rigorous testing requires more time and people. Also, the complete software may not be ready for

rigorous testing. Initially, testing is carried out on the important functional behavior so that the software can be taken up for further testing. Smoke testing mainly ensures that all functions work correctly on a new build before it is subjected to rigorous testing. This is a pre-qualification test.

    23. Sanity Testing

A cursory test to ensure that all normal functions work as expected before shipping or delivery. It is equivalent to assuring the management of the company that their product is healthy and of good quality.

    24. Top Down Testing

    To test the application, through the white box method, starting with the main program and going down

unit by unit. It involves the creation of stubs if a lower-level unit is not ready. This method is useful if the system is large and developed by many people, since it is difficult to wait for all the modules to be ready before testing actually begins. This method also ensures that the requirements are tested well in advance.
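A minimal sketch of a stub used in top-down testing, assuming the lower-level tax calculation module is not yet written (all names and values are illustrative):

    def calculate_tax_stub(gross_amount):
        """Stub standing in for the unfinished lower-level tax module.
        It returns a fixed, known value so the higher-level unit can be tested."""
        return 10.0

    def generate_invoice(gross_amount, calculate_tax=calculate_tax_stub):
        """Higher-level unit under test; the real tax module is plugged in later."""
        tax = calculate_tax(gross_amount)
        return {"gross": gross_amount, "tax": tax, "total": gross_amount + tax}

    # Top-down test of the invoice logic with the stub in place.
    invoice = generate_invoice(100.0)
    assert invoice["total"] == 110.0
    print("invoice logic verified against the stub")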

    25. Bottom Up Testing

    To test the application, by white box method, starting with the lower level units and going up unit by

    unit. It involves creation of drivers when a higher-level unit is not ready. In this method, as and when the

    modules are ready, testing is conducted. Here, module level tests are carried out first and system level

    later.


    26. End to End Testing

    Test the functions of the system by involving all possible components, which includes hardware, software,

    other components, processes and people.

    27. Automated Testing

    Testing is a complex process. For large systems, manual testing is difficult and time consuming. In

order to avoid this, automated tools are used for testing. It is a method to execute a series of test cases automatically using testing tools.

    28. Heuristic Testing

    It is a method to test the software through experiential methods, involving technical insights and

    participation in critical thinking.

    29. User Testing

    Collect all the user scenarios through real user exercises and test the software. Participation by users

    is very important in this type of testing.

    30. Protection Testing

    It is a method to verify whether information is protected or not. It is a method to find the presence of

    faults, which could result in corruption of data, denial of services, unauthorized access or protection

    against attackers.

    31. Data Driven Test

Generally, a test script is developed to test the software, and data files are kept separate from the test script. Data-driven testing is the method which uses different sets of data during testing without affecting the test script.
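A minimal sketch, with the test data kept apart from the test script so that new data sets can be added without touching the script (the login_ok function and the data values are invented for illustration):

    # Data set (e.g. loaded from a CSV or JSON file); kept separate from the script.
    TEST_DATA = [
        {"username": "alice", "password": "secret12", "expected": True},
        {"username": "alice", "password": "", "expected": False},
        {"username": "", "password": "secret12", "expected": False},
    ]

    def login_ok(username, password):
        """Toy function under test: both fields must be non-empty."""
        return bool(username) and bool(password)

    # The test script never changes; only the data does.
    for row in TEST_DATA:
        result = login_ok(row["username"], row["password"])
        assert result == row["expected"], row
    print(f"{len(TEST_DATA)} data-driven cases passed")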

    32. Forced Error Testing

A method to drive the software intentionally into all error conditions and check whether the error-handling routines behave correctly. Standard software provides error trapping and possible recovery from errors. This method of testing ensures that these error traps are working correctly.
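A minimal sketch of forcing error conditions and checking that the error trap responds as intended (the parse_age function is illustrative):

    def parse_age(text):
        """Function under test with an error trap for bad input."""
        try:
            age = int(text)
        except ValueError:
            return None                    # error trap: bad input is rejected, not crashed on
        return age if 0 <= age <= 150 else None

    # Intentionally feed error conditions and verify the handling.
    assert parse_age("forty") is None      # non-numeric input trapped
    assert parse_age("-5") is None         # out-of-range input trapped
    assert parse_age("200") is None
    assert parse_age("42") == 42           # normal input still works
    print("error-handling paths behave as expected")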

    33. Database Testing

A testing method to identify database-related errors. Databases are separate from the program code and must be tested separately for integrity of data, schema and other related details.

    34. Destructive Testing

    A method to stress the application or environment until it fails to perform the functions and then

    carrying out a root-cause analysis.


    35. Preventive Testing

This type of testing is done through code reviews, formal inspections, design reviews and walkthroughs to prevent possible errors before the software is ready. It is a kind of static testing method and helps in documentation also.

    36. Boundary Testing

    To test the software with extreme input values and cause it to generate extreme output values.
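A minimal sketch of boundary-value tests for a function that accepts scores in the range 0-100 (the function and the boundaries are illustrative):

    def grade_is_valid(score):
        """Accepts scores in the closed range 0..100."""
        return 0 <= score <= 100

    # Test exactly at, just inside and just outside each boundary.
    boundary_cases = {
        -1: False,   # just below the lower boundary
        0: True,     # on the lower boundary
        1: True,     # just above the lower boundary
        99: True,    # just below the upper boundary
        100: True,   # on the upper boundary
        101: False,  # just above the upper boundary
    }
    for score, expected in boundary_cases.items():
        assert grade_is_valid(score) == expected, score
    print("all boundary cases passed")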

    37. Volume Testing

    It is a kind of load testing wherein large amounts of data are processed through the system.

    38. Compatibility Testing

It is similar to configuration testing and ensures that the software works with multiple operating systems and hardware. This type of testing is particularly important for web applications, since we don't know the type of OS used at the client end.

    39. Documentation Testing

Tests to ensure the correctness of the documentation, which includes manuals, context-sensitive help, screen images, etc. This is required when a product is sold along with manuals and other documents.

    40. On-line Help Testing

    It is to ensure the correctness of on-line help and proper links with the appropriate pages.

    41. Installation Testing

    Test the installation module for multiple versions, operating systems and different platforms. This is

    also known as deployment testing.

    42. Graphic User Interface(GUI) Testing

Most modern software uses GUI methods to develop the user interface. These user interfaces are developed using GUI standards. This method of testing ensures that the software is developed as per GUI standards and works correctly.

    43. Security Testing

Any software is subject to security threats. These may be due to viruses, or hackers may misuse the software. In order to protect the software from these threats, security testing must be conducted.


    44. Link Testing

    To verify whether the web links are pointing to the correct page and correspondingly, whether the

page exists. This is very important testing in the case of e-commerce applications.

    45. Static Testing

    This is a method of testing the program codes through walkthroughs, reviews and inspections. Here,

    the software module will not be executed during the testing.

    46. Requirement Phase Testing

    Use verification methods to test the requirements of software before the development process and

    ensure the requirements are testable.

    47. Error-Handling Testing

    Determine the ability of the software being tested to process incorrect transactions.

    48. Module Testing

    It is a process of testing individual subprograms, subroutines and procedures in a program. Also, test a

    program in small blocks as they are built.

    49. Basis Path Testing

    It is a white box method to test the software using the flow diagrams of all the executable paths.
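As a minimal illustration (the function is invented): a function with two decisions has cyclomatic complexity 3, so three test cases are enough to exercise a basis set of independent paths.

    def classify(balance, overdraft_allowed):
        """Two decisions => cyclomatic complexity 2 + 1 = 3 basis paths."""
        if balance >= 0:
            return "in credit"            # path 1
        if overdraft_allowed:
            return "overdrawn (allowed)"  # path 2
        return "overdrawn (blocked)"      # path 3

    # One test case per basis path.
    assert classify(50, False) == "in credit"
    assert classify(-20, True) == "overdrawn (allowed)"
    assert classify(-20, False) == "overdrawn (blocked)"
    print("basis paths covered")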

    50. Big Bang Testing

No incremental integration; the system is tested at the end of the development cycle by integrating all modules at once. It is fast and is done all at once.

    51. Exhaustive Testing

Test the software for every possible input and every possible outcome. In most cases, it is difficult to test all possible inputs, as it is too time consuming.

    52. Hardware Testing

Test the hardware (this is very important in embedded systems) to check if it meets the customer specifications.

    53. Software Testing

    Refers to a broad area of activity and is a method to test if the software conforms to requirements

    completely.


    54. Production Testing

It is a part of the production process and involves testing a product that is an exact image of the deliverable. It shows how the software will work when it is deployed later.

    55. Component Testing

    Test individual software components and interoperability among all components.

    56. E-Commerce Testing

    A buzzword, which is used for end-to-end testing of web based applications.

    57. Browser Testing

    Test the web based applications with browsers against standards set by World Wide Web Consortium.

    58. HTML Testing

Test the web application to identify deviations from HyperText Markup Language (HTML) standards, missing

    tags, wrong links and lack of proper syntax.

    59. Server Testing

    This type of testing in web applications tests the various servers for availability, tolerability and

    maintainability.

    60. Reliability Testing

It is a type of testing to ensure the reliability of the software for a specified period. For example, a typical web site must be available 24x7 throughout the year.

    61. Availability Testing

To ensure the availability of the application to the user by verifying user connections, checking whether the application responds to input, 24x7 availability, and the number of failed attempts to load a page.

    The above dictionary of testing types is designed to be a quick reference to help develop an appreciation,

    obtain some understanding and strengthen your testing vocabulary. Some of the testing types/definitions

    may overlap with each other and suggest nearly the same meaning. In such cases, it may be appropriate

to refer to more elaborate compendia or specialist material (as provided below) to arrive at a better understanding. (Refer to the References given at the end of the book.)

    QUESTION

    1. Explain types of software testing in detail.


    Chapter 3

    Software Quality Assurance

    3.1 LEARNING OBJECTIVES

    You will learn about:

    Basic principles about the Software Quality

    Software Quality Assurance and SQA activities

    Software Reliability

    3.2 INTRODUCTION

    Quality is defined as a characteristic or attribute of something. As an attribute of an item, quality

refers to measurable characteristics: things we are able to compare to known standards such as length,

    color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is

    more challenging to characterize than physical objects.

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.

Quality of conformance is the degree to which the design specification is followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.

    Software Quality Assurance Encompasses

    A quality management approach for software development


    Effective software engineering technology

    Formal technical reviews during the software development

A multi-tiered testing strategy

    Control of software documentation and changes made to it

    A procedure to assure compliance with software development standards

    Measurement and reporting mechanisms

    Figure 3.1: Achieving Software Quality

Software quality is ensured by following the processes and methods shown in Figure 3.1. Development of software must be carried out through software engineering methods; ad-hoc methods do not yield proper results. While developing, one should follow standards and procedures, for example standards related to screen design, documentation, file design and so on. Every software development and maintenance activity must be associated with Software Configuration Management (SCM) and standard Software Quality Assurance (SQA). Normally, an SQA analyst audits the software periodically during development. The quality process must involve technical reviews, and standard measurements must be followed. Testing is an integral part of the software quality process.

    3.3 QUALITY CONCEPTS

    What are the quality concepts?



    Quality

    Quality control

    Quality assurance

    Cost of quality

The American Heritage Dictionary defines quality as a characteristic or attribute of something. As an attribute of an item, quality refers to measurable characteristics: things which we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software is largely an intellectual entity, more challenging to characterize than a physical object.

Nevertheless, measures of a program's characteristics do exist. These properties include:

1. Cyclomatic complexity

2. Cohesion

    3. Number of function points

    4. Lines of code

    When we examine an item based on its measurable characteristics, two kinds of quality may be

    encountered:

    Quality of design

    Quality of conformance

    3.4 QUALITY OF DESIGN

    Quality of design refers to the characteristics that designers specify for an item. The grade of materials,

tolerances, and performance specifications all contribute to quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to specifications. Proper standards and documentation are used to raise quality standards. Adequate checks must be carried out during the development life cycle of the product.

    3.5 QUALITY OF CONFORMANCE

    Quality of conformance is the degree to which the design specifications are followed during

    manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance.


    In software development, quality of design encompasses requirements, specifications and design of

the system. The requirements must be gathered formally, must not have any ambiguity, and must be complete in all respects. Specifications must be elaborated and defined formally. Design must follow standard methods and processes.

    Quality of conformance is an issue focussed primarily on implementation. If the implementation follows

    the design and the resulting system meets its requirements and performance goals, conformance quality is

    high.

    3.6 QUALITY CONTROL (QC)

    QC is the series of inspections, reviews, and tests used throughout the development life cycle to

    ensure that each work product meets the requirements placed upon it. QC includes a feedback loop to the

    process that created the work product. The combination of measurement and feedback allows us to tune

the process when the work products created fail to meet their specification. This approach views QC as part of the manufacturing process. QC activities may be fully automated, manual, or a combination of automated tools and human interaction. An essential concept of QC is that all work products have defined and measurable specifications to which we may compare the outputs of each process; the feedback loop is essential to minimize the defects produced. Normally, a QC team reviews the development and gives guidelines periodically to improve the software quality. They review the code, standards, and process followed at every stage.

    3.7 QUALITY ASSURANCE (QA)

QA consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through QA identify problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues.

    3.7.1 Cost of Quality

Cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost-of-quality studies are conducted to provide a baseline for the current cost of quality, to identify opportunities for reducing the cost of quality, and to provide a normalized basis of comparison. The basis of normalization is usually money. Once we have normalized quality costs on a money basis, we


have the necessary data to evaluate where the opportunities lie to improve our process further. Furthermore, we can evaluate the effect of changes in money-based terms.

Quality costs may be divided into costs associated with:

    Prevention

    Appraisal

    Failure

    Prevention costs include:

    Quality Planning

    Formal Technical Review on systems

    Testing Equipments used for running the software

    Training imparted for persons involved in development

Appraisal costs include activities to gain insight into product condition the first time through each process.

Examples of appraisal costs include:

In-process and inter-process inspection

    Equipment calibration and maintenance

    Testing

Failure costs are costs that would disappear if no defects appeared before shipping a product to customers. Failure costs may be subdivided into internal and external failure costs.

    Internal failure costs are costs incurred when we detect an error in our product prior to shipment.

Internal failure costs include:

    Rework

    Repair

    Failure Mode Analyses

External failure costs are the costs associated with defects found after the product has been shipped to the customer.

    Examples of external failure costs are


    1. Complaint Resolution

    2. Product return and replacement

    3. Helpline support

    4. Warranty work

    3.8 SOFTWARE QUALITY ASSURANCE(SQA)

    How do we define Quality?

Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

    The above definition emphasizes three important points:

    1. Software requirements are the foundation from which quality is measured. Lack of conformance

    to requirements is lack of quality.

    2. Specified standards define a set of development criteria that guide the manner in which software

    is engineered. If the criteria are not followed, lack of quality will almost surely result.

3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire for good maintainability). If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is questionable.

    3.8.1 Background Issues

    QA is an essential activity for any business that produces products to be used by others.

The SQA group serves as the customer's in-house representative. That is, the people who perform SQA must look at the software from the customer's point of view.

    The SQA group attempts to answer the questions asked below and hence ensure the quality of software.

    The questions are:

    1. Has software development been conducted according to pre-established standards?

    2. Have technical disciplines properly performed their role as part of the SQA activity?


    SQA Activities

The SQA plan is interpreted as shown in Figure 3.2.

SQA comprises a variety of tasks associated with two different constituencies:

    1. The software engineers who do technical work like

Performing quality assurance by applying technical methods

Conducting formal technical reviews

Performing well-planned software testing.

    2. SQA group that has responsibility for

    Quality assurance planning oversight

    Record keeping

    Analysis and reporting.

QA activities performed by the SE team and the SQA group are governed by the following plan:

    Evaluation to be performed.

    Audits and reviews to be performed.

Standards that are applicable to the project.

    Procedures for error reporting and tracking

    Documents to be produced by the SQA group

Amount of feedback provided to the software project team.

    Figure 3.2: Software Quality Assurance Plan

    What are the activities performed by SQA and SE team?

    Prepare SQA Plan for a project



Participate in the development of the project's software description

Review software engineering activities to verify compliance with the defined software process

Audit designated software work products to verify compliance with those defined as part of the software process

Ensure that deviations in software work and work products are documented and handled according to a documented procedure

Record any noncompliance and report it to senior management.

    3.8.2 Software Reviews

    Software reviews are a filter for the software engineering process. That is, reviews are applied at

    various points during software development and serve to uncover errors that can then be removed.

    Software reviews serve to purify the software work products that occur as a result of analysis, design,

    and coding.

    Any review is a way of using the diversity of a group of people to:

    1. Point out needed improvements in the product of a single person or a team;

2. Confirm those parts of a product in which improvement is either not desired or not needed;

3. Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, in order to make technical work more manageable.

There are many different types of reviews that can be conducted as part of software engineering, such as:

1. An informal meeting in which technical problems are discussed.

    2. A formal presentation of software design to an audience of customers, management, and technical

    staff is a form of review.

    3. A formal technical review is the most effective filter from a quality assurance standpoint.

    Conducted by software engineers for software engineers, the FTR is an effective means for

    improving software quality.

3.8.3 Cost Impact of Software Defects

To illustrate the cost impact of early error detection, we consider a series of relative costs that are based

    on actual cost data collected for large software projects.

    Assume that an error uncovered during design will cost 1.0 monetary unit to correct. Relative to this

    cost, the same error uncovered just before testing commences will cost 6.5 units; during testing 15 units;


    and after release, between 60 and 100 units.
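A short calculation with the relative costs above, applied to a hypothetical batch of ten defects, makes the point concrete (the batch size and the 80-unit midpoint are assumptions for illustration):

    # Relative cost per defect, as quoted above (design error = 1.0 unit).
    relative_cost = {"design": 1.0, "before testing": 6.5,
                     "during testing": 15.0, "after release": 80.0}  # 80 = midpoint of 60-100

    defects = 10  # hypothetical batch of defects
    for stage, unit_cost in relative_cost.items():
        print(f"{defects} defects corrected {stage}: {defects * unit_cost:.0f} units")
    # design: 10 units; before testing: 65; during testing: 150; after release: ~800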

    3.8.4 Defect Amplification and Removal

    A defect amplification model can be used to illustrate the generation and detection of errors during

    preliminary design, detail design, and coding steps of the software engineering process. The model is

    illustrated schematically in Figure 3.3.

A box represents a software development step. During the step, errors may be inadvertently generated. A review may fail to uncover newly generated errors and errors from previous steps, resulting in some number of errors that are passed through. In some cases, errors passed through from previous steps are amplified (amplification factor, x) by current work. The box subdivisions represent each of these characteristics along with the percent efficiency for detecting errors, a function of the thoroughness of the review.

    Figure 3.3: Defect Amplification Model.

Figure 3.4 illustrates a hypothetical example of defect amplification for a software development process in which no reviews are conducted. As shown in the figure, each test step is assumed to uncover and

    correct fifty percent of all incoming errors without introducing new errors (an optimistic assumption). Ten

    preliminary design errors are amplified to 94 errors before testing commences. Twelve latent defects are

    released to the field. Figure 3.5 considers the same conditions except that design and code reviews are

    conducted as part of each development step. In this case, ten initial preliminary design errors are amplified

    to 24 errors before testing commences.

    Only three latent defects exist. By recalling the relative cost associated with the discovery and

    correction of errors, overall costs (with and without review for our hypothetical example) can be established.
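The tail end of this arithmetic can be checked directly from the numbers quoted above, assuming the three test steps (integration, system and validation) shown in the figures each remove fifty percent of incoming errors:

    def latent_after_testing(errors_before_testing, test_steps=3, detection=0.5):
        """Apply the optimistic 50%-per-test-step assumption quoted in the text."""
        errors = errors_before_testing
        for _ in range(test_steps):
            errors *= (1.0 - detection)
        return errors

    print(round(latent_after_testing(94)))  # no reviews: 94 -> 47 -> 23.5 -> ~12 latent defects
    print(round(latent_after_testing(24)))  # with reviews: 24 -> 12 -> 6 -> 3 latent defects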

    To conduct reviews a developer must expend time and effort and the development organization must

spend money. However, the results of the preceding example leave little doubt that we have encountered a "pay now or pay much more later" syndrome.

[Figure 3.3 shows a development-step box with: errors from the previous step (errors passed through; amplified errors 1:x), newly generated errors, the percent efficiency of error detection, and errors passed to the next step.]


    Formal technical reviews (for design and other technical activities) provide a demonstrable cost benefit

    and they should be conducted.

Figure 3.4: Defect Amplification - No Reviews

[Figure 3.4 traces errors through preliminary design, detail design, code/unit test, integration test, system test and validation test with no reviews: ten preliminary design errors are amplified to 94 errors before testing commences, and twelve latent defects are released to the field.]


Figure 3.5: Defect Amplification - Reviews Conducted

[Figure 3.5 traces the same steps with design and code reviews conducted at each step: the ten preliminary design errors are amplified to only 24 errors before testing commences, and three latent defects remain.]


    3.9 FORMAL TECHNICAL REVIEWS (FTR)

    FTR is a SQA activity that is performed by software engineers. The main objectives of the FTR are:

To uncover errors in function, logic, or implementation in any representation of the software

To verify that the software under review meets its requirements

To ensure that the software has been represented according to predefined standards

To achieve software that is developed in a uniform manner

    To make projects more manageable

    In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches

    to software analysis, design, and implementation. The FTR also serves to promote backup and continuity

because a number of people become familiar with parts of the software that they may not have otherwise seen.

The FTR is actually a class of reviews that includes walkthroughs, inspections, round-robin reviews and other small-group technical assessments of software. Each FTR is conducted as a meeting and will be successful only if it is properly planned, controlled and attended. In the paragraphs that follow, guidelines similar to those for a walkthrough are presented as a representative formal technical review.

    3.9.1 The Review Meeting

The focus of the FTR is on a work product - a component of the software. At the end of the review, all attendees of the FTR must decide whether to:

1. Accept the work product without further modification

2. Reject the work product due to severe errors (once corrected, another review must be performed)

3. Accept the work product provisionally (minor errors have been encountered and must be corrected, but no additional review will be required)

Once the decision is made, all FTR attendees complete a sign-off, indicating their participation in the review and their concurrence with the review team's findings.

    3.9.2 Review Reporting and Record Keeping

The review summary report is typically a single-page form. It becomes part of the project historical


record and may be distributed to the project leader and other interested parties. The review issues list serves two purposes:

    1. To identify problem areas within the product

2. To serve as an action-item checklist that guides the producer as corrections are made. An issues list is normally attached to the summary report.

It is important to establish a follow-up procedure to ensure that items on the issues list have been properly corrected. Unless this is done, it is possible that issues raised can fall between the cracks. One approach is to assign responsibility for follow-up to the review leader. A more formal approach assigns responsibility to an independent SQA group.

    3.9.3 Review Guidelines

    The following represents a minimum set of guidelines for formal technical reviews:

Review the product, not the producer

Set an agenda and maintain it

Limit the debate

Enunciate problem areas, but don't attempt to solve every problem noted

Take written notes

Limit the number of participants and insist upon advance preparation

Develop a checklist for each work product that is likely to be reviewed

Allocate resources and a time schedule for FTRs

Conduct meaningful training for all reviewers

Review your early reviews

    3.10 STATISTICAL QUALITY ASSURANCE

    Statistical quality assurance reflects a growing trend throughout industry to become more quantitative

about quality. For software, statistical quality assurance implies the following steps:

    Information about software defects is collected and categorized


    An attempt is made to trace each defect to its underlying cause

Using the Pareto principle (80% of the defects can be traced to 20% of all possible causes), isolate

    the 20% (the vital few)

    Once the vital few causes have been identified, move to correct the problems that have caused

    the defects.

    This relatively simple concept represents an important step toward the creation of an adaptive software

    engineering process in which changes are made to improve those elements of the process that introduce

    errors. To illustrate the process, assume that a software development organization collects information on

    defects for a period of one year. Some errors are uncovered as software is being developed. Other

    defects are encountered after the software has been released to its end user.

Although hundreds of errors are uncovered, all can be traced to one of the following causes:

    Incomplete or erroneous specification (IES)

    Misinterpretation of customer communication (MCC)

    Intentional deviation from specification (IDS)

    Violation of programming standards ( VPS )

    Error in data representation (EDR)

    Inconsistent module interface (IMI)

    Error in design logic (EDL)

    Incomplete or erroneous testing (IET)

    Inaccurate or incomplete documentation (IID)

    Error in programming language translation of design (PLT)

    Ambiguous or inconsistent human-computer interface (HCI)

    Miscellaneous (MIS)

To apply statistical SQA, a table such as Table 1 (shown below) is built. Once the vital few causes are determined, the software development organization can begin corrective action. After analysis, design, coding, testing, and release, the following data are gathered:


Ei = the total number of errors uncovered during the ith step in the software engineering process

Si = the number of serious errors

Mi = the number of moderate errors

Ti = the number of minor (trivial) errors

PS = size of the product (LOC, design statements, pages of documentation) at the ith step

Ws, Wm, Wt = weighting factors for serious, moderate, and trivial errors, where the recommended values are Ws = 10, Wm = 3, Wt = 1.

The weighting factors for each phase should become larger as development progresses. This rewards an organization that finds errors early.

At each step in the software engineering process, a phase index, PIi, is computed as:

PIi = Ws (Si/Ei) + Wm (Mi/Ei) + Wt (Ti/Ei)

The error index EI is computed by calculating the cumulative effect of each PIi, weighting errors encountered later in the software engineering process more heavily than those encountered earlier:

EI = Σ (i x PIi) / PS = (PI1 + 2PI2 + 3PI3 + ... + iPIi) / PS

The error index can be used in conjunction with the information collected in Table 1 to develop an overall indication of improvement in software quality.
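As a concrete illustration, the phase index and error index formulas can be coded directly. The C sketch below uses hypothetical per-phase error counts and an assumed product size; only the weighting factors (Ws = 10, Wm = 3, Wt = 1) come from the text above.

    #include <stdio.h>

    #define PHASES 4

    /* Sketch of the phase index (PIi) and error index (EI) computation.
     * The per-phase error counts and the product size PS are hypothetical;
     * the weights Ws, Wm, Wt follow the recommended values. */
    int main(void) {
        double S[PHASES] = { 12.0,  9.0, 20.0,  7.0 };  /* serious errors per phase  */
        double M[PHASES] = { 40.0, 35.0, 50.0, 22.0 };  /* moderate errors per phase */
        double T[PHASES] = { 60.0, 48.0, 70.0, 30.0 };  /* minor errors per phase    */

        const double Ws = 10.0, Wm = 3.0, Wt = 1.0;     /* recommended weights       */
        const double PS = 25000.0;                      /* product size in LOC       */

        double EI = 0.0;
        for (int i = 0; i < PHASES; i++) {
            double E  = S[i] + M[i] + T[i];                          /* Ei           */
            double PI = Ws * (S[i] / E) + Wm * (M[i] / E) + Wt * (T[i] / E);
            EI += (i + 1) * PI;          /* later phases are weighted more heavily   */
            printf("Phase %d: PI = %.3f\n", i + 1, PI);
        }
        EI /= PS;
        printf("Error index EI = %.6f\n", EI);
        return 0;
    }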


Table 1: Data Collection for Statistical SQA

Error    Total         Serious       Moderate      Minor
         No.     %     No.     %     No.     %     No.     %
IES      205     22    34      27    68      18    103     24
MCC      156     17    12      9     68      18    76      17
IDS      48      5     1       1     24      6     23      5
VPS      25      3     0       0     15      4     10      2
EDR      130     14    26      20    68      18    36      8
IMI      58      6     9       7     18      5     31      7
EDL      45      5     14      11    12      3     19      4
IET      95      10    12      9     35      9     48      11
IID      36      4     2       2     20      5     14      3
PLT      60      6     15      12    19      5     26      6
HCI      28      3     3       2     17      4     8       2
MIS      56      6     0       0     15      4     41      9
TOTALS   942     100   128     100   379     100   435     100

3.11 SOFTWARE RELIABILITY

Software reliability, unlike many other quality factors, can be measured directly and estimated using historical and developmental data. Software reliability is defined in statistical terms as the probability of failure-free operation of a computer program in a specified environment for a specified time. To illustrate, program X is estimated to have a reliability of 0.96 over 8 elapsed processing hours. In other words, if program X were to be executed 100 times, each run requiring 8 hours of elapsed processing time, it would be likely to operate correctly 96 out of 100 times.

    3.11.1 Measures of Reliability and Availability

In a computer-based system, a simple measure of reliability is mean time between failure (MTBF), where

MTBF = MTTF + MTTR

The acronyms MTTF and MTTR stand for Mean Time To Failure and Mean Time To Repair, respectively.

In addition to a reliability measure, we must develop a measure of availability. Software availability is the probability that a program is operating according to requirements at a given point in time and is defined as:

Availability = MTTF / (MTTF + MTTR) x 100%

The MTBF reliability measure is equally sensitive to MTTF and MTTR. The availability measure is somewhat more sensitive to MTTR, an indirect measure of the maintainability of the software.
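A minimal sketch of these two measures, assuming hypothetical MTTF and MTTR values expressed in hours:

    #include <stdio.h>

    /* MTBF and availability for assumed values of MTTF and MTTR (in hours). */
    int main(void) {
        const double mttf = 950.0;   /* mean time to failure (assumed) */
        const double mttr = 50.0;    /* mean time to repair  (assumed) */

        double mtbf = mttf + mttr;
        double availability = mttf / (mttf + mttr) * 100.0;

        printf("MTBF         = %.1f hours\n", mtbf);
        printf("Availability = %.1f%%\n", availability);
        return 0;
    }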

    3.11.2 Software Safety and Hazard Analysis

Software safety and hazard analysis are SQA activities that focus on the identification and assessment of potential hazards that may impact software negatively and cause the entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control potential hazards.

A modeling and analysis process is conducted as part of software safety analysis. Initially, hazards are identified and

    categorized by criticality and risk.

Once hazards are identified and analyzed, safety-related requirements can be specified for the software; i.e., the specification can contain a list of undesirable events and the desired system responses to these events. The role of software in managing undesirable events is then indicated.

    Although software reliability and software safety are closely related to one another, it is important to

understand the subtle difference between them. Software reliability uses statistical analysis to determine the likelihood that a software failure will occur; however, the occurrence of a failure does not necessarily result in a hazard or mishap. Software safety examines the ways in which failures result in conditions that


can lead to a mishap. That is, failures are not considered in a vacuum, but are evaluated in the context of an entire computer-based system.

    3.12 THE SQA PLAN

    The SQA plan provides a road map for instituting software quality assurance. Developed by the SQA

group and the project team, the plan serves as a template for SQA activities that are instituted for each software project.

The structure of an SQA plan, as defined by ANSI/IEEE Standards 730-1984 and 983-1986, is shown below.

    I. Purpose of Plan

    II. References

III. Management

    1. Organization

    2. Tasks

    3. Responsibilities

    IV. Documentation

    1. Purpose

    2. Required software engineering documents

    3. Other Documents

    V. Standards, Practices and conventions

    1. Purpose

    2. Conventions

    VI. Reviews and Audits

    1. Purpose

    2. Review requirements

    a. Software requirements

b. Design reviews


    c. Software V & V reviews

    d. Functional Audits

    e. Physical Audit

    f. In-process Audits

    g. Management reviews

    VII. Test

    VIII. Problem reporting and corrective action

    IX. Tools, techniques and methodologies

    X. Code Control

    XI. Media Control

    XII. Supplier Control

    XIII. Record Collection, Maintenance, and retention

    XIV. Training

    XV. Risk Management.

    3.12.1 The ISO Approach to Quality Assurance System

ISO 9000 describes the elements of a quality assurance system in general terms. These elements include the

    organizational structure, procedures, processes, and resources needed to implement quality planning, quality

    control, quality assurance, and quality improvement. However, ISO 9000 does not describe how an

    organization should implement these quality system elements.

    Consequently, the challenge lies in designing and implementing a quality assurance system that meets

the standard and fits the company's products, services, and culture.

    3.12.2 The ISO 9001 Standard

    ISO 9001 is the quality assurance standard that applies to software engineering. The standard contains

    20 requirements that must be present for an effective quality assurance system. Because the ISO 9001

standard is applicable in all engineering disciplines, a special set of ISO guidelines has been developed to

    help interpret the standard for use in the software process.


The 20 requirements delineated by ISO 9001 address the following topics:

    1. Management responsibility

    2. Quality system

    3. Contract review

    4. Design control

    5. Document and data control

    6. Purchasing

    7. Control of customer supplied product

8. Product identification and traceability

    9. Process control

10. Inspection and testing

11. Control of inspection, measuring, and test equipment

12. Inspection and test status

13. Control of nonconforming product

14. Corrective and preventive action

15. Handling, storage, packing, preservation, and delivery

16. Control of quality records

17. Internal quality audits

18. Training

19. Servicing

20. Statistical techniques

    In order for a software organization to become registered to ISO 9001, it must establish policies and

procedures to address each of the requirements noted above and then be able to demonstrate that these

    policies and procedures are being followed.


    QUESTIONS

    1. With diagram explain how software quality is achieved.

    2. What are quality concepts? Explain.

    3. Describe the terms prevention costs, appraisal cost and failure cost.

    4. What are the background issues of QA? Explain briefly.

    5. Explain the activities of SQA plan.

    6. What do you mean by software reviews? Describe briefly.

    7. What is defect amplification model? Explain in detail.

    8. What is FTR? Explain its objectives.

    9. Explain types of FTR.

    10. Explain the six well defined phases of Fagan Inspection.

    11. Briefly explain the following:

i) Small vs. Large Team Reviews.

ii) Single vs. Multiple Session Reviews.

iii) Non-Systematic & Systematic DDT Reviews.

iv) Single Site vs. Multiple Site Reviews.

v) Synchronous vs. Asynchronous Reviews.

vi) Manual vs. Computer-Supported Reviews.

    12. What are recent economic analyses? Explain.

13. How are the psychological aspects of FTR categorized? Explain in detail.

    14. What is the purpose of Review meeting? Explain.

    15. Explain the Review guidelines in detail.

    16. Describe the Statistical Quality Assurance. Mention the different causes for tracking error.

    17. Explain the measures of Reliability and Availability.

    18. Explain CMM model in detail.


    Chapter 4

Program Inspections, Walkthroughs, and Reviews

    4.1 LEARNING OBJECTIVES

    You will learn about:

    What are static testing and its importance in Software Testing?

    Guidelines to be followed during static testing

    Process involved in inspection and walkthroughs

    Various check lists to be followed while handling errors in Software Testing

    Review techniques

    4.2 INTRODUCTION

The majority of the programming community once worked under the assumption that programs are written solely for machine execution and are not intended to be read by people, and that the only way to test a program is by executing it on a machine. Weinberg built a convincing argument for why programs should be read by people, and indicated that this could be an effective error-detection process.

Experience has shown that human testing techniques are quite effective in finding errors, so much so that one or more of these should be employed in every programming project. The methods discussed in this chapter are intended to be applied between the time that the program is coded and the time that computer-based testing begins. This recommendation is based on two observations:

    It is generally recognized that the earlier the errors are found, the lower are the costs of correcting


    the errors and the higher is the probability of correcting the errors correctly.

    Programmers seem to experience a psychological change when computer-based testing

    commences.

    4.3 INSPECTIONS AND WALKTHROUGHS

Code inspections and walkthroughs are the two primary human testing methods. Both involve the reading or visual inspection of a program by a team of people, and both involve some preparatory work by the participants. The work is normally done in a meeting, typically known as a "meeting of the minds", a conference held by the participants. The objective of the meeting is to find errors, but not to find solutions to the errors (i.e. to test, but not to debug).

    What is the process involved in inspection and walkthroughs?

    The process is performed by a group of people (three or four), only one of whom is the author of the

    program. Hence the program is essentially being tested by people other than the author, which is in

    consonance with the testing principle stating that an individual is usually ineffective in testing his or her

own program. Inspections and walkthroughs are far more effective compared to desk checking (the process of a programmer reading his/her own program before testing it) because people other than the program's

    author are involved in the process. These processes also appear to result in lower debugging (error

    correction) costs, since, when they find an error, the precise nature of the error is usually located. Also,

they expose a batch of errors, thus allowing the errors to be corrected later en masse. Computer-based

    testing, on the other hand, normally exposes only a symptom of the error and errors are usually detected

    and corrected one by one.

    Some Observations

    Experience with these methods has found them to be effective in finding from 30% to 70% of

    the logic design and coding errors in typical programs. They are not, however, effective in

    detecting high-level design errors, such as errors made in the requirements analysis process.

    Human processes find only the easy errors (those that would be trivial to find with computer-

    based testing) and the difficult, obscure, or tricky errors can only be found by computer-based

testing. Examples include a stack overflow or a memory leak in a module of a program.

    Inspections/walkthroughs and computer-based testing are complementary; error-detection

    efficiency will suffer if one or the other is not present.

These processes are invaluable for testing modifications to programs, because modifying an existing program is a more error-prone process (in terms of errors per statement written) than writing a new program.


    4.4 CODE INSPECTIONS

    An inspection team usually consists of four people. One of the four people is an expert. The expert is

    expected to be a competent programmer, but he/she is not the programmer of the code under inspection

    and need not be acquainted with the details of the program. The duties of the expert include:

Distributing materials for, and scheduling, inspection sessions,

    Leading the session,

    Recording all errors found, and

    Ensuring that the errors are subsequently corrected.

Hence the expert may be called a quality-control engineer. The remaining members usually consist of the program's designer and a test specialist.

The general procedure is that the expert distributes the program's listing and design specification to

    other participants well in advance of the inspection session. The participants are expected to familiarize

themselves with the material prior to the session. During the inspection session, two main activities occur:

    1. The programmer is requested to narrate, statement by statement, the logic of the program.

    During the discourse, questions are raised and pursued to determine if errors exist. Experience

    has shown that many of the errors discovered are actually found by the programmer, rather than

    the other team members, during the narration. In other words, the simple act of reading aloud

one's program to an audience seems to be a remarkably effective error-detection technique.

2. The program is analyzed with respect to a checklist of historically common programming errors (such a checklist is discussed in the next section).

It is the test lead's responsibility to ensure the smooth conduct of the proceedings and that the participants

    focus their attention on finding errors, not correcting them.

After the session, the programmer is given a list of the errors found. The list of errors is also analyzed, categorized, and used to refine the error checklist to improve the effectiveness of future inspections.

    The main benefits of this method are:

Identifying errors early in the program,

The programmer usually receives feedback concerning his or her programming style and choice

    of algorithms and programming techniques.

Other participants also gain in a similar way by being exposed to another programmer's errors and programming style.


    The inspection process is a way of identifying early the most error-prone sections of the program,

thus allowing one to focus more attention on these sections during the computer-based testing process.

    4.5 AN ERROR CHECKLIST FOR INSPECTIONS

    An important part of the inspection process is the use of a checklist to examine the program for

    common errors. The checklist is largely language independent as most of the errors can occur with any

    programming language.

    Checklist for Data-Reference Errors

    1. Is a variable referenced whose value is unset or uninitialized? This is probably the most frequent

    programming error; it occurs in a wide variety of circumstances.

    2. For all array references, is each subscript value within the defined bounds of the corresponding

    dimension?

    3. For all array references, does each subscript have an integer value? This is not necessarily an

    error in all languages, but it is a dangerous practice.

4. Is there a dangling-reference problem?

    Note:

The dangling-reference problem occurs in situations where the lifetime of a pointer is greater than the lifetime of the referenced storage.

    5. Are there any explicit or implicit addressing problems if on the machine being used, the units of

    storage allocation are smaller than the units of storage addressability?

    6. If a data structure is referenced in multiple procedures or subroutines, is the structure defined

    identically in each procedure?

    7. When indexing into a string, are the limits of the string exceeded?
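To make some of these items concrete, the following deliberately faulty C fragment (hypothetical code, not from any real system) exhibits checklist items 1, 2, and 4; a reviewer working through the list above should flag each commented line.

    #include <stdio.h>

    /* Deliberately faulty code: each commented line violates a checklist item. */
    static int *dangling(void) {
        int local = 42;
        return &local;                /* item 4: 'local' ceases to exist on return  */
    }

    int main(void) {
        int a[10];
        int sum;                      /* item 1: 'sum' is never initialized         */

        for (int i = 0; i <= 10; i++) /* item 2: i == 10 is outside a[0..9]         */
            sum += a[i];              /* (and the elements of a[] are uninitialized)*/

        int *p = dangling();
        printf("%d %d\n", sum, *p);   /* both values are garbage / undefined        */
        return 0;
    }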

Checklist for Data-Declaration Errors

    1. Have all variables been explicitly declared? A failure to do so is not necessarily an error, but it is

    a common source of trouble.

    2. If all attributes of a variable are not explicitly stated in the declaration, are the defaults well

    understood?

    3. Where a variable is initialized in a declarative statement, is it properly initialized?


    4. Is each variable assigned the correct length, type, and storage class?

    5. Is the initialization of a variable consistent with its storage type?

    Checklist for Computation Errors

1. Are there any computations using variables having inconsistent (e.g., non-arithmetic) data types?

    2. Are there any mixed mode computations?

    3. Are there any computations using variables having the same data type but different lengths?

    (Example: short int X + long int Y = short int Z)

4. Is the target variable of an assignment smaller than the right-hand expression? (Example: assigning the product of a float and a double variable to an int)

5. Is an overflow or underflow exception possible during the computation of an expression? That is, the end result may appear to have a valid value, but an intermediate result might be too big or too small for the machine's data representations.

    6. Is it possible for the divisor in a division operation to be zero?

    7. Where applicable, can the value of a variable go outside its meaningful range?

    8. Are there any invalid uses of integer arithmetic, particularly division? For example, if I is an

    integer variable, whether the expression 2*I/2 is equal to I depends on whether I has an odd or

    an even value and whether the multiplication or division is performed first.
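The following hypothetical C fragment illustrates several of the computation errors above; the commented lines are the ones a reviewer should question.

    #include <limits.h>
    #include <stdio.h>

    /* Deliberately questionable computations, keyed to the checklist items. */
    int main(void) {
        int i = 7;
        printf("%d\n", 2 * i / 2);    /* item 8: evaluates left to right, gives 7   */
        printf("%d\n", 2 * (i / 2));  /* but integer division first gives 6         */

        float f = 3.7f;
        int   n = f * 2.0;            /* items 2 and 4: mixed mode, truncated to 7  */

        int big = INT_MAX;
        int over = big + 1;           /* item 5: signed overflow, undefined result  */

        int divisor = 0;
        if (divisor != 0)             /* item 6: always guard a division by zero    */
            printf("%d\n", 100 / divisor);

        printf("%d %d\n", n, over);
        return 0;
    }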

Checklist for Comparison Errors

    1. Are there any comparisons between variables having inconsistent data types (e.g. comparing a

    character string to an address)?

    2. Are there any mixed-mode comparisons or comparisons between variables of different lengths?

    If so, ensure that the conversion rules are well understood.

    3. Does each Boolean expression state what it is supposed to state? Programmers often make

    mistakes when writing logical expressions involving and, or, and not.

    4. Are the operands of a Boolean operator Boolean? Have comparison and Boolean operators

    been erroneously mixed together?
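A short, hypothetical C fragment showing two comparison errors of the kind listed above:

    #include <stdio.h>

    /* Comparison pitfalls keyed to the checklist items above. */
    int main(void) {
        int x = 42;

        if (0 <= x <= 10)                  /* item 4: parses as (0 <= x) <= 10,     */
            printf("always printed\n");    /* which is 1 <= 10 and is always true   */

        if (0 <= x && x <= 10)             /* the intended range check              */
            printf("printed only for 0..10\n");

        unsigned int u = 0;
        int s = -1;
        if (s < u)                         /* item 2: mixed signed/unsigned; s is   */
            printf("never printed\n");     /* converted to a very large unsigned    */

        return 0;
    }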

    Checklist for Control-Flow Errors

    1. If the program contains a multi way branch (e.g. a computed GO TO in FORTRAN), can the

    index variable ever exceed the number of branch possibilities? For example, in the Fortran

    statement,


    GOTO (200,300,400), I

    Will I always have the value 1, 2, or 3?

    Here, the index value must not go beyond 3 as there are only 3 branches.

    2. Will every loop, function or program module eventually terminate? Devise an informal proof or

argument showing that each loop will terminate.

    3. Is it possible that, because of the conditions upon entry, a loop will never execute? If so, does

    this represent an oversight? For instance, for loops headed by the following statements:

    DO WHILE (NOTFOUND)

    DO I=X TO Z

    What happens if NOTFOUND is initially false or if X is greater than Z?

5. Are there any non-exhaustive decisions? For instance, if an input parameter's expected values are 1, 2, or 3, does the logic assume that it must be 3 if it is not 1 or 2? If so, is the assumption

    valid?
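The FORTRAN examples above translate directly to C-like languages. The hypothetical fragment below shows the same suspect patterns: a non-exhaustive multiway branch, a loop that may never execute, and a non-exhaustive decision.

    #include <stdio.h>

    /* Control-flow pitfalls keyed to the checklist items above. */
    int main(void) {
        int i = 4;                      /* item 1: only cases 1..3 are handled        */
        switch (i) {                    /* (the C counterpart of a computed GO TO)    */
            case 1: printf("200\n"); break;
            case 2: printf("300\n"); break;
            case 3: printf("400\n"); break;
            /* no default: i == 4 is silently ignored - is that intended?            */
        }

        int x = 10, z = 5;
        for (int k = x; k <= z; k++)    /* item 3: body never executes when x > z     */
            printf("loop body\n");

        int code = 7;                   /* item 5: non-exhaustive decision            */
        if (code == 1)      printf("one\n");
        else if (code == 2) printf("two\n");
        else                printf("assumed to be three\n");  /* but code is 7        */

        return 0;
    }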

    Checklist for Interface Errors

    1. Does the number of parameters received by this module equal the number of arguments sent by

    each of the calling modules? Also, is the order correct?

    2. Do the attributes (e.g. type and size) of each parameter match the attributes of each corresponding

    argument?

3. In procedural languages, does the number of arguments transmitted by this module to another module equal the number of parameters expected by that module?

    4. Do the attributes of each argument transmitted to another module match the attributes of the

    corresponding parameter in that module?

    5. If built-in functions are invoked, are the number, attributes, and order of the arguments correct?

    6. Does the subroutine alter a parameter that is intended to be only an input value?
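A hypothetical C fragment illustrating interface errors of the kind listed above; the function credit and its arguments are invented for the example.

    #include <stdio.h>

    /* Interface mismatches keyed to the checklist items above. */
    static void credit(double *balance, double amount) {   /* expects two arguments */
        *balance += amount;
    }

    int main(void) {
        double balance = 100.0;

        credit(&balance, 25);        /* items 1-2: the int literal 25 is converted,  */
                                     /* but is an integer amount what was intended?  */

        printf("balance: %d\n");     /* item 5: the format string promises an int    */
                                     /* argument that is never supplied (undefined)  */

        printf("balance: %.2f\n", balance);   /* the correct call */
        return 0;
    }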

Checklist for Input/Output Errors (with respect to file handling)

    1. If files are explicitly declared, are their attributes correct?

    2. Are the attributes on the file OPEN statement correct?

    3. Is the size of the I/O area in storage equal to the record size?


    4. Have all files been opened before use?

    5. Are end-of-file conditions detected and handled correctly?

    6. Are there spelling or grammatical errors in any text that is printed or displayed by the program?
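By contrast with the faulty fragments above, the following hypothetical C sketch shows the file-handling checks being satisfied: the file (the name records.txt is assumed) is opened and verified before use, and end-of-file is distinguished from a read error.

    #include <stdio.h>

    /* File handling that satisfies the I/O checklist; "records.txt" is assumed. */
    int main(void) {
        FILE *fp = fopen("records.txt", "r");         /* item 4: open before use     */
        if (fp == NULL) {                             /* and verify the open worked  */
            perror("records.txt");
            return 1;
        }

        char line[256];                               /* item 3: buffer sized for    */
        while (fgets(line, sizeof line, fp) != NULL)  /* the expected record length  */
            fputs(line, stdout);

        if (ferror(fp))                               /* item 5: distinguish end-of- */
            perror("read error");                     /* file from a read error      */

        fclose(fp);
        return 0;
    }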

The above error checklists help all code reviewers to assess the quality of the code written by the programmer. They also help in reducing post-deployment work and improve the documentation.

    4.6 WALKTHROUGHS

    The code walkthrough, like the inspection, is a set of procedures and error-detection techniques for

    group code reading. It shares much in common with the inspection process, but the procedures are slightly

    different, and a different error-detection technique is employed.

    The walkthrough is an uninterrupted meeting of one to two hours in duration. The walkthrough team

consists of three to five people who play the roles of moderator, secretary (a person who records all errors found), tester, and programmer. It is suggested to include other participants such as:

    A highly experienced programmer,

    A programming-language expert,

    A new programmer (to give a fresh, unbiased outlook)

    The person who will eventually maintain the program,

    Someone from different project and

    Someone from the same programming team as the programmer.

    The initial procedure is identical to that of the inspection process: the participants are given the materials

    several days in advance to allow them to study the program. However, the procedure in the meeting is

    different. Rather than simply reading the program or using error checklists, the participants play computer.

The person designated as the tester comes to the meeting armed with a small set of paper test cases -

    representative sets of inputs (and expected outputs) for the program or module. During the meeting, each

    test case is mentally executed. That is, the test data are walked through the logic of the program. The

state of the program (i.e. the values of the variables) is monitored on paper or a blackboard. The test cases must be simple and few in number, because people execute programs at a rate that is

    very slow compared to machines. In most walkthroughs, more errors are found during the process of

    questioning the programmer than are found directly by the test cases themselves.


    QUESTIONS

1. Are code reviews relevant to software testing? Explain the process involved in a typical code review.

    2. Explain the need for inspection and list the different types of code reviews.

    3. Consider a program and perform detailed reviews and list the review findings in detail.

    4. Explain the difference between code walk through and Inspection.

5. What are the duties of the moderator during a code inspection?

6. Explain the different types of errors checked for during an inspection.


    Chapter 5

    Test Case Design

    5.1 LEARNING OBJECTIVES

    You will learn about:

    Dynamic testing of Software Applications

    White box and black box testing

    Various techniques used in white box testing

    Various techniques used in black box testing

    Static program analysis

    Automation of testing process

    5.2 INTRODUCTION

    Software can be tested either by running the programs and verifying each step of its execution against

    expected results or by statically examining the code or the document against its stated requirement or

objective. In general, software testing can be divided into two categories, viz. static and dynamic testing. Static testing is non-execution-based testing and is carried out mostly by human effort. In static testing, we test the design, code, or any other document through