Introduction to Software Testing and Quality Assurance
This course provides a highly practical bottom-up introduction to software testing and quality assurance. Each organization performs testing and quality assurance activities in different ways. This course provides a broad view of both testing and quality assurance so that participants will become aware of the various activities that contribute to managing the quality of a software product.
Contents
CHAPTER 1: Software Testing and Software Development Life Cycle
1.1 Introduction
1.2 Software Development Lifecycle (SDLC)
1.3 Various SDLC Models
CHAPTER 2: Software Quality Testing
2.1 Introduction
2.2 What is Software Quality?
2.3 Standards and Guidelines
CHAPTER 3: Software Test Life Cycle and Verification & Validation
3.1 Software Testing Life Cycle (STLC)
3.2 Verification and Validation Model
CHAPTER 4A: Validation Activity – Low-Level Testing
CHAPTER 4B: Validation Activity – High-Level Testing
4B.1 Objectives
4B.2 Steps of Function Testing
4B.3 Summary
CHAPTER 5: Types of System Testing
5.1 Introduction
5.2 Usability Testing
5.3 Performance Testing
5.4 Load Testing
5.5 Stress Testing
5.6 Security Testing
5.7 Configuration Testing
5.8 Compatibility Testing
5.9 Installation Testing
5.10 Recovery Testing
5.11 Availability Testing
5.12 Volume Testing
5.13 Accessibility Testing
CHAPTER 6: Acceptance Testing
6.1 Introduction
6.2 Objective
6.3 Acceptance Testing
CHAPTER 7: Black Box Testing
7.1 Introduction
7.2 Objectives
7.3 Advantages of Black Box Testing
7.4 Disadvantages of Black Box Testing
7.5 Black Box Testing Methods
CHAPTER 8: Testing Types
8.1 Introduction
8.2 Mutation Testing
8.3 Progressive Testing
8.4 Regression Testing
8.5 Retesting
8.6 Localization Testing
8.7 Internationalization Testing
CHAPTER 9: White Box Testing
9.1 Introduction
9.2 Objective
9.3 Advantages of WBT
9.4 Disadvantages of WBT
9.5 Techniques for White Box Testing
9.6 Cyclomatic Complexity
9.7 How to Calculate Statement, Branch/Decision, and Path Coverage for ISTQB Exam Purposes
CHAPTER 10: Test Cases
10.1 Introduction
10.2 Objective
10.3 Structure of Test Cases
10.4 Test Case Template
CHAPTER 11: Test Planning
11.1 Introduction
11.2 Objectives
11.3 IEEE Standard for Software Test Documentation
CHAPTER 12: Configuration Management
12.1 Introduction
12.2 Objective
12.3 Configuration Management Tools
CHAPTER 13: Defect Tracking and Defect Life Cycle
13.1 Introduction
13.2 Objectives
13.3 Why Do Faults Occur?
13.4 What Is a Bug Life Cycle?
13.5 Bug Status Description
13.6 Severity: How Serious Is the Defect?
13.7 Priority: How to Decide Priority?
13.8 Defect Tracking
13.9 Defect Prevention
13.10 Defect Report
CHAPTER 14: Risk Analysis
14.1 Introduction
14.2 Objectives
14.3 Risk Identification
14.4 Risk Strategy
14.5 Risk Assessment
14.6 Risk Mitigation
14.7 Risk Reporting
14.8 What Is Schedule Risk?
DEFINITIONS
CHAPTER 1: Software Testing and Software Development Life Cycle
1.1 Introduction
Software testing is a crucial phase of the product development lifecycle. It is the process of finding flaws in a given product or application. The purpose of testing is not to prove that a product functions properly under all conditions, but to find the specific conditions under which it does not. The objectives of software testing are to:
• validate and verify, automatically or manually, that a software program/product meets the technical and business requirements.
• evaluate the product for its correctness, completeness, reusability, and reliability.
• ensure that the behavior of the product is as per the end-user’s expectations.
• identify defects in a product as early as possible in the development lifecycle, thereby reducing the cost of fixing them later.
• deliver defect-free and high-quality products.
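As a minimal sketch of the first two objectives, a unit test compares a program's actual behavior with its specified behavior. The `apply_discount` function and its requirement here are hypothetical, invented only for illustration:

```python
def apply_discount(price, percent):
    """Hypothetical function under test.

    Requirement (assumed): reduce `price` by `percent`, never below zero.
    """
    return max(price - price * percent / 100.0, 0.0)

def test_apply_discount():
    # Verify the normal case against the specification.
    assert apply_discount(200.0, 10) == 180.0
    # Validate the boundary case: a 100% discount yields exactly zero.
    assert apply_discount(50.0, 100) == 0.0
    # A defect caught here, early in the lifecycle, is far cheaper to
    # fix than the same defect found after release.
    assert apply_discount(10.0, 150) == 0.0

test_apply_discount()
print("all checks passed")
```

A failing assertion here is a defect report in miniature: it names the input, the expected result, and the observed deviation.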
1.2 Software Development Lifecycle (SDLC)
The software development lifecycle (SDLC) is a conceptual model that describes the sequence of activities followed by designers and developers during product development. The SDLC consists of multiple stages or phases, where the input to each phase is the output of the previous one. Different SDLC models are followed in the IT industry, involving various stages from creating through testing a software product. The commonly followed SDLC model is categorized into five stages: analysis, design, implementation, verification, and maintenance.
[Figure: Software Development Life Cycle (SDLC) — Analysis → Design → Implementation → Verification → Maintenance]
1.3 Various SDLC Models
Various types of SDLC models exist to streamline the development process. Each one has its pros and cons. It is up to the development team to choose the appropriate model for its project. In this section, we will learn about four SDLC models:
1. Waterfall model
2. Incremental model
3. Spiral model
4. Agile methodology
Let's learn about each of these models in brief.
1.3.1 Waterfall Model
The Waterfall model is a classic software lifecycle model that was introduced early and widely followed in software engineering. It takes a linear and sequential approach to software development: the phases are cascaded so that one can move to a phase only when its preceding phase is finished, and once a phase is complete, one cannot move back to it. The different phases in the Waterfall model are as follows:
• Project Planning This phase defines the objectives, strategies, and supporting methods required to achieve the project goal.
• Requirement Analysis and Definition The main objective of this phase is to prepare a document, called Software Requirement Specification (SRS), that clearly specifies all the requirements of the customer. SRS is the primary output of this phase.
• Systems Design This phase includes designing of screen layouts, business rules, process diagrams, pseudo code, and other documentation to describe the features and operations of a software product in detail.
• Implementation In this phase, the actual coding starts. After the preparation of the system design documents, programmers develop the software program/application based on the specifications. In this phase, the source code, executables, and databases are created.
• Integration and Testing In this phase, the code of all modules of the product is integrated into a complete system and tested to check that all modules/units coordinate with each other and the system as a whole behaves as per the specifications.
• Acceptance, Installation, Deployment This phase includes:
o Getting the software accepted
o Installing the software at the customer site
Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier and analysis of the test results to determine whether the system satisfies its acceptance criteria. When the test results satisfy the acceptance criteria, the user accepts the software.
• Maintenance This phase covers all modifications and corrections to the product after it is installed and operational. This step, the least glamorous and perhaps the most important of all in the SDLC, goes on seemingly forever.
[Figure: Waterfall Model — Requirement → Design → Implementation/Coding → Testing → Maintenance]
Let’s quickly go through the advantages and disadvantages of the Waterfall model.
Advantages
• It is simple and easy to use.
• Because of the rigidity of the model, each phase has specific deliverables and a review process; this makes it easy to manage.
• Phases are processed and completed one at a time.
• It is more suitable for smaller projects where the requirements are very well understood.
Disadvantages
• Adjusting scope during the lifecycle can kill a project.
• No working software is produced until late in the lifecycle.
• Risk and uncertainty are high.
• It is a poor model for complex and object-oriented projects.
• It is a poor model for long and ongoing projects.
1.3.2 Incremental Model
The Incremental model is an advanced approach to the Waterfall model; it is essentially a series of waterfall cycles. In this model, a core set of functions is identified in the first cycle and is built and deployed as the first release. The software development cycle is then repeated, with each release adding more functionality, until all the requirements are met. Each development cycle acts as the maintenance phase for the previous software release. New requirements discovered during the development of a given cycle are implemented in subsequent cycles. In this model, a subsequent cycle may begin before the previous cycle is complete.
[Figure: Incremental Life Cycle Model — repeated cycles of Requirements → Design → Implementation & Unit Testing → Integration & System Testing → Operation]
Let’s go through the advantages and disadvantages of the Incremental model.
Advantages
• Allows requirement modification and the addition of new requirements
• Easier to test and debug in smaller cycles
• Easier to manage risks, since risks are identified and handled during each iteration
• Every iteration in the incremental model is an easily managed milestone
Disadvantages
• The majority of requirements must be known in the beginning
• Cost and schedule overruns may result in an unfinished system
1.3.3 Spiral Model
This model is similar to the Incremental model, but with an additional phase of risk analysis. The Spiral model has four phases: Planning, Risk Analysis, Engineering, and Evaluation. Let's see each of these phases in brief.
1. Planning: determines the objectives, alternatives, and constraints on the new iteration
2. Risk analysis: evaluates alternatives and identifies and resolves risk issues
3. Engineering: develops and verifies the product for this iteration
4. Evaluation: evaluates the output of the project to date before the project continues to the next spiral; plans the next iteration
[Figure: Spiral Model — four quadrants traversed repeatedly: Planning (requirement gathering, design), Risk Analysis (prototyping), Engineering (coding, testing), and Evaluation (customer evaluation), with progress and project cost growing along the spiral]
Spiral Model
Let’s go through the advantages and disadvantages of the spiral model.
Advantages
• Useful for complex and large projects
• High amount of risk analysis
• Software is produced early in the software lifecycle because of the prototype
Disadvantages
• It is an expensive model.
• Time spent on planning, risk analysis, and prototyping can be excessive.
• Risk analysis requires highly skilled expertise.
• The project's success is highly dependent on the risk analysis phase.
• It doesn't work well for smaller projects.
1.3.4 Agile Methodology
Agile methodology breaks development tasks into smaller iterations with minimal planning. Working software is delivered frequently, say, on a weekly, fortnightly, or monthly basis. Iterations are short time frames and typically last from one to four weeks. In each iteration, a team works through a full software development cycle. This minimizes the overall risk and allows the project to adapt to changes quickly.
The team involved in agile methodology is usually cross-functional and self-organizing regardless of any existing corporate hierarchy or the corporate roles of team members. Team members take responsibility for tasks that deliver the functionality and decide individually on how to meet an iteration's requirements.
In most agile implementation, a formal, daily, face-to-face meeting is conducted among team members. In a brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This face-to-face meeting helps in exposing the problem areas.
Let’s go through the advantages and disadvantages of the agile methodology.
Advantages
• Involves an adaptive team which is able to respond to the changing requirements
• Face-to-face communication and continuous input from the customer representative leave no space for guesswork
• The end result is high-quality software delivered in the least possible time, to a satisfied customer
Disadvantages
• It becomes difficult to assess the effort required at the beginning of the SDLC for large, complex software deliverables.
• The project can easily get off track if the customer is not clear about the final outcome they want.
CHAPTER 2: Software Quality Testing
2.1 Introduction
“Quality” is defined as the degree to which a component, system, or process meets the specified requirements and/or user/customer needs and expectations. Quality could also mean:
• a product or service free of defects
• fitness for use
• conformance to requirements
In this chapter, you will learn about software quality testing and terminologies.
2.2 What is Software Quality?
In the software engineering industry, software quality refers to:
• Software functional quality: reflects how well a product complies/conforms to a given design, based on the functional requirements or specifications
• Software structural quality: refers to how a product meets non-functional requirements such as robustness or maintainability
Software quality is broadly classified as Quality Assurance and Quality Control.
[Figure: Categories of Software Quality — Quality divides into Quality Assurance and Quality Control]
2.2.1 Quality Assurance (QA)
Quality Assurance aims at defect prevention in processes. It monitors and evaluates various aspects of projects and ensures that the engineering processes and standards are strictly adhered to throughout the software lifecycle. Audits are a key technique used to perform product evaluation and process monitoring.
Key Points
• Identifies weaknesses in processes and improves them
• QA is the responsibility of the entire team
• Helps defect prevention
• Helps establish processes for defect prevention
• Sets up measurement programs to evaluate processes
2.2.2 Quality Control (QC)
Quality Control focuses on testing products to remove defects and to ensure that the product meets its performance requirements.
Key Points
• Involves comparison of product quality with applicable standards, and actions taken when non-conformance is detected
• Implements processes for defect removal
• QC is the responsibility of the tester
• Detects and reports defects found in testing
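The first QC key point — comparing product quality with applicable standards and acting on non-conformance — can be sketched as a simple check. The metric names and threshold values below are hypothetical, not taken from any real standard:

```python
# Hypothetical quality thresholds, standing in for a project's standards.
STANDARDS = {
    "defect_density_per_kloc": 2.0,   # allow at most 2 defects per 1000 lines
    "test_pass_rate_percent": 95.0,   # require at least a 95% pass rate
}

def check_conformance(measured):
    """Compare measured quality with the standards; report non-conformance."""
    reports = []
    if measured["defect_density_per_kloc"] > STANDARDS["defect_density_per_kloc"]:
        reports.append("defect density exceeds standard")
    if measured["test_pass_rate_percent"] < STANDARDS["test_pass_rate_percent"]:
        reports.append("test pass rate below standard")
    return reports

# One metric violates its threshold, so QC raises exactly one report.
print(check_conformance(
    {"defect_density_per_kloc": 3.1, "test_pass_rate_percent": 97.0}
))
```

Each report would then trigger the corrective action the standard prescribes, which is the "actions taken when non-conformance is detected" part of QC.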
2.3 Standards and Guidelines
Standards are rules or processes set to be followed in an organization for developing a product, whereas guidelines act as suggestions for carrying out a particular activity or task. The Software Engineering Institute (SEI), established in 1984 at Carnegie Mellon University, aims at rapid improvement of the quality of operational software in the mission-critical computer systems of the United States Department of Defense.
Based on the type of industry, various industry standards exist. The standards used for software industries are as follows:
1. Capability Maturity Model (CMM)
2. International Organization for Standardization (ISO)
3. IEEE
4. ANSI
Let’s learn about each of these standards in detail.
2.3.1 Capability Maturity Model (CMM)
The Capability Maturity Model (CMM) is a process improvement approach. It helps organizations improve their performance and can be used to guide process improvement across a project, a division, or an entire organization. CMM describes five evolutionary stages through which an organization's management of its processes matures. These stages are:
1. Level 1: Initial In level 1 organizations, processes are disorganized and chaotic. Success usually depends on individual efforts and the heroics of people. These organizations often exceed the budgets and schedules of their projects.
Key Points
• Tendency to overcommit
• Processes are skipped in times of crisis
• Past successes are not repeated
• Success depends on having quality people
2. Level 2: Repeatable In level 2 organizations, project tracking, requirements management, realistic planning, and configuration management processes are established and put in place.
Key Points
• Software development successes are repeatable
• Process discipline helps ensure that existing practices are followed even during tight delivery timelines
• Basic project management processes are established to track cost, schedule, and functionality
3. Level 3: Defined Standard software development and maintenance processes are established and improved over time. These standard processes bring consistency across the organization.
4. Level 4: Managed Using metrics and measurements, management can effectively track productivity, development efforts, processes, and products. In level 4 organizations, quality is consistently high.
5. Level 5: Optimizing In level 5 organizations, processes are constantly improved, and new, innovative processes are introduced to better serve the organization's particular needs.
2.3.2 International Organization for Standardization (ISO)
The ISO 9001:2000 standard specifies requirements for a quality management system. This ISO standard covers documentation, design, development, production, testing, installation, servicing, and other processes.
2.3.3 Institute of Electrical and Electronics Engineers (IEEE)
IEEE has created standards related to software quality and testing. These include the IEEE Standard for Software Test Documentation (IEEE/ANSI Standard 829), the IEEE Standard for Software Unit Testing (IEEE/ANSI Standard 1008), the IEEE Standard for Software Quality Assurance Plans (IEEE/ANSI Standard 730), and others.
2.3.4 American National Standards Institute (ANSI)
ANSI is the primary industrial standards body in the U.S. It publishes some software-related standards in conjunction with the IEEE and the ASQ (American Society for Quality).
CHAPTER 3: Software Test Life Cycle and Verification & Validation
3.1 Software Testing Life Cycle (STLC)
Every company follows its own software testing lifecycle (STLC) to suit its requirements, culture, and available resources. The STLC comprises the various stages of testing through which a software product goes.
The STLC comprises the following sequential phases:
1. Planning
2. Analysis
3. Design
4. Construction and verification
5. Testing cycles
6. Final testing and implementation
7. Post implementation
Let's learn about each of these stages.
1. Planning In the Planning stage, the Project Manager decides what needs to be tested, what the appropriate budget would be, and so on. Proper planning at this stage helps to reduce the risk of low-quality software. The major tasks involved in the planning stage are:
• Defining scope of testing • Identifying approaches • Defining risks • Identifying resources • Defining schedule
2. Analysis Once the test plan is created, the next phase is the Analysis phase. This phase involves:
• Identifying the types of testing to be carried out at various SDLC stages
• Determining whether testing should be performed manually or automatically
• Creating test case formats, test cases, and a functional validation matrix based on the business requirements
• Identifying which test cases to automate
• Reviewing documentation
In the analysis phase, frequent meetings are held between testing teams, project managers, and development teams to check the progress of project and ensure the completeness of the test plan created in the planning phase.
3. Design In the design phase, the following activities are carried out:
• Test plans and test cases are revised.
• Functional validation matrix is revised and finalized.
• Risk assessment criteria are developed.
• Test cases for automation are identified and scripts are written for them.
• Test data is prepared.
• Standards for unit testing and pass/fail criteria are defined.
• Testing schedule is revised and finalized.
• Test environment is prepared.
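The test cases, test data, and pass/fail criteria produced in the design phase are often captured in a simple structured form. The field names below are an illustrative minimal template, not a standard:

```python
# Hypothetical minimal test-case record, as might come out of the design
# phase; the field names are illustrative only.
test_case = {
    "id": "TC-101",
    "title": "Login with valid credentials",
    "preconditions": ["user account exists"],
    "steps": ["open login page", "enter valid credentials", "submit"],
    "test_data": {"username": "alice", "password": "secret"},
    "expected_result": "user is redirected to the dashboard",
    "automate": True,   # flagged for scripting during this phase
}

def passed(actual_result, case):
    """Pass/fail criterion: the actual result must match the expected result."""
    return actual_result == case["expected_result"]

print(passed("user is redirected to the dashboard", test_case))  # -> True
print(passed("error page shown", test_case))                     # -> False
```

Writing the pass/fail criterion down explicitly, as `passed` does, is what lets later phases decide mechanically whether a run counts as a defect.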
4. Construction and Verification This phase aims at completing all test plans and test cases, and scripting the automated test cases. In this phase, test cases are run and defects are reported as and when found.
5. Testing Cycles In this phase, test cycles are repeated until the test cases execute without errors or a predefined condition is reached. The activities involved in this phase are:
• Running test cases
• Reporting defects
• Revising test cases
• Adding new test cases
• Fixing defects
• Retesting
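The activities above form a loop — run, report, fix, retest — repeated until every test case passes or a predefined stopping condition (here, a maximum number of cycles) is reached. A sketch, with placeholder callbacks standing in for the real test harness and defect workflow:

```python
def run_test_cycles(test_cases, fix_defect, max_cycles=5):
    """Repeat test cycles until all cases pass or max_cycles is reached.

    `test_cases` maps a test-case id to a function returning True on pass;
    `fix_defect` reports a failing case and attempts a fix. Both are
    placeholders for a real harness and defect-tracking workflow.
    """
    failures = []
    for cycle in range(1, max_cycles + 1):
        failures = [tc for tc, run in test_cases.items() if not run()]
        if not failures:
            return cycle, []           # all cases passed: the cycle ends
        for tc in failures:
            fix_defect(tc)             # report the defect, then retest
    return max_cycles, failures        # predefined condition reached

# Toy example: one case fails until its (simulated) defect is fixed.
state = {"fixed": False}
cases = {"TC-1": lambda: True, "TC-2": lambda: state["fixed"]}
cycles, remaining = run_test_cycles(cases, lambda tc: state.update(fixed=True))
print(cycles, remaining)  # -> 2 []
```

The `max_cycles` cap models the "predefined condition" in the text: in practice the exit criterion might instead be a deadline or an acceptable number of known open defects.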
6. Final Testing and Implementation In this phase, the following activities are carried out:
• Executing stress and performance test cases
• Completing or updating documentation for testing
• Providing and completing different matrices for testing
In this phase, acceptance, load, and recovery testing is also conducted and the application is verified under production conditions.
7. Post implementation In this phase, the following activities are carried out:
• Evaluating the testing process and documenting lessons learnt from the testing process
• Creating plans to improve the process. Recording of new errors and enhancements is an ongoing process.
• Cleaning up the test environment
• Restoring test machines to baselines
3.2 Verification and Validation Model
Verification and Validation are the two main processes involved in software testing. Let us learn about these processes in detail.
Software quality, correctness, and completeness can be identified by performing adequate testing. In order to make sure that the product development is as per requirements, we
have to initiate testing right from the beginning. The picture below depicts the Verification and Validation model, which shows that the software testing process is carried out in parallel with the development process. The left part of the “V” represents verification, carried out as the product is specified and designed, while the right part represents validation, which is carried out after a part of the product is developed. The V-V model can also be called the Software Testing Life Cycle (STLC). In STLC, each development activity is followed by a testing activity.
Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.
V-V Model
3.2.1 Different Stages of SDLC with STLC
Stage 1: Requirement Gathering
Development Activity
In this phase, the requirements of the proposed system are collected by analyzing the needs of the users. However, in many situations, not enough care is taken in establishing correct requirements up front. It is necessary that requirements are established in a systematic way to ensure their accuracy and completeness, but this is not always an easy task.
Testing Activity
To make requirements more accurate and complete, testing needs to be performed right from the requirements phase, in which testers review the requirements. For example, the requirements should not contain ambiguous words like "may" or "may not"; they should be clear and concise.
Stage 2: Functional Specifications
Development Activity
The Functional Specification document describes the features of the software product. It describes the product’s behavior as seen by an external observer, and contains the technical information and data needed for the design. The Functional Specification defines what the functionality will be.
Testing Activity
Testing is performed in order to ensure that the functional specifications are accurate.
Stage 3: Design
Development Activity
During the design process, the software specifications are transformed into design models that describe the details of the data structures, system architecture, interface, and components. At the end of the design process, a design specifications document is produced. This document is composed of the design models that describe the data, architecture, interfaces, and components.
Testing Activity
Each design product is reviewed for quality before moving to the next phase of software development. In order to evaluate the quality of a design (representation), the criteria for a good design should be established. Such a design should exhibit good architectural structure, be modular, and contain distinct representations of data, architecture, interfaces, and components.
The software design process encourages good design through the application of fundamental design principles, systematic methodology, and review.
Stage 4: Code
Development Activity
Using the design document, code is constructed. Programs are written using a conventional programming language or an application generator. Different high-level programming languages such as C, C++, VB, Java, etc. are used for coding. With respect to the type of application, the right programming language is chosen. Programming tools such as compilers, interpreters, and debuggers are used to generate the code.
Testing Activity
Code review is done to find and fix defects that were overlooked in the initial development phase and to improve the overall quality of the code. Online software repositories, like anonymous CVS, allow groups of individuals to collaboratively review code to improve software quality and security. Code review is a process of verifying the source code. Code reviews can often find and remove common security vulnerabilities such as format string attacks, race conditions, and buffer overflows, thereby improving software security.
Stage 5: Building Software
Development Activity
This phase involves building different software units (components) and integrating them one by one to build a single software system.
Testing Activity
a. Unit Testing A unit test is a validation procedure to check the working of the smallest module of source code. Once the modules are ready, individual components should be tested to verify that the units function as per the specifications. Test cases are written for all functions and methods to identify and fix problems faster. For testing units in isolation, dummy objects such as stubs and drivers are written. This helps in testing each unit separately even when all the code is not yet written. Usually a developer uses this method to review his or her own code.
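A minimal sketch of a unit test that isolates a unit with a stub; all class and method names here are hypothetical, not from the text:

```python
import unittest

class OrderCalculator:
    """Hypothetical unit under test: totals an order using a tax service."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, subtotal):
        return round(subtotal + self.tax_service.tax_for(subtotal), 2)

class TaxServiceStub:
    """Stub standing in for a tax service that is not yet implemented."""
    def tax_for(self, subtotal):
        return subtotal * 0.10  # fixed 10% rate keeps the test predictable

class OrderCalculatorTest(unittest.TestCase):
    def test_total_includes_tax(self):
        calc = OrderCalculator(TaxServiceStub())
        self.assertEqual(calc.total(100.0), 110.0)

# run the unit test programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderCalculatorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the stub returns a fixed answer, the unit can be verified before the real tax service exists.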
b. Integration Testing Integration testing follows unit testing and is done before system testing. Individual software modules are combined and tested as a group under integration testing. The purpose is to validate functionality, performance and reliability requirements. Test cases are constructed to test all components and their interfaces and confirm whether they are working correctly. It also includes inter-process communication and shared data areas.
Stage 6: Building System
Development Activity
After the software has been built, we have the whole system, considering all the non-functional requirements such as installation procedures, configuration, etc.
Testing Activity
a. System Testing Testing the complete integrated system to confirm that it complies with the requirement specifications is called System Testing. Under System Testing, the entire system is tested against its Functional Requirement Specification (FRS) and/or System
Requirement Specification (SRS), as well as the non-functional requirements. System Testing is crucial. Testers need to test from the users' perspective and need to be more creative.
b. Acceptance Testing Also called User Acceptance Testing (UAT), this is one of the final stages of a project and will often occur before a customer accepts a new system. It is a process to obtain confirmation from the owner of the object under test, through trial or review, that the modification or addition meets mutually agreed-upon requirements. Users of the system will perform these tests according to their User Requirements Specification, to which the system should conform. There are two stages of acceptance testing, Alpha and Beta.
Now the whole product has been developed, the required level of quality has been achieved, and the software is ready to be released for customers.
3.2.2 Verification
Verification ensures that the product is built or developed in accordance with the requirements and design specifications given by the end user.
Verification also ensures that the data gathered is used in the right place and in the right way. Verification happens at the beginning of the software testing lifecycle. This process is used to exhibit consistency, correctness, completeness of the software at every stage as well as in between the different stages of the lifecycle. In the verification phase, documents related to software, plans, code, specifications, etc. are reviewed.
Verification Methods There are mainly three methods of verification. They are as follows:
1. Peer Reviews
2. Walkthroughs
3. Inspections
1. Peer Reviews
Peer review is the review of products performed by peers during product development to identify defects for removal and recommend other changes that are needed. It is an informal method of verification. Peer reviews are also called "buddy checks".
2. Walkthroughs Walkthroughs are semi-formal meetings led by a presenter who presents the documents. The purpose of walkthroughs is to find potential bugs; they are also used for knowledge sharing and communication.
3. Inspections Inspections are formal meetings attended by authors and participants who come prepared with their own task. The goals of these meetings are to communicate important product information and detect defects by verifying the software product.
3.2.3 Validation Validation checks the product design to ensure that the product is right for its intended use.
Unlike verification, the validation process happens in the later part of the software testing cycle. It is in this process that the actual testing of software takes place. Validation determines the correctness of the product in accordance with the user requirements.
Validation Techniques
The two main techniques of the validation process are:
1. White box testing
2. Black box testing
1. White box testing
White box testing is a software testing approach. It uses the inner structural and logical properties of the program for verification and for deriving test data. White box testing is also called glass box, structural, open box, transparent box, or clear box testing. For white box testing, the tester needs to have knowledge of the code or the internal program design. White box testing also requires the tester to look into the code and find out which unit or statement of the code is malfunctioning.
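As a minimal illustration (the function and its inputs are hypothetical), white-box test cases are derived by reading the code and choosing inputs that exercise each branch:

```python
# Hypothetical function under test with two branches.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# White-box tests are chosen by inspecting the code: one input per branch,
# so every statement and every decision outcome is executed at least once.
assert classify(-1) == "negative"      # exercises the true branch
assert classify(0) == "non-negative"   # exercises the false branch
print("branch coverage: both outcomes of the decision executed")
```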
2. Black box testing
Black Box Testing is a validation strategy that does not need any knowledge of the internal design or the code. Black box testing is also called opaque box, functional/behavioral, or closed box testing.
The main focus of this testing is on testing for requirements and functionality of the software product or application. In this approach, black-box tests are derived from functional design specifications against which testers check the actual behavior of the software.
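By contrast, a black-box sketch derives its cases purely from an assumed specification, here a password-length rule invented for illustration, using boundary values rather than any knowledge of the code:

```python
# Assumed specification (for illustration only): passwords of 8-16
# characters are valid. Black-box tests are derived from this spec alone,
# using boundary values and invalid classes, without reading the code.
def is_valid_password(pw):   # hypothetical implementation under test
    return 8 <= len(pw) <= 16

assert is_valid_password("a" * 8)        # lower boundary: valid
assert is_valid_password("a" * 16)       # upper boundary: valid
assert not is_valid_password("a" * 7)    # just below the boundary: invalid
assert not is_valid_password("a" * 17)   # just above the boundary: invalid
print("all boundary-value cases behave as the spec requires")
```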
The verification and validation processes are summarized in the comparison below.

Verification:
• Focus is on the process, i.e. determining "Am I building the product right?"
• A low-level activity, performed during development on key artifacts through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards.
• Asks: "Am I accessing the data right (in the right place, in the right way)?"
• Verifies the consistency, completeness, and correctness of the software at each stage of, and between each stage of, the development life cycle.

Validation:
• Focus is on the product, i.e. determining "Am I building the right product?"
• A high-level activity, performed after a product is produced, against established criteria, ensuring that the product integrates correctly into its environment.
• Asks: "Am I accessing the right data (in terms of the data required to satisfy the requirement)?"
• Validates the correctness of the final software product produced by a development project with respect to the user needs and requirements.
Advantages of V-V Model
• Simple and easy to use
• Each phase has specific deliverables
• Chances of success are high since the test plans are developed in the initial stage of development lifecycle
• Works well for small projects where requirements are easily understood
Disadvantages of V-V Model:
• Less flexible and adjusting scope is difficult and expensive
• Software product is developed during the implementation phase, so no early prototypes of the software are produced
• Very rigid like the waterfall model
CHAPTER 4A: Validation Activity – Low-Level Testing
The validation process in the software development stage is carried out at two levels: low level and high level.
In this section, we will learn about low-level testing methods. Low-level testing is broadly classified into:
• Unit Testing
• Integration Testing
Unit Testing
Unit testing involves validation of individual units of source code to ensure that they are working properly. A unit is the smallest testable part of an application.
The main purpose of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves as per requirements. Each unit is tested separately before being integrated into modules to test the interfaces between modules. Unit testing results in identifying a large number of defects. Unit testing requires knowledge of the internal design of the code, and it is generally done by developers.
Integration Testing
Integration testing is the process of combining and testing multiple components in a group. This testing is performed after unit testing and before system testing.
Integration testing detects interface errors and ensures that the modules or components operate properly when combined together. Integration testing is done by developers or by the QA team.
Integration testing is of two types:
• Non-incremental
• Incremental
Types of Integration Testing (diagram): integration testing is divided into incremental and non-incremental approaches; incremental testing is further divided into top-down (depth-first or breadth-first), bottom-up, and sandwich integration.
Non-incremental Testing
In this approach, all the developed modules are coupled together to form a complete software system, and the integrated result is then tested. This is also called Big Bang Integration. In this method, debugging is difficult, since an error can be associated with any component.
Incremental Testing
In this approach, modules are integrated in small increments. It therefore becomes easier to isolate errors, and interfaces are more likely to be tested completely. Incremental testing is further classified into top-down integration, bottom-up integration, and sandwich integration.
a) Top-Down Integration
In this method, modules are integrated in small increments in a downward direction, starting with the main module at the top and proceeding sequentially to the related modules at the bottom. Top-down integration is further classified into the depth-first and breadth-first approaches.
o Depth-first search
The depth-first approach integrates the components vertically downwards, i.e. depth-wise, along a control path of the program. For example, selecting the left-hand path of the structure below, the components are integrated as DFS = {[(U1 + U2) + U4] + U5} + U3
Top-down Integration (diagram): U1 is the top-level module, with U2 and U3 directly below it, and U4 and U5 below U2.
o Breadth-first search Breadth-first integration incorporates all components directly subordinate at each level, moving horizontally across the structure. For example, considering components U1, U2, U3:
BFS = {[(U1) + (U2 + U3)] + U4 + U5}
Advantages of top-down integration
The functionality of the main module is tested first. This helps in verifying major control or decision points early in the testing process.
Disadvantages of top-down integration
Stubs are required when performing integration testing, and developing stubs is generally very difficult.
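A stub can be as simple as a function that returns a canned answer in place of an unfinished lower-level module. A minimal sketch, with all names illustrative:

```python
# Top-down integration: the main module is real, and a lower-level module
# that is not yet written is replaced by a stub. All names are illustrative.
def report_stub(data):
    """Stub for the unfinished report module: returns a canned answer."""
    return "REPORT(stubbed)"

def main_module(data, report=report_stub):
    # the control logic under test calls downward through the stub,
    # so the main module's behavior can be verified early
    return "header | " + report(data)

assert main_module([1, 2, 3]) == "header | REPORT(stubbed)"
print("main module verified against the stubbed subordinate")
```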
b) Bottom-Up Integration In this approach, the lowest-level components are tested first, moving upwards to the higher-level components. Bottom-up integration testing begins with the components at the lowest levels of the program structure. All the bottom or low-level modules, procedures, or functions are integrated and then tested. After the integration testing of the lower-level modules, the next level of modules is formed and used for integration testing. This approach is best used only when all or most of the modules at the same development level are ready. It helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.
Advantages of bottom-up integration
The drivers required are much easier to develop than stubs.
Disadvantages of bottom-up integration
The main module's functionality is tested last, so major control and decision problems are identified later in the testing process.
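A driver is the mirror image of a stub: throwaway code that calls a finished low-level unit the way its future caller would. A minimal sketch, with names and test cases illustrative:

```python
# Bottom-up integration: the low-level module is real; a throwaway driver
# plays the role of the not-yet-written caller. Names are illustrative.
def discount(price, percent):
    """Real low-level unit, already implemented."""
    return round(price * (1 - percent / 100), 2)

def driver():
    """Driver exercising the low-level unit the way its caller would."""
    cases = [((100.0, 10), 90.0), ((80.0, 25), 60.0)]
    for args, expected in cases:
        assert discount(*args) == expected
    return "all driver cases passed"

print(driver())
```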
c) Sandwich Integration In this approach, top-down and bottom-up testing are combined: both are started simultaneously, and the testing is built up from both sides. It requires a big team.
CHAPTER 4B: Validation Activity – High-Level Testing
High-level testing is broadly classified into:
1. Function Testing
2. System Testing
3. Acceptance Testing
Function testing is a type of high-level testing based on black box testing that creates its test cases from the specifications of the software component under test. Functions are tested by feeding them input and then examining the output. It is used to detect discrepancies between a program's functional specification and its actual behavior. It is carried out after unit testing and integration testing are complete, and can be conducted in parallel with system testing. However, it is advisable to begin system testing only when function testing has demonstrated some predefined level of reliability, usually after 40% of function testing is complete.
Functional testing differs from system testing in that functional testing validates a program by checking it against the functional design specifications, while system testing validates a program by checking it against the user or system requirements.
4B.1 Objectives
The goal of function testing is to verify the actual behavior of the software or application against the functional design specifications provided by customers.
Function testing is performed before the product is made available to customers. It can begin whenever the product has sufficient functionality to execute some of the tests, or after unit and integration testing have been completed.
Function testing is the process of attempting to detect discrepancies between a program’s functional specification and its actual behavior. When a discrepancy is detected, either the program or the specification is incorrect. All black-box methods are applicable to function based testing.
4B.2 Steps of Function Testing
1. Decompose and analyze the functional design specification
2. Identify the functions that the software is expected to perform
3. Create input data based on the function's specifications
4. Determine expected output based on the function's specifications
5. Develop functional test cases
6. Execute the test cases
7. Compare expected and actual results
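The later steps, creating spec-derived inputs and expected outputs, executing the cases, and comparing expected and actual results, can be sketched as follows; the shipping-fee rule is an assumed example specification, not from the text:

```python
# Assumed functional spec (for illustration): orders of $50 or more ship
# free; smaller orders pay a flat $5.00 fee.
def shipping_fee(order_total):       # hypothetical function under test
    return 0.0 if order_total >= 50 else 5.0

# test cases derived from the (assumed) spec, including the boundary
cases = [(49.99, 5.0), (50.00, 0.0), (120.0, 0.0)]
for order_total, expected in cases:
    actual = shipping_fee(order_total)
    # step 7: compare expected and actual results
    assert actual == expected, f"{order_total}: expected {expected}, got {actual}"
print("all function test cases passed")
```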
4B.3 Summary
Function Testing:
1. Attempts to detect discrepancies between a program's functional specification and its actual behavior.
2. Includes positive and negative scenarios, i.e. valid and invalid inputs.
3. Ignores the internal mechanism or structure of a system or component and focuses on the output generated in response to selected inputs and execution conditions.
4. Evaluates the compliance of a system or component with the specified functional specification and the corresponding predicted results.
CHAPTER 5: Types of System Testing
5.1 Introduction
‘System Testing’ is the next level of testing and is one of the most difficult testing activities. It focuses on testing the system as a whole. Once the components are integrated, the system needs to be tested rigorously to ensure that it meets the quality standards. It verifies software operation from the perspective of the end user, with different configurations and setups. System testing builds on the previous levels of testing, namely unit testing and integration testing. System testing can be conducted in parallel with function testing.
5.1.1 Prerequisites for System Testing
The prerequisites for System Testing are:
• All the components should have been successfully unit tested.
• All the components should have been successfully integrated, and integration testing must have been performed.
• An environment closely resembling the production environment should be created.
5.1.2 Steps of System Testing
The major steps of system testing are as follows:
1. Create a system test plan by decomposing and analyzing the SRS.
2. Develop the requirements test cases.
3. Carefully build the data used as input for system testing.
4. If applicable, create scripts to:
   a) build the environment, and
   b) automate the execution of test cases.
5. Execute the test cases.
6. Fix any bugs and re-test the code.
7. Repeat the test cycle as necessary.
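The automation step, executing test cases and reporting results, can be sketched as a small runner script; the test-case names and bodies here are illustrative placeholders:

```python
# A minimal sketch of automating test-case execution: run (name, callable)
# pairs, report pass/fail counts, and return overall success.
def run_suite(test_cases):
    passed, failed = 0, 0
    for name, test in test_cases:
        try:
            test()               # a test passes if it raises no AssertionError
            passed += 1
        except AssertionError as exc:
            failed += 1
            print(f"FAIL {name}: {exc}")
    print(f"{passed} passed, {failed} failed")
    return failed == 0

suite = [
    ("login accepts a valid user", lambda: None),    # stand-in for a real case
    ("totals are rounded to cents", lambda: None),   # stand-in for a real case
]
assert run_suite(suite)
```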
5.1.3 Types of System Testing
1. Usability Testing
2. Performance Testing
3. Load Testing
4. Stress Testing
5. Security Testing
6. Configuration Testing
7. Compatibility Testing
8. Installability Testing
9. Recovery Testing
10. Availability Testing
11. Volume Testing
12. Accessibility Testing
5.2 Usability Testing
Usability testing is a technique for ensuring that the intended users of a system can carry out the intended tasks efficiently, effectively, and satisfactorily. It is carried out before release so that any significant issues identified can be addressed. Usability testing can be carried out at various stages of the design process; in the early stages, however, techniques such as walkthroughs are often more appropriate. System usability testing is the system testing of an integrated, black box application against its usability requirements. The system usability test is conducted by observing people using the product to discover errors and areas for improvement. Usability testing is a black-box testing technique.
The purposes of usability testing are to:
• Identify usability defects involving the application's human interface, such as:
  o Difficulty of orientation and navigation (e.g., navigation defects such as broken links and anchors within a website)
  o Efficiency of interaction (based on user task analysis)
  o Information consistency and presentation
  o Appropriate use of language and metaphors
  o Conformance to the digital brand description document and website design guidelines
  o Programming defects (e.g., an incorrectly functioning tab key, accelerator keys, and mouse actions)
• Validate the application by determining if it fulfills its quantitative and qualitative usability requirements concerning ease of:
  o Installation by the environments team
  o Usage by the user organization
  o Operations by the operations organization
• Determine if the application's human interfaces should be iterated to make them more usable. The emphasis is on the presentation of the product rather than its functionality.
• Report these failures to the development teams so that the associated defects can be fixed.
• Help determine the extent to which the application is ready for launch.
• Provide input to the defect trend analysis effort.
5.2.1 What Is Usability?
Usability is how easily users can navigate from one page to another or from one menu to another. Usability is a combination of factors that influence the user's experience with a product or system. Usability testing is a methodical evaluation of the graphical user interface (GUI) according to usability criteria.
Usability criteria include:
• Efficiency of use – Once a user is experienced with the system, how much time is required to accomplish key tasks?
• Ease of learning – How fast can a user learn to use a system that he or she has never seen before, in order to accomplish basic tasks?
• Memorability – When the user approaches the system the next time, will he or she remember enough to use it effectively?
• Subjective satisfaction – How does the user react to the system? How does he or she feel about using it?
• Error frequency and severity – How frequent are errors in the system? How severe are they? How do users recover from errors?
5.2.2 Purpose of Usability Testing
A usability test establishes the ease of use and effectiveness of a product using standard usability test practices. It also identifies usability problems with the product and helps establish solutions for them. Once those solutions are implemented, the product is easier to use, requires less support, and should be better received in the marketplace. When clients want to determine how well target users can understand and use their software or hardware product, we recommend usability testing of the product with target-market users.
5.2.3 Methods of Usability Testing
• By On-Site Observation: Conducted on-site, this method enables the study of users working on the system in their typical work environment. It is usually used when the system or environment is too complicated to be replicated in a laboratory, and it may also be used to study users in their real environment. The advantage of this type of testing is that it gives users a less formal feeling about the test and enables a relatively long observation period. The informal setting helps collect information from a real environment and not only from preset scenarios.
• By Laboratory Experiments: The usability test may be performed on a real system, on a paper prototype, or on a demo (e.g., PowerPoint) that incorporates only the elements of the system to be tested. Testing is performed in a controlled atmosphere. Users are introduced to the system and are required to perform several key tasks according to preset scenarios. User activities are recorded using two cameras: one records on-screen activities and the other records the user's responses and expressions. In addition, usability experts monitoring the test take notes on any item of interest.
5.2.4 Summary
• The goal of usability testing is to adapt software to users' actual work styles, rather than forcing users to adapt to a new work style.
• Usability testing involves having users work with the product and observing their responses to it.
• Unlike beta testing, which also involves users, it should be done as early as possible in the development cycle.
• Usability testing is the process of attempting to identify discrepancies between the user interface of a product and the human engineering requirements of its potential users.
• The real customer is involved as early as possible, even at the stage where only screens drawn on paper are available.
• Usability testing ensures that the application is easy to work with, limits keystrokes, and is easy to understand. The best way to perform this testing is to bring in experienced, intermediate, and novice users and solicit their input on the usability of the application.
• Usability testing can be done numerous times during the life cycle.
5.3 Performance Testing
Performance testing is done to verify all the performance-related aspects of the application. The aim of performance testing is to identify inefficiencies and bottlenecks in application performance and to enable them to be identified, analyzed, fixed, and prevented in the future. Performance testing is the system testing of an integrated, black box, partial application against its performance requirements under normal operating circumstances.
Software performance testing is used to determine the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.
Performance testing is conducted to:
• Validate the system.
• Cause failures relating to performance requirements:
  o Response time (the average and the maximum application response times)
  o Throughput (the maximum transaction rates that the application can handle)
  o Latency (the average and maximum time to complete a system operation)
  o Capacity (the maximum number of objects the application/databases can handle)
• Track and report these failures to development teams so that the associated defects can be fixed.
• Reduce hardware costs by providing information allowing systems engineers to:
  o Identify the minimum hardware necessary to meet the performance requirements
  o Tune the application for maximum performance by identifying the optimal system configuration (e.g., by repeating the test using different configurations)
• Provide information that will assist in performance tuning under various workload conditions, hardware configurations, and database sizes (e.g., by helping identify performance bottlenecks).
5.3.1 What Is Performance Testing?
Performance testing is testing to ensure that the application responds within the time limit set by the user.
5.3.2 Purpose of Performance Testing
Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. Its purpose is to measure and evaluate response times, transaction rates, and other time-sensitive requirements of an application in order to verify that the performance requirements have been achieved.
Examples include response times for on-line processing, processing times for batch work, transaction throughput rates (number of transactions in a predetermined period), etc.
5.3.3 Benefits of Performance Testing
• Helps improve customer satisfaction by providing customers with a faster, more reliable product.
• Helps identify and fix bottlenecks in an application before rolling it out to customers.
5.3.4 Summary Performance testing determines whether the program meets its performance requirements. Efficiencies in performance testing are realized through extensive experience, optimization of processes, and optimal selection of tools.
5.4 Load Testing
Load tests are end-to-end performance tests under anticipated production load. Load testing is the process of exercising the system under test by feeding it the largest tasks it can operate with; it is the process of putting demand on a system or device and measuring its response. Load testing is sometimes called volume testing, or longevity/endurance testing. Load testing is done to expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, and buffer overflows, and to ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Load testing is done to:
• Cause failures concerning the load requirements, helping to identify defects that are not efficiently found during unit and integration testing.
• Partially validate the application (i.e., determine if it fulfills its scalability requirements, for example when the number of users increases), including its distribution and load-balancing mechanisms.
• Determine if the application will support typical production load conditions.
• Identify the point at which the load becomes so great that the application fails to meet performance requirements.
• Report these failures to the development teams so that the associated defects can be fixed.
• Locate performance bottlenecks including those in I/O, CPU, network, and database.
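A load test can be sketched with concurrent "users" issuing requests while the average response time is checked against a baseline; the user count, request count, per-request sleep, and 500 ms baseline are all illustrative assumptions:

```python
import threading
import time

# A minimal sketch of a load test with concurrent simulated users.
def request():
    time.sleep(0.002)  # stands in for one call to the system under test

results = []
lock = threading.Lock()

def user(requests_per_user=5):
    for _ in range(requests_per_user):
        t0 = time.perf_counter()
        request()
        with lock:                     # threads share the results list
            results.append(time.perf_counter() - t0)

threads = [threading.Thread(target=user) for _ in range(20)]  # 20 concurrent users
for t in threads:
    t.start()
for t in threads:
    t.join()

avg = sum(results) / len(results)
print(f"{len(results)} requests, average {avg * 1000:.2f} ms")
assert avg < 0.5, "load pushed average response time past the 500 ms baseline"
```

Dedicated tools generate far larger and more realistic loads, but the structure, concurrent demand plus a measured response, is the same.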
5.4.1 What Is Load Testing?
Load testing is subjecting your system to a statistically representative load. Load testing is a non-functional form of system testing. LoadRunner and Rational Robot are front-runners among tools for this type of testing. The application is tested against heavy loads, such as testing a website under a range of loads to determine at what point the system's response time degrades or fails.
5.4.2 Why Is Load Testing Important?
• It is done to measure and monitor the performance of an e-business infrastructure. Watch how the system handles (or fails to handle) the load of thousands of concurrent users hitting your site before deploying and launching it for the entire world to visit.
• It increases the uptime and availability of mission-critical Internet systems by spotting bottlenecks under large user-stress scenarios before they happen in a production environment.
• It protects IT investments by predicting scalability and performance. IT projects are expensive: the hardware, the staffing, the consultants, the bandwidth, and more add up quickly. Load testing helps avoid wasting money on expensive IT resources and ensures that they will scale.
• It avoids project failures by predicting site behavior under large user loads. Before a site goes live, one has to visualize its behavior with a large number of users, test high-load scenarios, and take precautions to avoid failures.
5.5 Stress Testing
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this is to make sure that the system fails and recovers gracefully – this quality is known as recoverability.
Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability.
Stress testing is performed to:
• Partially validate the application (i.e., to determine if it fulfills its scalability requirements).
• Determine how an application degrades and eventually fails, as conditions become extreme. For example, stress testing could involve an extreme number of simultaneous users, extreme numbers of transactions, queries that return the entire contents of a database, queries with an extreme number of restrictions, or an entry at the maximum amount of data in a field.
• Report these failures to the development teams so that the associated defects can be fixed.
• Determine if the application will support "worst case" production load conditions.
• Provide data that will assist systems engineers in making intelligent decisions regarding future scaling needs.
• Help determine the extent to which the application is ready for launch.
• Provide input to the defect trend analysis effort.
5.5.1 What Is Stress Testing? Stress testing is testing done by applying load to the application under test beyond the specified limits. Subjecting the system to extreme pressures in a short time span is stress testing.
5.5.2 What Is The Purpose of Stress Testing? Stress testing helps in determining, e.g. the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down. The test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored.
For example, the simultaneous log-on of 1000 users to a particular Web site.
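The idea of pushing load past the specified limit until the application breaks can be sketched as a load-ramping loop. Everything below is a toy model: `toy_server` stands in for the real application, and its 150-user limit is an assumed figure, not a real measurement.

```python
def find_breaking_point(serve, start=10, step=10, limit=1000):
    """Ramp the simulated user count upward until the system fails,
    returning the highest load level it survived."""
    users = start
    survived = 0
    while users <= limit:
        try:
            serve(users)
        except RuntimeError:
            return survived  # breaking point found
        survived = users
        users += step
    return survived

def toy_server(concurrent_users):
    # Hypothetical system under test: exhausts resources above 150 users.
    if concurrent_users > 150:
        raise RuntimeError("resource exhausted")

print(find_breaking_point(toy_server))  # 150
```

The returned figure is exactly what stress testing is after: the load level at which the application stops meeting its requirements.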
5.5.3 Summary
• The tester's objective is to force the system to "break down" under the stress of extreme conditions.
• When we perform stress testing on an application, the system will fail, but it should fail in a rational manner without corrupting or losing the customer's data. Test your application to the point that it experiences diminished response or breaks down, to determine the application's limitations.
• This testing is conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how.
• Start stress testing early to catch subtle bugs that need the original developers to fix, and basic design flaws that may affect many parts of the system.
• It ensures that the application will respond appropriately with many users and activities happening simultaneously.
5.6 Security Testing
Security testing is performed to ensure that only users with the appropriate authority are able to use the applicable features of the system. Security is a primary concern, to avoid any unwanted penetration into the application. Security testing is checking a system, application, or component against its security requirements and the implementation of its security mechanisms. It also uncovers the application's failure to meet security-related requirements (black box testing) and its failure to properly implement security mechanisms (white box/gray box testing), thereby enabling the underlying defects to be identified, analyzed, fixed, and prevented in the future.
The security testing covers:
• Requirements: Verifying that the application fulfills its security requirements: identification, authentication, authorization, content protection, integrity, intrusion detection, privacy, and system maintenance.
• Mechanisms: Determine if the system causes any failures concerning its implementation of its security mechanisms:
o Encryption and Decryption
o Firewalls
o Personnel Security:
 Passwords
 Digital Signatures
 Personal Background Checks
o Physical Security:
 Locked doors for identification, authentication, and authorization
 Badges for identification, authentication, and authorization
 Cameras for identification, authentication, and authorization
• Cause Failures: Cause failures concerning the security requirements, to help identify defects that are not efficiently found during other types of testing:
o The application fails to identify and authenticate a user.
o The application allows a user to perform an unauthorized function.
o The application fails to protect its content against unauthorized usage.
o The application allows the integrity of data or messages to be violated.
o The application allows undetected intrusion.
o The application fails to ensure privacy by using an inadequate encryption technique.
• Report Failures: It is necessary to report failures to the development teams so that the associated defects can be fixed.
• Determine Launch Readiness: It helps determine the extent to which the system is ready for launch.
• Project Metrics: It helps provide project status metrics.
• Trend Analysis: It provides input to the defect trend analysis effort.
5.6.1 Purpose of Security Testing It helps in determining how well a system protects against unauthorized, internal or external access or willful damage.
5.6.2 Summary
• It tests whether the system meets its specified security objectives.
• The tester's aim is to demonstrate the system's failure to fulfill the stated security requirements.
o Beware: it is impossible to prove that a system is impenetrable.
o The objective is to establish sufficient confidence in security.
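A small illustration of demonstrating failure to fulfill a security requirement: the role store and `delete_account` function below are hypothetical, but the test exercises both the allowed path and the forbidden path, since the negative case is the one that exposes authorization defects.

```python
ROLES = {"alice": "admin", "bob": "viewer"}  # hypothetical user store

def delete_account(requesting_user, target):
    """Only admins may delete accounts; everyone else is rejected."""
    if ROLES.get(requesting_user) != "admin":
        raise PermissionError("unauthorized")
    return f"{target} deleted by {requesting_user}"

# Positive test: an authorized action succeeds.
assert delete_account("alice", "bob") == "bob deleted by alice"

# Negative test: an unauthorized action must be rejected.
try:
    delete_account("bob", "alice")
    raise AssertionError("expected the call to be rejected")
except PermissionError:
    pass  # the unauthorized call was correctly blocked
```

If the negative test fails, the application "allows a user to perform an unauthorized function", one of the failure classes listed above.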
5.7 Configuration Testing
Configuration testing checks the operation of the software under test with different types of hardware configurations. It is done to check whether the system can work on machines with different configurations (software with hardware). Computers are built using different peripherals, components, and drivers that are designed by various companies.
5.7.1 Purpose of Configuration Testing To determine whether the program operates properly when the hardware or software is configured in a required manner.
5.7.2 Summary
• It is the process of checking the operation of the software with various types of hardware. For example, for applications that run on Windows-based PCs used in homes and businesses:
o PCs from different manufacturers such as Compaq, Dell, Hewlett-Packard, IBM, and others
o Components: disk drives, video, sound, modem, and network cards
o Options and memory
o Device drivers
5.8 Compatibility Testing
Compatibility testing checks the operation of the software under test with different types of software. Software compatibility testing means checking that your software interacts with and shares information correctly with other software. For example, it measures how well Web pages display on different browser versions. Compatibility testing is used to determine whether your software application has issues related to how it functions in conjunction with the operating system and different types of system hardware and software.
5.8.1 Purpose of Compatibility Testing To evaluate how well software performs on a particular hardware, software, operating system, browser, or network environment.
5.8.2 Summary
• Testing whether the system is compatible with other systems with which it should communicate.
• It is the process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
• It means checking that your software interacts with and shares information correctly with other software. For example: what other software (operating systems, Web browsers, etc.) is your software designed to be compatible with?
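One common way to drive compatibility testing is from an explicit support matrix. The matrix below is an invented example; real entries would come from the product's stated compatibility requirements.

```python
# Hypothetical support matrix: browser -> (minimum version, maximum
# version or None for "no upper bound").
SUPPORTED = {
    "chrome": (90, None),
    "firefox": (88, None),
    "ie": (11, 11),  # only IE 11 is supported
}

def is_compatible(browser, version):
    """Check a (browser, version) pair against the support matrix."""
    if browser not in SUPPORTED:
        return False
    low, high = SUPPORTED[browser]
    return version >= low and (high is None or version <= high)

assert is_compatible("chrome", 95)       # within the supported range
assert not is_compatible("ie", 9)        # below the minimum version
assert not is_compatible("safari", 15)   # not listed in the matrix
```

Each entry in the matrix then corresponds to at least one environment the application must actually be exercised in.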
5.9 Installation Testing
Installability testing ensures that all the installation options in the software work properly. Installation testing (in software engineering) can simply be defined as any testing that occurs outside of the development environment. Such testing will frequently occur on the computer system on which the software product will eventually be installed.
5.9.1 Purpose of Installation Testing It is done to identify the ways in which installation procedures lead to incorrect results; to ensure that the application or component is easy to install; to ensure that time and money are not wasted during the installation process; to improve the morale of the engineers who will install the application or component; to minimize installation defects; to determine whether the installation procedure is documented; and to determine whether the methodology for migration from the old system to the new system is documented.
5.9.2 Summary
• Testing installation procedures is a good way to avoid making a bad impression, since installation makes the first impression on the end user.
• To identify ways in which the installation procedures lead to incorrect results.
• Installation options are:
o New
o Upgrade
o Customized/Complete
o Under normal and abnormal conditions
• It is the testing concerned with the installation procedures for the system.
5.10 Recovery Testing
Recovery testing is to check a system’s ability to recover from failure. It is done to determine whether operations can be continued after a disaster or after integrity of the system has been lost. This involves reverting to a point where the integrity of the system was known and then reprocessing transactions up to the point of failure. It is used where continuity of operations is essential.
5.10.1 Purpose of Recovery Testing To verify the system’s ability to recover from varying degrees of failure.
5.10.2 Summary To determine whether the system or program meets its requirements for recovery after a failure.
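The "revert to a known-good point and reprocess transactions up to the point of failure" procedure described above can be sketched as a simple replay loop. The checkpoint value and transaction log here are invented for illustration.

```python
def recover(checkpoint_state, transaction_log, apply):
    """Restore from the last known-good checkpoint, then replay the
    logged transactions up to the point of failure (a write-ahead-log
    style recovery, sketched on a simple account balance)."""
    state = checkpoint_state
    for txn in transaction_log:
        state = apply(state, txn)
    return state

# Toy example: a balance checkpointed at 100, with a deposit of 25 and
# a withdrawal of 10 logged before the crash.
apply_txn = lambda balance, amount: balance + amount
recovered = recover(100, [25, -10], apply_txn)
print(recovered)  # 115
```

A recovery test would deliberately interrupt the system, run this procedure, and verify that the recovered state matches what uninterrupted processing would have produced.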
5.11 Availability Testing
Availability testing is done to verify that functionality remains available to the user whenever a system undergoes a failure. The application is tested for its reliability so that failures, if any, are discovered and removed before deploying the system. Availability tests are conducted to check both the reliability and the availability of an application. Reliability is the degree to which something operates without failure under given conditions during a given time period. The most likely scenarios are tested under normal usage conditions to validate that the application provides the expected service.
It compares the availability percentage to the original service level agreement. Using availability testing, the application is run for a planned period, and failure events collected with repair times. Where reliability testing is about finding defects and reducing the number of failures, availability testing is primarily concerned with measuring and minimizing the actual repair time.
Formula for calculating percentage availability: Availability (%) = (MTBF / (MTBF + MTTR)) × 100, where MTBF is the mean time between failures and MTTR is the mean time to repair.
Notice that as MTTR trends toward zero, the percentage availability approaches 100%; availability testing therefore aims to reduce and eliminate downtime.
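The availability formula translates directly into code; the MTBF and MTTR figures below are invented purely for illustration.

```python
def availability_percent(mtbf, mttr):
    """Availability (%) = MTBF / (MTBF + MTTR) * 100."""
    return mtbf / (mtbf + mttr) * 100

# A system averaging 950 hours between failures and 50 hours to repair:
print(availability_percent(950, 50))   # 95.0
# Driving MTTR toward zero pushes availability toward 100%:
print(availability_percent(950, 0))    # 100.0
```

An availability test run supplies the measured MTBF and MTTR from the planned observation period, and the result is compared against the service level agreement.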
5.12 Volume Testing
Volume testing is done to check the performance of the application when the volume of data being processed in the database is increased. Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests in which the volume of data being processed is the subject of the test. Such systems might capture real-time sales, perform database updates, and/or retrieve data.
Volume testing will seek to verify the physical and logical limits to a system’s capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.
5.12.1 Summary
• Testing where the system is subjected to large volumes of data.
• Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.
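A volume test verifies a documented capacity limit by feeding the system data at and just beyond that limit, and by checking that the overload case fails in an orderly fashion. `process_batch` and its 100,000-record capacity are hypothetical.

```python
def process_batch(records, capacity=100_000):
    """Hypothetical processor with a documented capacity limit; a volume
    test feeds it data at and just beyond that limit."""
    if len(records) > capacity:
        raise OverflowError("capacity exceeded")
    return sum(records)

# At the documented limit: the batch must be processed correctly.
assert process_batch([1] * 100_000) == 100_000

# Just beyond the limit: the failure must be orderly, not silent.
try:
    process_batch([1] * 100_001)
    raise AssertionError("limit was not enforced")
except OverflowError:
    pass  # overload rejected in a controlled way
```

The point of the second check is the "orderly fashion" requirement: exceeding capacity should raise a defined error rather than corrupt data.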
5.13 Accessibility Testing
Accessibility Testing is an approach to measuring a product’s ability to be easily customized or modified for the benefit of users with disabilities. Users should be able to change input and output. Accessibility testing is the process of ensuring that a Web application is accessible to people with disabilities. If your Web application is produced for or by a US government agency, accessibility verification is required in order to prevent violation of the federal law, the potential loss of government contracts, and the potential for costly lawsuits. It can help you prevent functionality problems that could occur when people with disabilities try to access your application with adaptive devices such as screen readers, refreshable Braille displays, and alternative input devices.
5.13.1 Purpose of Accessibility Testing The goal of accessibility testing is to ensure that people with disabilities can access and use the software product as effectively as users without disabilities. It is to pinpoint problems within Web sites and products that may otherwise prevent users with disabilities from accessing the information they are searching for.
It can help one determine the compliance of the product, i.e. how the product complies with legal requirements regarding accessibility, and its user friendliness and effectiveness for physically challenged users.
5.13.2 Summary
• Enable users with common disabilities to use the application or component.
• Determine the degree to which the user interface of an application enables users with common or specified (e.g., auditory, visual, physical, or cognitive) disabilities to perform their specified tasks. For example, accessibility requirements may include enabling people with auditory disabilities, colorblindness, physical disabilities, or mild cognitive disabilities to interact with and use the application, including verbally.
CHAPTER 6: Acceptance Testing
6.1 Introduction
Acceptance testing is the process of evaluating the product against the current needs of its end users. It is usually done by end users or customers after the testing group has successfully completed its testing. Acceptance tests really are requirement artifacts, because they describe the criteria by which the customer will determine whether the system meets their needs. It is a type of high-level testing that describes black-box requirements, identified by your project customers, to which your system must conform. It involves operating the software in production mode for a pre-specified period of time.
6.2 Objective
The objectives of acceptance testing are to:
• Determine whether the application satisfies its acceptance criteria.
• Enable the customer organization to determine whether to accept the application.
• Determine if the application is ready for deployment to the full user community.
• Report any failures to the development teams so that the associated defects can be fixed.
6.3 Acceptance Testing
Acceptance testing is further divided into:

• Contractual acceptance testing
• Non-contractual acceptance testing
If the software is developed under contract, the contracting customer does the acceptance testing. For example, proper messages should be provided for navigation from one part to another for an end user. If the software is not developed under contract, then acceptance testing is done in the following two different ways:
• Alpha Testing • Beta Testing
6.3.1 Alpha Testing
• Alpha testing is usually performed by end users inside the development organization.
• The testing is done in a controlled environment.
• Developers are present.
• Defects found by end users are noted down by the development team and fixed before release.
6.3.2 Beta Testing
• Beta testing is usually performed by end users at the customer's site, i.e. outside the development organization and inside the end users' organization.
• It is not a controlled environment.
• Developers will not be present.
• Defects found by end users are reported to the development organization.
Once acceptance testing is done and the user/client gives clearance, the next step is to release the software. At the time of release, final candidate testing is usually done, which is last-minute testing. It is also called Golden Candidate testing.
CHAPTER 7: Black Box Testing
7.1 Introduction
• Black box testing is a validation strategy, not a single type of testing.

• The types of testing under this strategy are entirely based on, and focused on, testing the requirements and functionality of the work product/software application.

• It is a testing technique that does not require knowledge of the internal functionality/program structure of the system.

• Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing", or "Closed Box Testing".

• It will not test hidden functions (i.e. functions implemented but not described in the functional design specification), and errors associated with them will not be found in black box testing.
7.2 Objectives
The objectives of black box testing are to:
• Validate the system to determine if it fulfills its operational requirements.
• Identify defects that are not efficiently found during unit and integration testing.
• Report these failures to the development teams so that the associated defects can be fixed.
• Help determine the extent to which the system is ready for launch.
Black box testing verifies the actual behavior of the software against its functional requirements, not against the internal program structure or code. That is the reason black box testing is also considered functional testing. This testing technique is also called behavioral testing, opaque box testing, or simply closed box testing, and so it is not normally carried out by the programmer. This technique treats the system as a black or closed box: the tester knows only the formal inputs and projected (expected) results, not how the program actually arrives at those results. Hence, the tester tests the system based on the functional specifications given to him.
7.3 Advantages of Black Box Testing
• Tests will be done from an end user’s point of view because the end user should finally accept the system.
• Test cases can be designed as soon as the functional specifications are complete.
• Testing helps to identify vagueness and contradictions in the functional specifications.
• It is efficient when used on larger systems.
• The tester and the developer are independent of each other.
• The tester can be non-technical.
7.4 Disadvantages of Black Box Testing
• It is difficult to identify all possible valid and invalid inputs in limited testing time, so writing test cases is slow and difficult.
• It is difficult to identify tricky inputs if the test cases are not developed based on specifications.
• There is a chance of repeating tests that have already been done by the programmer.
7.5 Black Box Testing Methods
There are three Black Box Testing methods:
1. Equivalence partitioning 2. Boundary Value Analysis 3. Error Guessing
7.5.1 Equivalence Partitioning Equivalence partitioning is a black box testing technique. All the inputs for which we get the same output can be categorized into the same equivalence class, so the tests are written using test data that represents each equivalence class. It is designed to minimize the number of test cases.
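As an illustration, consider a hypothetical ticket-pricing function. Its input domain splits into four equivalence classes (invalid age, child, adult, senior), so one representative value per class suffices instead of testing every possible age.

```python
def ticket_price(age):
    """Hypothetical pricing rule: children under 12 pay 5, adults 12-64
    pay 10, seniors 65 and over pay 7; negative ages are invalid."""
    if age < 0:
        raise ValueError("invalid age")
    if age < 12:
        return 5
    if age < 65:
        return 10
    return 7

# One representative input per equivalence class:
assert ticket_price(8) == 5     # class: child (0-11)
assert ticket_price(30) == 10   # class: adult (12-64)
assert ticket_price(70) == 7    # class: senior (65+)
try:
    ticket_price(-1)            # class: invalid (age < 0)
except ValueError:
    pass
```

Four test values cover the whole input domain, which is exactly the case-minimizing effect equivalence partitioning aims for.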
7.5.2 How to Ident