SOFTWARE TESTING - UNIT – II
Important 16 Marks Questions
1. What is incremental testing? Explain two approaches of incremental testing.
2. Write a note on System testing and its various types.
3. Write a detailed note on Test Case Design.
4. Compare and contrast top-down versus bottom-up testing.
5. Write a note on Function testing and its various types.
TEST CASE DESIGN TECHNIQUES
A test case is one of the significant components of the testing phase. It works as a tool or a manual
to verify and validate a particular requirement, or a set of requirements or functionalities, of the
software product.
Here, we provide some useful techniques for designing effective test cases in order to increase
the productivity of the testing process.
Techniques to design test cases
Techniques used to design test cases may broadly be classified into two types:
Black Box design technique
White Box design technique
Black Box design technique
This technique is used to design test cases based on the specifications, without reference to the
internal code. It comprises the following types:
Boundary Value Analysis(BVA)
Boundary value analysis is based on testing at the boundaries between partitions. It includes
maximum, minimum, inside and outside boundaries, typical values, and error values.
It is generally seen that a large number of errors occur at the boundaries of the defined input
values rather than at the center. The technique is also known as BVA and yields a selection of
test cases that exercise the bounding values.
This black box testing technique complements equivalence partitioning. It is based on the
principle that if a system works correctly for the boundary values, it will very likely work
correctly for all the values lying between them.
Guidelines for Boundary Value analysis
1. If an input condition is restricted to a range between values x and y, design test cases with
values x and y as well as values just above and just below x and y.
2. If an input condition takes a large number of values, develop test cases that exercise the
minimum and maximum values. Values just above and below the minimum and maximum are
also tested.
3. Apply guidelines 1 and 2 to output conditions: produce outputs that reflect the minimum and
maximum expected values, and also test values just below and just above them.
Example:
Input condition is valid between 1 and 10
Boundary values: 0, 1, 2 and 9, 10, 11
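The boundary values above can be exercised with a short sketch; the validator `accept` and its valid range 1..10 are assumptions made purely for illustration:

```python
def accept(value):
    # Hypothetical program under test: accepts integers from 1 to 10 inclusive
    return 1 <= value <= 10

# Boundary values around the valid range 1..10, paired with expected results
boundary_cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}

for value, expected in boundary_cases.items():
    assert accept(value) == expected, f"boundary check failed at {value}"
```

Note that the six test values concentrate on the two boundaries (0/1/2 and 9/10/11), where experience shows most defects occur.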
Equivalence Partitioning
Equivalence class partitioning allows you to divide the set of test conditions into partitions
whose members can be treated the same. This software testing method divides the input domain
of a program into classes of data from which test cases can be designed.
The concept behind this technique is that a test with a representative value of a class is
equivalent to a test with any other value of the same class. It allows you to identify valid as well
as invalid equivalence classes.
Example:
Input conditions are valid in the ranges
1 to 10 and 20 to 30
Hence there are five equivalence classes
0 and below (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
31 and above (invalid)
You then select one value from each class, e.g.,
-2, 3, 15, 25, 45
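A minimal sketch of testing one representative per class; the validator `is_valid` is a hypothetical stand-in for the program under test:

```python
def is_valid(value):
    # Hypothetical program under test: valid ranges are 1..10 and 20..30
    return 1 <= value <= 10 or 20 <= value <= 30

# One representative value per equivalence class, with the expected result
representatives = {-2: False, 3: True, 15: False, 25: True, 45: False}

for value, expected in representatives.items():
    assert is_valid(value) == expected, f"class check failed at {value}"
```

Five test values cover all five classes; by the partitioning principle, any other value from the same class would behave the same way.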
Decision table testing
A decision table is also known as a cause-effect table. This software testing technique is used
for functions which respond to a combination of inputs or events. For example, a submit button
should be enabled only if the user has entered all required fields.
The first task is to identify functionalities where the output depends on a combination of inputs.
If there is a large set of input combinations, divide it into smaller subsets which are easier to
manage in a decision table.
For every function, create a table and list all possible combinations of inputs and their
respective outputs. This helps to identify conditions that would otherwise be overlooked by the
tester.
Following are steps to create a decision table:
Enlist the inputs in rows
Enter all the rules in the columns
Fill the table with the different combinations of inputs
In the last row, note down the output against each input combination.
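The steps above can be sketched for the submit-button example; the two required fields (name and email) are invented for illustration:

```python
# Decision table for a hypothetical submit button: it is enabled
# only when every required field (here: name and email) is filled.
decision_table = [
    # (name_filled, email_filled) -> submit_enabled
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def submit_enabled(name_filled, email_filled):
    # Function under test: all required fields must be filled
    return name_filled and email_filled

# Each rule (column) of the table becomes one test case
for inputs, expected in decision_table:
    assert submit_enabled(*inputs) == expected
```

With two binary inputs the table has 2^2 = 4 rules, so all combinations are small enough to test exhaustively.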
State Transition Diagrams
In the state transition technique, changes in input conditions change the state of the Application
Under Test (AUT). This testing technique allows the tester to test the behavior of an AUT. The
tester can perform this by entering various input conditions in a sequence. In the state
transition technique, the testing team provides positive as well as negative input test values to
evaluate the system behavior.
Guideline for State Transition:
State transition should be used when the testing team is testing the application for a limited set
of input values.
The technique should be used when the testing team wants to test a sequence of events which
happen in the application under test.
Example:
In the following example, if the user enters a valid password in any of the first three attempts,
the user will be able to log in successfully. If the user enters an invalid password on the first or
second try, the user is prompted to re-enter the password. When the user enters the password
incorrectly a third time, action is taken and the account is blocked.
State Transition Diagram
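The three-attempt login example can be sketched as a small state machine; the state names and the password are assumptions made for illustration:

```python
class LoginStateMachine:
    # Hypothetical AUT: three wrong passwords block the account
    def __init__(self, correct_password="secret"):
        self.correct_password = correct_password
        self.attempts = 0
        self.state = "AWAITING_PASSWORD"

    def enter_password(self, password):
        if self.state != "AWAITING_PASSWORD":
            return self.state  # LOGGED_IN and BLOCKED are final states
        if password == self.correct_password:
            self.state = "LOGGED_IN"
        else:
            self.attempts += 1
            if self.attempts >= 3:
                self.state = "BLOCKED"
        return self.state

# Negative path: three invalid attempts lead to the BLOCKED state
machine = LoginStateMachine()
machine.enter_password("wrong")
machine.enter_password("wrong")
assert machine.enter_password("wrong") == "BLOCKED"

# Positive path: a valid password on the first try logs the user in
assert LoginStateMachine().enter_password("secret") == "LOGGED_IN"
```

The two assertions correspond to a negative and a positive input sequence, which is exactly what the state transition technique asks the testing team to supply.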
White Box design technique
This is a technique of designing test cases based on the internal structure of the system or the
software product. The following techniques come under the umbrella of the white box design
technique.
Error guessing
Error guessing is a software testing technique based on guessing the errors which may be
present in the code. It is an experience-based technique in which the test analyst uses his or her
experience to guess the problematic parts of the application under test.
The tester compiles a list of possible errors or error-prone situations, and then writes test
cases to expose those errors. To design test cases based on this technique, the analyst can use
past experience to identify such conditions.
Guidelines for Error Guessing:
The tester should use the previous experience of testing similar applications
Understanding of the system under test
Knowledge of typical implementation errors
Remember previously troubled areas
Evaluate Historical data & Test results
Exploratory testing
It may be seen as a learning approach combined with a testing technique, in which a tester with
limited knowledge of the specification and requirements of the product progressively performs
testing with minimal planning, continuously developing a strategy to advance the testing
activity using his or her skills and experience.
INCREMENTAL TESTING
Incremental Testing, also known as Incremental Integration Testing, is one of the approaches of
Integration Testing and incorporates its fundamental concepts.
In this testing, we test each module individually in unit testing phase, and then modules are
integrated incrementally and tested to ensure smooth interface and interaction between modules.
In this approach, every module is combined incrementally, i.e., one by one till all modules or
components are added logically to make the required application, instead of integrating the
whole system at once and then performing testing on the end product. Integrated modules are
tested as a group to ensure successful integration and data flow between modules.
As in integration testing, the primary focus of doing this testing is to check interface, integrated
links, and flow of information between modules. This process is repeated till the modules are
combined and tested successfully.
Example
Let’s understand this concept with an example:
System or software application consists of following Modules:
Incremental Integration testing approach
Each module, i.e., M1, M2, M3, etc., is tested individually as part of unit testing
Modules are combined incrementally, i.e., one by one, and tested for successful interaction
In Fig 2, Modules M1 and M2 are combined and tested
In Fig 3, Module M3 is added and tested
In Fig 4, Module M4 is added and testing is done to make sure everything works together
successfully
The rest of the modules are also added incrementally at each step and tested for successful
integration
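The stepwise integration above can be sketched in code; the modules M1..M4 and the data they pass along are invented for illustration:

```python
# Hypothetical modules: each consumes the output of the previous one
def m1():     return "m1-data"
def m2(data): return data + "|m2"
def m3(data): return data + "|m3"
def m4(data): return data + "|m4"

# Step 1: integrate M1 and M2 and test their interface
assert m2(m1()) == "m1-data|m2"
# Step 2: add M3 and re-test the growing chain
assert m3(m2(m1())) == "m1-data|m2|m3"
# Step 3: add M4 -- the fully integrated flow is now under test
assert m4(m3(m2(m1()))) == "m1-data|m2|m3|m4"
```

Because one module is added per step, a failure at step 2 points directly at M3 or its interface, which is the defect-isolation benefit described below.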
Objective of Incremental Test
To ensure that different modules work together successfully after integration
Identify defects earlier, in each phase. This gives developers an edge in identifying where the
problem lies. For example, if testing after M1 and M2 are integrated is successful, but the test
fails when M3 is added, this helps the developer isolate the issue
Issues can be fixed in an early phase without much rework and at less cost
Incremental Integration Testing Methodologies
Stubs are used in the top-down testing approach and are known as “called programs”. Stubs help
simulate the interface with lower-level modules which are not yet available or developed.
Drivers are used in the bottom-up testing approach and are known as “calling programs”. Drivers
help simulate the interface with top-level modules which are not yet developed or available.
#1) Top Down
In the top-down approach, testing takes place from top to bottom, i.e., from the main module
down to the sub-modules. Modules forming the top layer of the application are tested first.
This approach follows the structural flow of the application under test. Unavailable or
undeveloped modules or components are substituted by stubs.
Let’s understand this with an example:
Module: Website Login can be denoted as L
Module: Order can be denoted as O
Module Order Summary can be denoted as OS (Not yet developed)
Module: Payment can be denoted as P
Module Cash Payment can be denoted as CP
Module Debit/Credit Payment can be denoted as DP (Not yet developed)
Module Wallet Payment can be denoted as WP (Not yet developed)
Module: Reporting can be denoted as R (Not yet developed)
Top down Incremental Integration Testing Approach
Following test cases will be derived for “breadth-first”:
Test Case1: Module L and Module O will be integrated and tested
Test Case2: Module L, O and P will be integrated and tested
Test Case3: Module L, O, P and R will be integrated and tested.
Following test cases will be derived for “depth-first”:
Test Case1: Module L and Module O will be integrated and tested
Test Case2: Module L, O and OS will be integrated and tested
Test Case3: Module L, O, OS, P will be integrated and tested
Test Case4: Module L, O, OS, P, CP will be integrated and tested
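A stub for the undeveloped Payment module might look like this; the functions and their return values are assumptions made for illustration:

```python
def login(user):
    # Module L (developed): returns a session token
    return f"session:{user}"

def place_order(session):
    # Module O (developed): creates an order for the session
    return {"session": session, "item": "book"}

def payment_stub(order):
    # Stand-in for Module P (not yet developed): a stub returning a
    # canned response, only simulating the data flow between modules
    return {"order": order, "status": "PAID"}

# Test Case: L and O integrated; P simulated by the stub
result = payment_stub(place_order(login("alice")))
assert result["status"] == "PAID"
```

The stub lets Test Case 2 run before Module P exists, but because its response is canned it cannot catch defects inside the real payment logic.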
Merits of Top-down Methodology
Early exposure of architecture defects
It outlines the working of an application as a whole in early stages and helps in early
disclosure of design defects
Main control points are tested early
De-Merits of Top-down Methodology
Significant low-level modules are tested late in the cycle
It is very challenging to write test conditions
A stub is not a perfect implementation of the related module; it only simulates the data flow
between two modules
#2) Bottom-up
In this approach, testing takes place from bottom to top, i.e., modules at the bottom layer are
integrated and tested first, and then other modules are integrated sequentially as we move up.
Unavailable or undeveloped modules are replaced by drivers.
Bottom up Incremental Integration testing approach
Following test cases will be derived:
Test Case1: Unit testing of module Practical and Theory
Test Case2: Integration and testing of Modules Marks-Practical-theory
Test Case3: Integration and testing of Modules Percentage-Marks-Practical-Theory
Test Case4: Unit testing of Module Sports Grade
Test Case5: Integration and testing of Modules Rank-Sports Grade-Percentage-Marks-Practical-
Theory
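A driver standing in for the not-yet-developed Marks module can be sketched as follows; the marks values are invented for illustration:

```python
def practical_marks():
    # Bottom-level module (developed)
    return 40

def theory_marks():
    # Bottom-level module (developed)
    return 35

def marks_driver():
    # Driver ("calling program") simulating the future Marks module:
    # it only exercises the interfaces of the modules below it
    return practical_marks() + theory_marks()

# Test Case: integration of Practical and Theory via the driver
assert marks_driver() == 75
```

Once the real Marks module is developed, it replaces the driver and the same test case is re-run against the genuine integration.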
Merits of Bottom-up Methodology
This methodology is very useful for applications built with a bottom-up design model
It is easier to create test conditions in the bottom-up approach
Starting testing at the bottom level of the hierarchy means critical modules or functionality
are tested at an early stage. This helps in the early discovery of errors
Interface defects are detected at an early stage
De-merits of Bottom-up Methodology
Drivers are more difficult to write than stubs
Design defects are caught at a later stage
In this approach, we do not have a working application until the last module is built
A driver is not a complete implementation of the related module; it only simulates the data flow
between two modules.
FUNCTIONAL TESTING
Functional testing is testing the ‘functionality’ of a software product or an application under
test. It tests the behavior of the software under test. Based on the requirements of the client, a
document called a Software Specification or Requirement Specification is used as a guide to test
the application.
Test data is prepared based on it and a set of test cases is created. The software is then tested
in a real environment to check whether the actual result is in sync with the expected result.
This technique is called the black box technique; it is mostly carried out manually and is also
very effective in finding bugs.
TYPES OF FUNCTIONAL TESTING
Smoke Testing
This type of testing is performed before the actual system testing to check whether the critical
functionalities are working fine, in order to carry out further extensive testing. This, in turn,
saves the time of installing the new build again and avoids further testing if the critical
functionalities fail to work. It is a generalized way of testing the application.
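A smoke test can be sketched as a quick pass over a few critical checks; the checks themselves are placeholders for a hypothetical application:

```python
# Placeholder critical checks for a hypothetical application
def app_starts():     return True
def login_works():    return True
def homepage_loads(): return True

def smoke_test():
    # Run only the critical functionality; reject the build on any failure
    critical_checks = [app_starts, login_works, homepage_loads]
    return all(check() for check in critical_checks)

# Further extensive testing proceeds only if the smoke test passes
assert smoke_test(), "Build rejected: a critical functionality failed"
```

The point of the sketch is the gate: a handful of broad checks decide whether the build is worth deeper testing at all.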
Sanity Testing
It is a type of testing where only a specific functionality, or a bug which has been fixed, is
tested to check whether the functionality works fine and whether there are no other issues due to
the changes in the related components. It is a specific way of testing the application.
Integration Testing
Integration Testing is performed when two or more functions or components of the software are
integrated to form a system. It basically checks the proper functioning of the software when the
components are merged to work as a single unit.
Regression Testing
Regression testing is carried out on receiving a build of the software after the bugs found in an
earlier round of testing have been fixed. It verifies whether the bugs are fixed and checks that
the entire software still works fine with the changes.
Localization Testing
It is a testing process to check the software's functioning when it is adapted for a different
language as required by the client.
Example: A website is working fine in English language setup and now it is localized to Spanish
language setup. The changes in the language may affect the overall user interface and
functionality too. Testing done to check these changes is known as Localization testing.
User Acceptance Testing
In user acceptance testing, the application is tested for the users' comfort and acceptance,
considering its ease of use.
The actual end users or the clients are given a trial version to use in their office setup to
check whether the software works as per their requirements in a real environment. This testing is
carried out before the final launch and is also termed beta testing or end-user testing.
SYSTEM TESTING
System testing is the testing of a fully integrated software system. Generally, a computer
system is built by integrating software (any one piece of software is only a single element of a
computer system). Software is developed in units and then interfaced with other software
and hardware to create a complete computer system. In other words, a computer system consists
of a group of software components that perform various tasks, but software alone cannot perform
those tasks; the software must be interfaced with compatible hardware. System testing is a series
of different types of tests whose purpose is to exercise and examine the full working of the
integrated software computer system against the requirements.
Hierarchy of Testing Levels
There are mainly two widely used methods for software testing: one is white box testing, which
uses the internal code to design test cases, and the other is black box testing, which uses the
GUI or user perspective to develop test cases.
White box testing
Black box testing
System testing falls under black box testing as it involves testing the external working of the
software. Testing follows the user's perspective to identify even minor defects.
System Testing includes the following steps.
Verification of the input functions of the application, to test whether they produce the
expected output or not.
Testing of the integrated software including external peripherals, to check the interaction
of the various components with each other.
Testing of the whole system end to end.
Behavior testing of the application from a user's perspective.
Types of System Testing
Regression Testing
Regression testing is performed under system testing to identify whether any defect has appeared
in the system due to modification of any other part of the system. It makes sure that changes
made during the development process have not introduced new defects, and gives assurance that
old defects will not reappear as new software is added over time.
Load Testing
Load testing is performed under system testing to verify whether the system can work under
real-world loads or not.
Functional Testing
Functional testing of a system is performed to find whether any function is missing from the
system. The tester makes a list of vital functions that should be in the system; missing ones can
be added during functional testing, which should improve the quality of the system.
Recovery Testing
Recovery testing of a system is performed under system testing to confirm the reliability,
trustworthiness, and accountability of the system, all of which rest on the system's ability to
recover. It should be able to recover from all possible system crashes successfully.
Migration Testing
Migration testing is performed to ensure that, if the system needs to be moved to new
infrastructure, it can be moved without any issues.
Usability Testing
The purpose of this testing is to make sure that the system is easy for the user to work with and
that it meets the objectives it is supposed to fulfil.
Software and Hardware Testing
This testing of the system intends to check hardware and software compatibility. The hardware
configuration must be compatible with the software to run it without any issue. Compatibility
provides flexibility by providing interactions between hardware and software.
Why is System Testing Important?
System testing gives a high level of assurance of system performance as it covers the end-
to-end functioning of the system.
It includes testing of System software architecture and business requirements.
It helps in reducing live issues and bugs after the system goes into production.
System testing can use both the existing system and a new system, feeding the same data into
both and then comparing the differences in functionality between the added and existing
functions, so the user can understand the benefits of the newly added functions of the system.