Project Report 2011
PROJECT REPORT
ON
SOFTWARE TESTING TECHNIQUES
At
STQC IT Services, Ministry Of Information Technology, Delhi
Submitted to:
AIM & ACT, DEPT. OF COMPUTER SCIENCE & APPLICATIONS
BANASTHALI VIDYAPITH, BANASTHALI
In the partial fulfillment of the requirement for the degree of
M.Tech.(SE)
Session (2006 - 2008)
Submitted by: Priya Pandey, M.Tech(S.E.) 4th Sem, Roll No.: 7290
Under the Guidance of: Mr. C. S. Bisht, Director, STQC IT Services, Delhi
Acknowledgement
I take this opportunity to express my sincere thanks to Mr. C. S. Bisht, Director,
Mr. A. K. Upadhyaya, Additional Director, and Mr. Sanjeev Kumar, Additional
Director, who have been a source of inspiration for me.
I would like to express my sincere thanks to other IT staff support members of
STQC IT Services, Ministry Of Information Technology, New Delhi.
I am also thankful to Mrs. Nayantara Shirvastava, Scientist, STQC-IT for her
valuable guidance throughout this project.
Last but not the least, I owe a huge debt of thanks to STQC IT Services,
Ministry of Information Technology, New Delhi, which gave me an opportunity to
do my project work. These projects were good exposure that will definitely help
me in my professional career.
Priya Pandey M.Tech(S.E.)
Roll No.:7290
TABLE OF CONTENTS

1. PURPOSE
2. SCOPE
3. OVERVIEW
3.1. INTRODUCTION TO THE ORGANIZATION
3.1.1. BACKGROUND
3.1.2. MISSION
3.1.3. ACTIVITIES
3.1.4. STANDARDS USED
3.1.5. STQC ACTIVITIES
3.1.6. ORGANIZATION CHART
3.1.7. TEST CONTROL SUB-COMMITTEE
3.1.8. INDEPENDENT TEST GROUP
3.2. TESTING ACTIVITIES
3.3. TESTING DOCUMENTATION
3.3.1. TEST PLAN
3.3.2. TEST SPECIFICATION
3.3.3. TEST INCIDENT REPORT
3.3.4. TEST PROGRESS REPORT
3.3.5. TEST SUMMARY REPORT
3.4. TEST PLANNING & CONTROL
4. GENERAL CONCEPTS OF TESTING
4.1. TESTING OBJECTIVES
4.2. TESTING STRATEGY
4.3. LEVELS OF TESTING
4.3.1. UNIT TESTING
4.3.2. LINK TESTING
4.3.3. FUNCTION TESTING
4.3.4. SYSTEM TESTING
4.3.5. ACCEPTANCE TESTING
4.4. GENERAL TESTING PRINCIPLES
4.5. TESTING START PROCESS
4.6. TESTING STOP PROCESS
4.7. SOFTWARE TESTING - REQUIREMENTS TRACEABILITY MATRIX
4.8. REGRESSION TESTING
4.9. BLACK BOX TESTING
5. STRATEGY OF TESTING
5.1. SOFTWARE TESTING LIFE CYCLE
5.1.1. TEST PLANNING
5.1.2. TEST ANALYSIS
5.1.3. TEST DESIGN
5.1.4. CONSTRUCTION AND VERIFICATION
5.1.5. TESTING CYCLES
5.1.6. FINAL TESTING AND IMPLEMENTATION
6. MEASURING SOFTWARE TESTING
6.1. SOFTWARE QUALITY
6.2. SOFTWARE QUALITY CHARACTERISTICS
6.2.1. FUNCTIONALITY
6.2.2. RELIABILITY
6.2.3. USABILITY
6.2.4. EFFICIENCY
6.2.5. MAINTAINABILITY
6.2.6. PORTABILITY
6.3. QUALITY IN USE CHARACTERISTICS
7. PROCESS FOLLOWED AT STQC
7.1. STUDY THE MANUAL
7.1.1. SRS (SOFTWARE REQUIREMENT SPECIFICATION)
7.1.2. USER MANUAL
7.2. PREPARE TEST SCENARIOS
7.3. PREPARE TEST CASE
7.4. PREPARE DEFECT REPORT
7.5. PREPARE TEST REPORT
7.6. REGRESSION TESTING
7.7. PERFORMANCE TESTING
8. APPLICATION SOFTWARE ASSIGNED FOR TESTING
8.1. LRIS (LAND REGISTRATION INFORMATION SYSTEM)
8.1.1. INTRODUCTION OF PROJECT
8.1.2. PRODUCT FUNCTION
8.1.3. SYSTEM INTERFACE
8.1.4. OBJECTIVE OF THE PROJECT
8.1.5. SYSTEM REQUIREMENT
8.1.6. RESPONSIBILITY
8.1.7. METHODOLOGY
8.1.8. ERROR SCREEN
8.2. VATIS (VALUE ADDED TAX INFORMATION SYSTEM)
8.2.1. INTRODUCTION
8.2.2. OBJECTIVES OF THE PROJECT
8.2.3. RESPONSIBILITY
8.2.4. METHODOLOGY
8.3. FARMER PORTAL
8.3.1. INTRODUCTION OF PROJECT
8.3.2. APPLICATIONS
8.3.3. RESPONSIBILITY
8.3.4. METHODOLOGY
9. AUTOMATED TOOLS FOR TESTING
9.1. INTRODUCTION
9.2. RATIONAL ROBOT
9.3. LOADRUNNER
9.4. BORLAND® SILK PERFORMER
10. CONCLUSION
11. REFERENCES
12. DEFINITIONS
13. SUMMARY
APPENDIX A: CHECKLIST ON UNIT TESTING
APPENDIX B: CHECKLIST ON LINK TESTING
APPENDIX C: CHECKLIST ON FUNCTION TESTING
APPENDIX D: CHECKLIST ON SYSTEMS TESTING
APPENDIX E: CHECKLIST ON ACCEPTANCE TESTING
APPENDIX F: CHECKLIST FOR CONTRACTED-OUT SOFTWARE DEVELOPMENT
1. PURPOSE
The major purpose of this document is to provide a set of application software
testing techniques to ensure that computer systems are properly tested, in
pursuit of reliable, high-quality computer systems.
2. SCOPE
This document gives a set of guidelines to be referenced by application project
teams when planning and carrying out testing activities for application
software. Project teams should not treat these guidelines as mandatory
standards, but as a reference model, and should tailor the guidelines
according to each individual project's actual situation.
This document at its current version is most suitable to development projects
following the SDLC, which is defined in the department’s Information Systems
Procedures Manual. For maintenance projects, these guidelines may need to
be adjusted according to the project's actual situation. Such adjustments will
be the responsibility of the individual project team.
It is intended that the guidelines be applicable to both in-house-developed
and contracted-out software projects.
3. OVERVIEW
This document, in essence, suggests a reference model for the planning and
carrying out of Application Software Testing in STQC. The following serve as an
overview of the model:
3.1 ABOUT THE ORGANIZATION
3.1.1. Background
The Standardization, Testing and Quality Certification (STQC) Directorate is an
attached office under the Department of Information Technology, Government of
India. It was established in 1977 to provide Standardization, Testing &
Certification support to the Indian electronics and allied industries at the
national and international level.
STQC provides cost-effective, international-level assurance services in Quality
and Security to Indian industry and users on a national level. STQC services
are also being extended to overseas countries. This program has been in
existence for over three decades and was established based on the
recommendations of the Bhabha Committee's report on the electronics industry.
The program has received substantial technical and financial support from the
Government of Germany under the Indo-German Technical Cooperation project
spanning 15 years (1980-1995).
Initially the STQC program catered to the testing and calibration needs of the
small and medium-sized electronics industry. With the shift of focus to IT, the
programme has undergone major changes in the past 4 years. From merely
providing Testing, Calibration and Quality Assurance support to the electronics
hardware sector, STQC has positioned itself as a prime assurance service
provider to both the hardware and software industries and users. The recent
focus of the Department of Information Technology (DIT) on IT Security,
Software Testing & Certification, and the assignment of the National Assurance
Framework, have further raised the responsibilities of and expectations from
the Directorate.
The STQC IT Services Delhi center is well equipped with well-qualified, trained
and experienced manpower to provide software-quality-related services. The
center has very good training and testing facilities. STQC has taken the
initiative to support this major initiative of DIT on the aspects of Standards,
Quality and Security. STQC has evolved a Quality Assurance framework covering
the aspects of quality of IT service delivery. It has also evolved, and
validated, a quality model for testing and evaluation of application software
based on the latest international standards.
3.1.2. STQC Mission
“To be a key enabler in making Indian IT organizations and users achieve
compliance to International Quality Standards and compete globally".
3.1.3. STQC Testing Services
Independent Third party Test laboratories network covering Software system testing and issue of Test Reports
Generation of Test Cases and automation of execution
Development of regression test bed
Verification & Validation planning in SDLC
3.1.4. STQC Standards Used
Various ISO/IEC and IEEE software engineering standards.
3.1.5. STQC Activities
3.1.6. The Organization Chart
3.1.7. Test Control Sub-Committee
A Test Control Sub-Committee is set up to co-ordinate, monitor and resolve
priority conflicts in the testing activities. The emphasis here is on the
necessity of these coordination activities. Therefore, for small-sized projects
that do not justify the existence of such a sub-committee, its function is
still required, but is to be achieved through discussion meetings between the
project team and user representatives.
3.1.8 Independent Test Group
Where resource constraints permit, an independent Test Group is set up to carry
out the testing activities. The emphasis here is on the independent role of the Test
Group, which does not necessarily mean dedicated resources.
3.2 TESTING ACTIVITIES
To test a computer system from the following 5 different perspectives (the
emphasis here is on different perspectives, which do not necessarily mean
different testing phases):
(i) To validate individual program modules against program specifications (Unit
Testing);
(ii) To validate program linkages against design specifications (Link Testing);
(iii) To validate integrated software against functional specifications (Function
Testing);
(iv) To validate the integrated software against specifications on operating
environment (System Testing); and,
(v) To validate the integrated software against end-user needs (Acceptance
Testing).
3.3 TESTING DOCUMENTATION
To document testing activities through the use of
(i) Test Plan
(ii) Test Specification
(iii) User Manual Document
(iv) Test Incident Report
(v) Test Progress Report
(vi) Test Summary Report
Introduction
The following summarizes the testing documentation (highlighted by the numbers
in brackets) to be produced in a project:
For each project
    Reference the 5 levels of testing, as well as any necessary
    complementary reviews (refer to section 6.5), in the project plan (1)
    For each of the 4 levels of testing (i.e. Link, Function,
    Systems and Acceptance Testing)
        Prepare a test plan (2)
        Prepare a test specification (3)
        Prepare test incident reports for faults found (4)
        Prepare periodic test progress reports (5)
        Prepare a test summary report after completion of the tests (6)
    End Level
End Project
The above documentation will be subject to quality assurance checks for existence
and completeness by the Quality Assurance Staff.
Note: For small-sized projects, the test plans & test specifications for Link
Testing, Function Testing and Systems Testing could be combined.
3.3.1 TEST PLAN
3.3.1.1 Purpose of Document
To prescribe the scope, approach, resources, and schedule of the testing activities
for a level of testing. To identify the items being tested, the features to be tested,
the testing tasks to be performed, and the personnel responsible for each task. This
test plan should be prepared for each level of testing except Unit Testing.
3.3.1.2 Outline of Document
The testing plan should provide at least the following information:
(a) Test Items
List the functional items and software features to be tested.
For Link Testing, list the software items to be tested (which in most of the cases
should be ALL software items).
For Function Testing, list the functions in the function catalogue (which in most
cases should be ALL functions).
For Systems Testing, list the tests to be carried out.
(b) Test Tasks
Normally, there are the following 4 tasks:
1) Prepare test specification, regarding
(i) Test procedures
(ii) Test cases
(iii) Test data
(iv) Test environment
2) Set up of the testing environment
3) Load test data
4) Conduct tests
(*Do not plan on the assumption that each test case will only be executed once)
For each task, list the estimated effort required & duration.
(c) Responsibilities of relevant parties. For each party, specify their corresponding
responsibilities for the testing levels.
(d) Remarks. Describe any special constraint on the test procedures; identify any
special techniques and tools requirements that are necessary for the execution of
this test.
A test plan is a systematic approach to testing a system such as a machine or
software. The plan typically contains a detailed understanding of what the eventual
workflow will be.
The Test Plan is the plan according to which the software is tested. It is
prepared before the testing of the software is performed, and outlines how the
software is to be tested. The Test Plan prescribes the scope, approach,
resources, and schedule of the testing activities. It identifies the items to
be tested, the features to be tested, the testing tasks to be performed, the
personnel responsible for each task, and the risks associated with the plan.
The test plan prepared is as follows.
Test plan template, based on the IEEE 829 format:
1.1.1 Test plan identifier
1.1.2 References
1.1.3 Introduction
1.1.4 Test items (functions)
1.1.5 Software risk issues
1.1.6 Features to be tested
1.1.7 Features not to be tested
1.1.8 Approach (strategy)
1.1.9 Item pass/fail criteria
1.1.10 Entry & exit criteria
1.1.11 Suspension criteria & resumption requirements
1.1.12 Test deliverables
1.1.13 Remaining test tasks
1.1.14 Environmental needs
1.1.15 Staffing and training needs
1.1.16 Responsibilities
1.1.17 Planning risks and contingencies
1.1.18 Approvals
1.1.19 Glossary
1 Test plan identifier
Specify the unique identifier assigned to the test plan.
2. Introduction
Summarize the software items and software features to be tested. References to the
following documents, when they exist, are required in the highest-level test plan:
a) Project authorization;
b) Project plan;
c) Quality assurance plan;
d) Configuration management plan;
e) Relevant policies;
f) Relevant standards.
3 Test items
Identify the test items including their version/revision level. Also specify hardware
requirements. Supply references to the following test item documentation, if it
exists:
a) Requirements specification;
b) Design specification;
c) Users guide;
d) Operations guide;
e) Installation guide.
4 Features to be tested
Identify all software features, and combinations of features, to be tested.
Identify the test design specification associated with each feature and each
combination of features.
5 Features not to be tested
Identify all features, and significant combinations of features, that will not
be tested, together with the reasons.
6. Approach
Describe the overall approach to testing. For each major group of features or
feature combinations, specify the approach that will ensure that these feature
groups are adequately tested. Specify the major activities, techniques, and tools
that are used to test the designated groups of features. The approach should be
described in sufficient detail to permit identification of the major testing tasks and
estimation of the time required to do each one.
Specify the minimum degree of comprehensiveness desired. Identify the
techniques that will be used to judge the comprehensiveness of the testing effort.
Identify significant constraints on testing such as test item availability, testing
resource availability, and deadlines.
7 Item pass/fail criteria
Specify the criteria to be used to determine whether each test item has passed
or failed testing.
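As an illustration, such a criterion can often be reduced to a comparison of actual against expected results. The sketch below is one possible form (the function, field values, and tolerance handling are illustrative, not part of IEEE 829):

```python
def evaluate_item(actual, expected, tolerance=0):
    """Return 'PASS' if actual matches expected (within tolerance), else 'FAIL'.

    Numeric results are compared within a tolerance; everything else must
    match exactly. Names and values here are invented for illustration.
    """
    if isinstance(actual, (int, float)) and isinstance(expected, (int, float)):
        return "PASS" if abs(actual - expected) <= tolerance else "FAIL"
    return "PASS" if actual == expected else "FAIL"

# Two invented test items: a message check and a numeric check.
verdicts = [
    evaluate_item("order saved", "order saved"),
    evaluate_item(3.14, 3.1415, tolerance=0.01),
]
```

Stating the criterion as an executable rule removes disagreement at review time about what "passed" means.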
8 Suspension criteria and resumption requirements
Specify the criteria used to suspend all or a portion of the testing activity on the
test items associated with this plan. Specify the testing activities that must be
repeated, when testing is resumed.
9 Test deliverables
Identify the deliverable documents. The following documents should be included:
a) Test plan;
b) Test logs;
c) Test summary reports.
10 Testing tasks
Identify the set of tasks necessary to prepare for and perform testing. Identify all
inter task dependencies and any special skills required.
11 Environmental needs
Specify both the necessary and desired properties of the test environment. This
specification should contain the physical characteristics of the facilities including
the hardware, the mode of usage (e.g., stand-alone), and any other software or
supplies needed to support the test. Also specify the level of security that must be
provided for the test facilities, system software, and proprietary components such
as software, data, and hardware.
Identify special test tools needed. Identify any other testing needs (e.g.,
publications or office space). Identify the source for all needs that are not currently
available to the test group.
12 Responsibilities
Identify the groups responsible for managing, designing, preparing, executing,
checking, and resolving. Identify the groups responsible for providing the test
items identified in 4.2.3 and the environmental needs identified in 4.2.11.
These groups may include the developers, testers, operations staff, user
representatives, technical support staff, data administration staff, and quality
support staff.
13 Staffing and training needs
Specify test-staffing needs by skill level. Identify training options for providing
necessary skills.
14 Schedule
Include test milestones identified in the software project schedule as well as all
item transmittal events.
15 Risks and contingencies
Identify the high-risk assumptions of the test plan. Specify contingency plans for
each (e.g., delayed delivery of test items might require increased night shift
scheduling to meet the delivery date).
16 Approvals
Specify the names and titles of all persons who must approve this plan. Provide
space for the signatures and dates.
3.3.2 TEST SPECIFICATION
3.3.2.1 Purpose of Document
To specify refinements of the test approach, to identify the features to be tested, to
specify the steps for executing the tests and specify the test case for each tests.
3.3.2.2 Outline of Document
The test specification should provide at least the following information:
(a) Test Control Procedures
To specify the following:
1) Error reporting procedures;
2) Change / Version control procedures of S/W modules;
3) Set-up & Wind-down procedures of the testing environment.
(b) Testing Environment
To specify at least the following items that are required in the testing progress:
1) H/W & System S/W required;
2) Number of terminals required;
3) Testing facilities / tools required;
4) Test database; and
5) Operations support / operating hour.
(c) Test Termination Criteria
To specify the criteria (e.g. failure of certain critical test cases, the
number of errors reaching a certain limit, etc.) under which the testing would
be terminated.
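Termination criteria of this kind can be expressed as a simple decision rule. A minimal sketch follows (the parameter names and the error limit of 10 are invented):

```python
def should_terminate(error_count, critical_case_failed, error_limit=10):
    """Decide whether the test run should be terminated early.

    error_count          -- number of errors logged so far
    critical_case_failed -- True if a critical test case has failed
    error_limit          -- project-specific ceiling (10 is an invented value)
    """
    return critical_case_failed or error_count >= error_limit
```

Making the rule explicit lets the test group apply the criteria consistently across testing sessions.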
(d) Test Cases
Identify and briefly describe the test cases selected for the testing. For each
test case, also specify the steps (e.g. bring up screen, input data, keys
pressed, etc.), the expected outputs (e.g. messages displayed, file changes,
etc.) and the programs involved, and record whether the test case has passed or
failed after the testing has been conducted.
It should be noted that the definition of test cases is a "design" process and
does vary for different projects. Please refer to Appendices A to F for test
case design checklists.
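As a sketch of the fields a test case record might carry, the structure below captures the steps, expected output, programs involved, and verdict (all names and values are invented, not mandated by this document):

```python
def record_result(case, actual):
    """Fill in the observed output and derive the pass/fail verdict."""
    case["actual"] = actual
    case["verdict"] = "PASS" if actual == case["expected"] else "FAIL"
    return case

# One invented test case with steps, expected output and programs involved.
test_case = {
    "id": "TC-001",
    "steps": ["bring up login screen", "enter user name", "press ENTER"],
    "expected": "main menu displayed",
    "programs": ["LOGIN01"],
    "actual": None,
    "verdict": None,
}

record_result(test_case, "main menu displayed")
```

Keeping the expected output alongside the steps means the pass/fail decision can be made mechanically once the test has been executed.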
3.3.3 TEST INCIDENT REPORT
3.3.3.1 Purpose of Document
To document any event that occurs during the testing process which requires
investigation. The report is to be issued to the designers/programmers for the
errors found in the testing process.
3.3.3.2 Outline of Document
The test incident report should provide the following information:
(a) Test-Incident-report Identifier
Specify the unique identifier assigned to the test incident report.
(b) Summary
Summarize the incident. Identify the test items involved, indicating their
version/revision level.
(c) Incident Description
Provide a description of the incident. This description should include the following
items:
(i) Inputs
(ii) Expected results
(iii) Anomalies
(iv) Date and time
(v) Procedure step
(vi) Environment
(vii) Testers and Observers
(d) Impact
If known, indicate what impact this incident will have on test plan and test
procedure specification.
(e) Results of Investigation
(i) Classification of Incident
- Design / Program error
- Error related to testing environment
- Others
(ii) Action taken
- Design / Program changes.
- Testing Environment Changes
- No action taken.
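The outline above maps naturally onto a structured record. The following sketch is one possible representation (the class name, field names, and sample values are invented for illustration):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestIncidentReport:
    """Structured form of the outline above; all names are illustrative."""
    identifier: str            # (a) unique identifier
    summary: str               # (b) summary of the incident
    inputs: str                # (c)(i) inputs
    expected: str              # (c)(ii) expected results
    anomaly: str               # (c)(iii) anomalies observed
    raised_at: datetime = field(default_factory=datetime.now)
    classification: str = "other"      # (e)(i) e.g. design/program error
    action: str = "no action taken"    # (e)(ii) action taken

# An invented incident, as a tester might record it.
report = TestIncidentReport(
    identifier="TIR-042",
    summary="balance enquiry shows stale figure",
    inputs="account 1234, ENTER key",
    expected="current balance displayed",
    anomaly="previous day's balance displayed",
)
```

A fixed record shape makes incident reports easy to collect, count, and classify when the results of investigation come in.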
3.3.4 TEST PROGRESS REPORT
3.3.4.1 Purpose of Document
In order that progress of the testing process be controlled properly, a
periodic test progress report should be prepared by the test group and
submitted to the Test Control Sub-Committee / Systems Manager. The frequency of
the report is suggested to be weekly or bi-weekly.
3.3.4.2 Terminology
No. of test cases specified : the total number of test cases that have been
specified.
No. of test cases tried at least once : the number of the specified cases that
have been put into test execution at least one time. (The quotient of this
term and the previous term gives the percentage of the specified test cases
that have been executed at least once. More important is the complement of this
percentage, which gives the percentage of the specified test cases against
which no test run has ever been executed so far.)
No. of test cases completed : the number of the specified test cases that have been
executed and generated the expected output.
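The percentages described above can be computed directly from the three counts. A small sketch (the example figures are invented):

```python
def progress_summary(specified, tried, completed):
    """Compute the percentages defined in the terminology above.

    specified -- No. of test cases specified
    tried     -- No. of test cases tried at least once
    completed -- No. of test cases completed (expected output produced)
    """
    pct_tried = 100.0 * tried / specified
    return {
        "tried at least once (%)": pct_tried,
        "never executed (%)": 100.0 - pct_tried,
        "completed (%)": 100.0 * completed / specified,
    }

# Invented figures for one reporting period.
summary = progress_summary(specified=200, tried=150, completed=120)
```

With 200 cases specified, 150 tried and 120 completed, the summary shows 25% of cases have never been executed, which is the figure the terminology singles out as most important.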
3.3.4.3 Outline of Document
(a) Progress Control Summary.
Notes: Testing effort should include ALL the effort directly related to the testing
activities, but excluding the administration overhead.
(b) Highlights of Outstanding Items & Reasons
To bring to management's attention the problems encountered / foreseen and,
where possible, the planned way of solving them.
(c) Test Cases Results (Optional)
Refer to Section 8.3.2 (d).
3.3.5 TEST SUMMARY REPORT
3.3.5.1 Purpose of Document
To summarize the results of the testing activities for documentation purpose and to
provide information for future testing planning references.
3.3.5.2 Outline of Document
(a) Test cases Results
(b) Remarks
3.4 TEST PLANNING & CONTROL
3.4.1 Progress Control
To monitor day-to-day progress of the testing activities through the use of Test
Progress Reports.
3.4.2 Quality Control / Assurance
Testing documentation is to be compiled by the Test Group, cross-checked by the
Quality Assurance Staff, and endorsed by the Test Control Sub-committee /
Project Committee.
3.4.3 Resource Estimation
Project teams are to submit testing metrics information for system development
and maintenance to the Metrics Team, if one exists. It is advisable for the
latter to record the metrics information in a centralized database for future
test planning reference.
4. GENERAL CONCEPTS OF TESTING
4.1 TESTING OBJECTIVES
Testing is the process of executing program(s) with the intent of finding errors,
rather than (a misconception) of showing the correct functioning of the
program(s). The distinction may sound like a matter of semantics, but it has
been observed to have a profound effect on testing success. The difference
actually lies in the different psychological effects caused by the different
objectives: if our goal is
to demonstrate that a program has no errors, then we will tend to select tests that
have a low probability of causing the program to fail. On the other hand, if our
goal is to demonstrate that a program has errors, our test data will have a higher
probability of finding errors.
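To make the psychological point concrete, consider an invented routine with a deliberate boundary bug: a test chosen to "show it works" passes comfortably, while an error-seeking test aimed at the boundary stated in the specification exposes the defect (the discount rule, values, and bug are all invented):

```python
def discount(amount):
    """Invented billing rule: 10% off orders of 100 or more (integer amounts).

    The comparison below is deliberately buggy: the specification says
    "100 or more", but the code tests for strictly greater than 100.
    """
    if amount > 100:
        return amount - amount // 10
    return amount

# Confirmation-minded test: one comfortable value; the bug stays hidden.
demonstration_passes = (discount(200) == 180)

# Error-seeking test: probe the boundary named in the specification.
# With the bug above, discount(100) returns 100 instead of 90.
boundary_ok = (discount(100) == 90)
```

The demonstration test passes and the boundary test fails, which is exactly the asymmetry the paragraph above describes: only data chosen to break the program reveals the error.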
Specifically, testing should bear the following objectives:
(a) To reveal design errors;
(b) To reveal logic errors;
(c) To reveal performance bottlenecks;
(d) To reveal security loopholes; and
(e) To reveal operational deficiencies.
All these objectives and the corresponding actions contribute to increased
quality and reliability of the application software.
4.2 TESTING STRATEGY
There are two strategies for testing software, namely White-Box Testing and
Black-Box Testing.
White-Box Testing, also known as Code Testing, focuses on the independent
logical internals of the software to assure that all code statements and logical paths
have been tested.
Black-Box Testing, also known as Specification Testing, focuses on the
functional externals to assure that defined input will produce actual results
that agree with the required results documented in the specifications.
Both strategies should be used, according to the level of testing.
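A small illustration of the two strategies against the same toy function (the function and its cases are invented; the white-box cases are derived from the code's branches, the black-box cases only from the stated specification):

```python
def classify_triangle(a, b, c):
    """Toy function under test (invented; not from this report)."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-Box (Code) Testing: one case per logical path in the source,
# chosen by reading the code itself.
white_box = [
    classify_triangle(2, 2, 2),   # exercises the first branch
    classify_triangle(2, 2, 3),   # exercises the second branch
    classify_triangle(2, 3, 4),   # exercises the fall-through path
]

# Black-Box (Specification) Testing: cases derived only from the stated
# input/output behaviour, compared against the required results.
black_box = [
    (classify_triangle(5, 5, 5), "equilateral"),
    (classify_triangle(3, 4, 5), "scalene"),
]
all_match = all(actual == expected for actual, expected in black_box)
```

The same function is exercised either way; what differs is where the test data comes from, which is why the two strategies suit different levels of testing.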
4.3 LEVELS OF TESTING
There are 5 levels of Testing, each of which carries a specific functional purpose,
to be carried out in chronological order.
(a) Unit Testing (Strategy Applied: White Box Test)
- Testing of the program modules in isolation, with the objective to find
discrepancies between the programs and the program specifications.
(b) Link Testing (Strategy Applied: White Box Test)
- Testing of the linkages between tested program modules, with the objective to
find discrepancies between the programs and the system specifications.
(c) Function Testing (Strategy Applied: Black Box Test)
- Testing of the integrated software on a function by function basis, with the
objective to find discrepancies between the programs and the function
specifications.
(d) Systems Testing (Strategy Applied: Black Box Test)
- Testing of the integrated software, with the objective to find discrepancies
between the programs and the original objectives with regard to the operating
environment of the system (e.g. Recovery, Security, Performance, Storage,
etc.).
(e) Acceptance Testing (Strategy Applied: Black Box Test)
- Testing of the integrated software by the end users (or their proxy), with
the objective to find discrepancies between the programs and the end user
needs.
4.3.1. UNIT TESTING
4.3.1.1 Scope of Testing
Unit testing (or Module Testing) is the process of testing the individual
subprograms, subroutines, or procedures in a program. The goal here is to find
discrepancy between the programs and the program specifications.
4.3.1.2 Activities, Documentation and Parties Involved
(a) Designers to include test guidelines (i.e. areas to be tested) in the design and
program specifications.
(b) Programmers to define test cases during program development time.
(c) Programmers to perform Unit Testing after coding is completed.
(d) Programmers to include the final results of Unit Testing in the program
documentation.
4.3.1.3 Practical Guidelines
(a) Testing should first be done with correct data, then with flawed data.
(b) A program unit would only be considered as completed after the program
documentation (with testing results) of the program unit has been submitted to
the project leader/SA.
(c) If there are critical sections of the program, design the testing sequence
such that these sections are tested as early as possible. A 'critical section'
might be a complex module, a module with a new algorithm, or an I/O module.
(d) There are three types of test that Unit Testing should satisfy:
(i) Functional Test
Execute the program unit with normal and abnormal input values for which the
expected results are known.
(ii) Performance Test (Optional)
Determine the amount of execution time spent on various parts of the program
unit, response time and device utilization by the program unit.
(iii) Stress Test (Optional)
Stress test is designed to test the program unit with abnormal situations. A great
deal can be learned about the strengths and limitations of a program unit by
executing it in a manner that demands resources in abnormal quantity, frequency,
or volume.
(e) Please refer to Appendix A for a checklist on Unit Testing.
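A minimal sketch of a Functional Test in the sense of (d)(i), written here with Python's standard unittest framework (the unit under test, parse_age, is invented): the unit is exercised with both normal and abnormal input values whose expected results are known, and flawed data follows correct data per guideline (a).

```python
import unittest

def parse_age(text):
    """Unit under test (invented example): convert a form field to an age."""
    value = int(text)                  # non-numeric input raises ValueError
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

class ParseAgeFunctionalTest(unittest.TestCase):
    """Functional test: normal and abnormal inputs with known expected results."""

    def test_normal_input(self):
        self.assertEqual(parse_age("42"), 42)

    def test_abnormal_inputs(self):
        # Flawed data with known expected behaviour (an error must be raised).
        for bad in ("-1", "999", "forty"):
            with self.assertRaises(ValueError):
                parse_age(bad)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseAgeFunctionalTest)
suite_ok = unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

The final test results, once recorded, belong in the program documentation as required by activity (d) above.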
4.3.2 LINK TESTING
4.3.2.1 Scope of Testing
Link Testing is the process of testing the linkages between program modules as
against the system specifications. The goal here is to find errors associated with
interfacing. As a by-product of the testing process, the software modules would be
integrated together.
It is worth noting that this level of testing is sometimes referred to as “Integration
Testing”, which is understood to mean that the testing process would end up with
the software modules in integration. However after some careful consideration, the
term was abandoned, as it would cause some confusion over the term “System
Integration”, which means integration of the automated and manual operations of
the whole system.
4.3.2.2 Activities, Documentation and Parties Involved
(a) Test Group to prepare a Link Testing test plan.
(b) Test Group to prepare a Link Testing test specification before testing commences.
(c) Test Group, with the aid of the designers/programmers, to set up the testing environment.
(d) Test Group to perform Link Testing; and, upon faults found, issue Test
Incident Reports to the designers/programmers, who would fix up the liable
errors.
(e) Test Group to report progress of Link Testing through periodic submission of
the Link Testing Progress Report.
4.3.2.3 Practical Guidelines
(a) Both control and data interface between the programs must be tested.
(b) Both Top-down and Bottom-up approaches can be applied. Top-down
integration is an incremental approach to the assembly of software structure.
Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (‘main program’). Modules subordinate
(and ultimately subordinate) to the main control module are incorporated into the
structure in either a depth-first or a breadth-first manner.
Bottom-up integration, as its name implies, begins assembly and testing with
modules at the lowest levels in the software structure. Because modules are
integrated from the bottom up, processing required for modules subordinate to a
given level is always available, and the need for stubs (i.e. dummy modules) is
eliminated.
(c) As a result of the testing, integrated software should be produced.
(d) Please refer to Appendix B for a checklist on Link Testing.
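The role of stubs in top-down integration can be sketched as follows (the module names and figures are invented; a stub is the dummy module mentioned in (b)):

```python
def tax_stub(amount):
    """Stub: stands in for the unfinished subordinate tax module."""
    return 10.0                      # canned value, no real logic

def total_with_tax(amount, tax_module):
    """Main control module: depends on a subordinate tax module."""
    return amount + tax_module(amount)

# Top-down: link-test the main control module before the real
# subordinate module exists, by passing in the stub.
stubbed_total = total_with_tax(100.0, tax_stub)

# Bottom-up: the real subordinate module is built and tested first,
# so no stub is needed when it is incorporated.
def real_tax(amount):
    return amount / 4

real_total = total_with_tax(100.0, real_tax)
```

The same control-module logic is exercised in both directions; the stub simply lets the top-down order proceed before the lower-level module is ready.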
4.3.3 FUNCTION TESTING
4.3.3.1 Scope of Testing
Function Testing is the process of testing the integrated software on a function-by-
function basis as against the function specifications. The goal here is to find
discrepancy between the programs and the functional requirements.
4.3.3.2 Activities, Documentation and Parties Involved
(a) Test Group to prepare Function Testing test plan, to be endorsed by the Project
Committee via the Test Control Sub-Committee, before testing commences.
(b) Test Group to prepare a Function Testing test specification before testing
commences.
(c) Test Group, with the aid of the designers/programmers, to set up the testing
environment.
(d) Test Group (with participation by user representatives) to perform Function
Testing; and, upon faults found, issue test incident reports to the
designers/programmers, who fix up the liable errors.
(e) Test Group to report progress of Function Testing through periodic submission
of the Function Testing Progress Report.
4.3.3.3 Practical Guidelines
(a) It is useful to involve some user representatives in this level of testing, in order
to give them familiarity with the system prior to Acceptance test and to highlight
differences between users’ and developers’ interpretation of the specifications.
However, degree of user involvement may differ from project to project, and even
from department to department, all depending on the actual situation.
(b) User involvement, if applicable, could range from testing data preparation to
staging out of the Function Testing.
(c) It is useful to keep track of which functions have exhibited the greatest number
of errors; this information is valuable because it tells us that these functions
probably still contain some hidden, undetected errors.
(d) Please refer to Appendix C for a checklist on Function Testing.
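Guideline (c) can be supported with a simple tally over the test incident log. A sketch (the incident data and function names are invented):

```python
from collections import Counter

# Invented test incident log: (function exhibiting the error, incident id).
incident_log = [
    ("transfer", "TIR-01"), ("transfer", "TIR-04"), ("transfer", "TIR-09"),
    ("enquiry", "TIR-02"),
    ("login", "TIR-03"), ("login", "TIR-07"),
]

# Functions with the greatest number of recorded errors probably still
# contain hidden, undetected errors; rank them to focus further testing.
errors_per_function = Counter(fn for fn, _ in incident_log)
most_suspect, error_count = errors_per_function.most_common(1)[0]
```

Ranking functions by recorded errors turns the guideline into a concrete input for planning the remaining Function Testing effort.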
4.3.4 SYSTEM TESTING
4.3.4.1 Scope of Testing
System testing is the process of testing the integrated software with regard to the
operating environment of the system. (i.e. Recovery, Security, Performance,
Storage, etc.)
It may be worthwhile to note that the term has been used with different
meanings in different environments. In its widest definition, especially for
small-scale projects, it also covers the scope of Link Testing and Function
Testing.
For small-scale projects, which combine Link Testing, Function Testing and
System Testing in one test plan and one test specification, it is crucial that
the test specification include distinct sets of test cases for each of these 3
levels of testing.
4.3.4.2 Activities, Documentation and Parties Involved
(a) Test group to prepare a System Testing test plan, to be endorsed by the Project
Committee via the Test Control Sub-Committee, before testing commences.
(b) Test group to prepare a System Testing test specification before testing
commences.
(c) Test group, with the aid of the designers/programmers, to set up the testing
environment.
(d) Test group (with the participation of computer operators and user representatives) to perform System Testing; and upon finding faults, issue test incident reports to the designers/programmers, who fix the errors.
(e) Test group to report progress of the System Testing through periodic
submission of the System Testing Progress Report.
4.3.4.3 Practical Guidelines
(a) Eight types of System Tests are discussed below. It is not claimed that all 8 types are mandatory for every application system, nor are they meant to be an exhaustive list. To avoid overlooking anything, however, all 8 types should be considered when designing test cases.
(i) Volume Testing
Volume testing subjects the system to heavy volumes of data; the attempt is to show that the system cannot handle the volume of data specified in its objectives. Since volume testing is obviously expensive, in terms of both machine and people time, one must not go overboard; however, every system should be exposed to at least a few volume tests.
(ii) Stress Testing
Stress testing involves subjecting the program to heavy loads or stress. A heavy stress is a peak volume of data encountered over a short span of time. Although some stress tests may represent situations that will never occur in operational use, this does not imply that such tests are not useful. If errors are detected by these 'impossible' conditions, the test is valuable, because the same errors might also occur in realistic, less stressful situations.
(iii) Performance Testing
Many programs have specific performance or efficiency objectives, such as
response times and throughput rates under certain workload and configuration
conditions. Performance testing should attempt to show that the system does not
satisfy its performance objectives.
(iv) Recovery Testing
If processing must continue during periods in which the application system is not
operational, then those recovery processing procedures/contingent actions should
be tested during the System test. In addition, the users of the system should be involved in a complete recovery test, so that not only the application system but also the procedures for performing the manual aspects of recovery are tested.
(v) Security Testing
The adequacy of the security procedures should be tested by attempting to violate those procedures. For example, testers should attempt to access or modify data while posing as an individual not authorized to access or modify that data.
(vi) Procedure Testing
Computer systems do not consist only of computer processes; they may also involve procedures performed by people. Any prescribed human procedures, such as
procedures to be followed by the system operator, data-base administrator, or
terminal user, should be tested during the System test.
(vii) Regression Testing
Regression testing is the verification that what is being installed does not affect any already-installed portion of the application, or other applications interfaced with the new application.
(viii) Operational Testing
During the System test, testing should be conducted by the normal operations staff.
It is only through having normal operation personnel conduct the test that the
completeness of operator instructions and the ease with which the system can be
operated can be properly evaluated. This testing is optional, and should be
conducted only when the environment is available.
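Several of these tests can be partly mechanized. As a minimal illustration of performance testing, the sketch below times a workload against a response-time budget; the `lookup` function and the 50 ms budget are invented for the example, not taken from any real system:

```python
import time

def within_response_budget(operation, budget_seconds, samples=100):
    """Run the operation repeatedly and check that the worst
    observed response time stays within the stated budget."""
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        operation()
        worst = max(worst, time.perf_counter() - start)
    return worst <= budget_seconds

# Hypothetical workload standing in for a real transaction.
def lookup():
    sum(range(1000))

print(within_response_budget(lookup, budget_seconds=0.05))
```

In keeping with the adversarial view taken in the text, a real performance test would try to find workloads for which this check fails, not merely confirm that it passes.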
(b) It is understood that in real situations, possibly for environmental reasons, some of the tests (e.g. the Procedure test) may not be carried out at this stage and have to be delayed to later stages. There is no objection to such a delay, provided that the reasons are documented clearly in the Test Summary Report and the tests are carried out once the constraints are removed.
(c) Please refer to Appendix D for a checklist on Systems Testing.
4.3.5 ACCEPTANCE TESTING
4.3.5.1 Scope of Testing
Acceptance Testing is the process of comparing the application system to its initial requirements and to the current needs of its end users. The goal is to try to demonstrate that the software end product is not acceptable to its users.
4.3.5.2 Activities, Documentation and Parties Involved
(a) User representatives to prepare an Acceptance Testing test plan, which is to be
endorsed by the Project Committee via the Test Control Sub-Committee.
(b) User representatives to prepare an Acceptance Testing test specification, which
is to be endorsed by the Project Committee via the Test Control Sub-Committee.
(c) User representatives, with the aid of officers, to set up the testing environment.
(d) User representatives to perform Acceptance Testing; and upon finding faults, issue test incident reports to the designers/programmers, who fix the errors.
(e) User representatives to report progress of the Acceptance Testing through
periodic submission of Acceptance Testing Progress Report.
(f) Systems Manager to keep the overall Test Summary Report as documentation
proof.
4.3.5.3 Practical Guidelines
(a) There are three approaches for Acceptance Testing, namely,
(i) A planned comprehensive test
This uses artificial data and simulated operational procedures. It usually accompanies the Big Bang implementation approach, but can also serve as a prerequisite step for the other approaches.
(ii) Parallel run
This uses live data and is normally used when a comparison between the existing system and the new system is required. It requires duplicated resources to operate both systems.
(iii) Pilot run
This uses live data and is normally used when the user is not certain about the acceptance of the system by its end-users and/or the public.
Users are responsible for selecting the approach(es) most applicable to their operating environment.
(b) Precautions for liaison officers
(i) Communicate clearly to the users what their commitments on the testing are
(ii) Some users may be physically involved for the first time; sufficient presentation, introduction, and training will therefore be very important
(iii) The development team must be available to resolve problems if required
(iv) If possible, the future maintenance team should be identified
(v) Ensure all tasks are completed before handover to the maintenance team
(c) Precautions for users
(i) Testing staff should be freed from their routine activities
(ii) Their commitment should be properly authorized
(d) In the case of contracted-out projects, "user representatives" means the user department. Just as in-house project teams assist user departments in preparing the test plan and specification, the vendor's assistance in these areas can be asked for.
(e) Please refer to Appendix E for a checklist on User Acceptance Testing.
4.4 GENERAL TESTING PRINCIPLES
The following points should be noted when conducting testing:
(a) As far as possible, testing should be performed by a group of people (referred
to in this document as Test Group) different from those performing design and
coding of the same system. Please refer to Section 9.2 for its description.
(b) Test cases must be written for invalid and unexpected, as well as valid and
expected input conditions. A good test case is one that has a high probability of
detecting undiscovered errors. A successful test case is one that detects an
undiscovered error.
(c) A necessary part of a test case is a definition of the expected outputs or results.
(d) Do not plan the testing effort on the assumption that no errors will be found.
(e) The probability of the existence of more errors in a section of a program is
proportional to the number of errors already found in that section.
(f) Testing libraries should be set up so that Regression testing can be performed at system maintenance and enhancement time.
(g) The later in the development life cycle a fault is discovered, the higher the cost
of correction.
(h) Successful testing relies on a complete and unambiguous specification.
Two questions now arise: when should testing start, and when should it stop?
4.5. TESTING START PROCESS
When Testing should start:
Testing early in the life cycle reduces errors. Test deliverables are associated with every phase of development. The goal of the software tester is to find bugs, find them as early as possible, and make sure they are fixed.
The number one cause of Software bugs is the Specification. There are several
reasons specifications are the largest bug producer.
In many instances a spec simply isn't written. Other reasons may be that the spec isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, bugs will be created.
The next largest source of bugs is the design. That is where the programmers lay the plan for their software; compare it to an architect creating the blueprint for a building. Bugs occur here for the same reasons they occur in the specification: it's rushed, changed, or not well communicated.
Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, "Oh, so that's what it's supposed to do. If someone had told me that I wouldn't have written the code that way."
The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can be traced to testing errors.
Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A
bug found and fixed during the early stages when the specification is being written
might cost next to nothing, or 10 cents in our example. The same bug, if not found
until the software is coded and tested, might cost $1 to $10. If a customer finds it,
the cost would easily top $100.
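The tenfold escalation described above amounts to a simple formula, cost = base * 10^stage. A small sketch, using the 10-cent starting figure from the example:

```python
def repair_cost(stage, base=0.10):
    """Cost to fix a bug found at a given stage, rising tenfold per
    stage: 0 = specification, 1 = design, 2 = coding/testing,
    3 = found by a customer in the field."""
    return base * 10 ** stage

for name, stage in [("specification", 0), ("design", 1),
                    ("coding/testing", 2), ("field", 3)]:
    print(f"{name}: ${repair_cost(stage):.2f}")
```

This reproduces the progression in the text: roughly 10 cents at specification time, $1 to $10 once coded and tested, and over $100 if a customer finds it.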
4.6. TESTING STOP PROCESS
When to Stop Testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines)
Test cases completed with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
The rate at which new bugs are being found is too small
Beta or Alpha testing period ends
The risk in the project is under the acceptable limit
Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with a given amount of testing done. The risk can be measured by risk analysis, but for short-duration / low-budget / low-resource projects, risk can be gauged simply by:
Measuring Test Coverage.
Number of test cycles.
Number of high priority bugs.
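These three measures can be combined into a rough stop/continue check. The sketch below is illustrative only; the 90% coverage and three-cycle thresholds are assumptions, not recommendations:

```python
def can_stop_testing(coverage, cycles_completed, open_high_priority_bugs,
                     min_coverage=0.90, min_cycles=3):
    """Crude stop-testing decision based on the three measures in
    the text: test coverage, test cycles run, high-priority bugs."""
    return (coverage >= min_coverage
            and cycles_completed >= min_cycles
            and open_high_priority_bugs == 0)

print(can_stop_testing(0.93, 4, 0))   # thresholds met, no blocking bugs
print(can_stop_testing(0.93, 4, 2))   # high-priority bugs still open
```

In practice the thresholds themselves would come from the risk analysis the text mentions, and management would decide whether the residual risk is acceptable.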
4.7. SOFTWARE TESTING-REQUIREMENTS TRACEABILITY MATRIX
What is the need for Requirements Traceability Matrix?
An organization's need for automation leads it to commission custom-built software. The client who ordered the product specifies his requirements to the development team, and the process of software development gets started. In addition to the requirements specified by the client, the development team may also propose various value-added suggestions that could be added to the software. But keeping track of all the requirements specified in the requirement document, and checking whether all of them have been met by the end product, is a cumbersome and laborious process. If high priority is not given to this aspect of the software development cycle, it may result in a lot of confusion and arguments between the development team and the client once the product is built.
The remedy for this problem is the Traceability Matrix.
What is Traceability Matrix?
Requirements tracing is the process of documenting the links between the user
requirements for the system you're building and the work products developed to
implement and verify those requirements. These work products include Software
requirements, design specifications, Software code, test plans and other artifacts of
the systems development process. Requirements tracing helps the project team to
understand which parts of the design and code implement the user's requirements,
and which tests are necessary to verify that the user's requirements have been
implemented correctly.
Requirements Traceability Matrix Document is the output of Requirements
Management phase of SDLC.
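In its simplest form, a traceability matrix is just a mapping from requirement IDs to the test cases that verify them, which makes uncovered requirements easy to spot. A minimal sketch with hypothetical IDs:

```python
# Hypothetical requirement and test-case IDs, for illustration only.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
matrix = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
}

def uncovered(requirements, matrix):
    """Requirements with no test case tracing back to them."""
    return [r for r in requirements if not matrix.get(r)]

print(uncovered(requirements, matrix))  # ['REQ-3']
```

Here REQ-3 has no associated test case, which is exactly the kind of gap the matrix is meant to surface before the product is delivered.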
For this project a firm understanding of Regression testing is required.
4.8. REGRESSION TESTING
Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the way that was previously planned. Typically, regression bugs occur as an unintended consequence of program changes.
Common methods of regression testing include re-running previously run tests and
checking whether previously fixed faults have re-emerged.
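Re-running previous tests and comparing against recorded results can be sketched as follows; the `price_with_tax` function and its baseline value are hypothetical examples:

```python
def regression_check(test_fns, baseline):
    """Re-run previously passing tests and report any whose current
    result deviates from the recorded (expected) baseline result."""
    regressions = []
    for name, fn in test_fns.items():
        if fn() != baseline[name]:
            regressions.append(name)
    return regressions

# Hypothetical function under test.
def price_with_tax(amount):
    return round(amount * 1.07, 2)

tests = {"tax_on_100": lambda: price_with_tax(100)}
baseline = {"tax_on_100": 107.0}  # result recorded before the change
print(regression_check(tests, baseline))  # [] means no regression
```

A stored baseline like this is what the testing library mentioned in principle (f) above would provide at maintenance and enhancement time.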
4.8.1. USAGE:
All aspects of the system remain functional after changes are made.
A change in one segment does not change the functionality of other segments.
4.8.2. OBJECTIVE:
Determine that system documents remain current
Determine that system test data and test conditions remain current
Determine that previously tested system functions perform properly and are not affected by changes made in other segments of the application system
4.8.3. HOW TO USE
Test cases that were used previously for the already-tested segment are re-run, to ensure that the results of the segment tested now match the results of the same segment tested earlier.
Test automation is needed to carry out the test transactions (test condition execution); otherwise the process is very time consuming and tedious.
For this kind of testing, cost/benefit should be carefully evaluated, else the effort spent on testing will be high and the payback minimal.
4.8.4. WHEN TO USE
When there is a high risk that new changes may affect unchanged areas of the application system.
In the development process: regression testing should be carried out after the pre-determined changes are incorporated in the application system.
In the maintenance phase: regression testing should be carried out if there is a high risk that loss may occur when changes are made to the system.
4.8.5. EXAMPLE
Re-running of previously conducted tests to ensure that the unchanged portion of the system functions properly.
Reviewing previously prepared system documents (manuals) to ensure that they are not affected after changes are made to the application system.
4.8.6. DISADVANTAGE
Time consuming and tedious if test automation is not done
For this project a firm understanding of Black-Box testing is required.
4.9. BLACK BOX TESTING
4.9 .1.Introduction
Black box testing attempts to derive sets of inputs that will fully exercise all the
functional requirements of a system. It is not an alternative to white box testing.
Black Box Testing is testing without knowledge of the internal workings of the
item being tested. For example, when black box testing is applied to software
engineering, the tester would only know the "legal" inputs and what the expected
outputs should be, but not how the program actually arrives at those outputs. It is
because of this that black box testing can be considered testing with respect to the
specifications; no other knowledge of the program is necessary. For this reason,
the tester and the programmer can be independent of one another, avoiding
programmer bias toward his own work. For this testing, test groups are often used: "Test groups are sometimes called professional idiots...people who are good at designing incorrect data." [1] Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written. The opposite of this
would be glass box testing, where test data are derived from direct examination of
the code to be tested. For glass box testing, the test cases cannot be determined
until the code has actually been written. Both of these testing techniques have
advantages and disadvantages, but when combined, they help to ensure thorough
testing of the product.
Synonyms for black box include: behavioral, functional, opaque-box, and closed-
box.
This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black
box testing tends to be applied during later stages. Test cases should be derived
which
1. Reduce the number of additional test cases that must be designed to achieve
reasonable testing, and
2. Tell us something about the presence or absence of classes of errors, rather than
an error associated only with the specific test at hand.
4.9.2. Equivalence Partitioning
This method divides the input domain of a program into classes of data from
which test cases can be derived. Equivalence partitioning strives to define a test
case that uncovers classes of errors and thereby reduces the number of test cases
needed. It is based on an evaluation of equivalence classes for an input condition.
An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence
class are defined.
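Guideline 1 can be made concrete. For an assumed input range of 1 to 100, one valid and two invalid classes each yield a representative test value:

```python
def partition_range(low, high):
    """Guideline 1: a range gives one valid and two invalid
    equivalence classes; return one representative value per class."""
    return {
        "valid (low..high)": (low + high) // 2,
        "invalid (below range)": low - 1,
        "invalid (above range)": high + 1,
    }

print(partition_range(1, 100))
```

One test case per class is usually enough, since by definition every value in a class should exercise the program in the same way.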
4.9.3. Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a
class. Rather than focusing on input conditions solely, BVA derives test cases
from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b
and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be
developed to exercise the minimum and maximum numbers and values just above
and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be
designed to exercise the data structure at its boundary.
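Guideline 1 can likewise be sketched for integer inputs: the bounds themselves plus the values just below and just above each bound:

```python
def boundary_values(a, b):
    """Guideline 1 of BVA for an integer range [a, b]: the bounds
    themselves and the values just below and just above each bound."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

The values 0 and 101 fall in the invalid equivalence classes, so this set complements the partitioning example above rather than duplicating it.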
4.9.4. Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of
logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an
identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
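Step 4 (decision table rules to test cases) can be sketched for a tiny hypothetical module with two causes (valid user id, valid password) and one effect (access granted); here the decision table is enumerated directly and each rule becomes one test case:

```python
from itertools import product

def decision_table_to_test_cases(causes):
    """Enumerate every combination of cause truth values; the assumed
    rule is that the effect holds only when all causes are true."""
    cases = []
    for values in product([True, False], repeat=len(causes)):
        inputs = dict(zip(causes, values))
        expected = all(values)  # effect: grant access only if all causes hold
        cases.append((inputs, expected))
    return cases

# C1 = valid user id, C2 = valid password (hypothetical causes).
for inputs, expected in decision_table_to_test_cases(["C1", "C2"]):
    print(inputs, "->", expected)
```

Real cause-effect graphs use richer logic than a single AND, but the principle is the same: each decision table rule maps to one concrete test case.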
4.9.5. Advantages of Black Box Testing
More effective on larger units of code than glass box testing
Tester needs no knowledge of implementation, including specific
programming languages
Tester and programmer are independent of each other
Tests are done from a user's point of view
Will help to expose any ambiguities or inconsistencies in the specifications
Test cases can be designed as soon as the specifications are complete
4.9.6. Disadvantages of Black Box Testing
Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
Without clear and concise specifications, test cases are hard to design
There may be unnecessary repetition of test inputs if the tester is not
informed of test cases the programmer has already tried
May leave many program paths untested
Cannot be directed toward specific segments of code, which may be very complex (and therefore more error prone)
Most testing-related research has been directed toward glass box testing
4.9.7. Testing Strategies/Techniques
Black box testing should make use of randomly generated inputs (only
a test range should be specified by the tester), to eliminate any
guesswork by the tester as to the methods of the function
Data outside of the specified input range should be tested to check the
robustness of the program
Boundary cases should be tested (top and bottom of specified range) to
make sure the highest and lowest allowable inputs produce proper
output
The number zero should be tested when numerical data is to be input
Stress testing should be performed (try to overload the program with
inputs to see where it reaches its maximum capacity), especially with
real time systems
Crash testing should be performed to see what it takes to bring the
system down
Test monitoring tools should be used whenever possible to track which
tests have already been performed and the outputs of these tests to
avoid repetition and to aid in the software maintenance
Other functional testing techniques include: transaction testing, syntax
testing, domain testing, logic testing, and state testing.
Finite state machine models can be used as a guide to design functional
tests
According to Beizer [2], the following is a general order in which tests should be designed:
1. Clean tests against requirements.
2. Additional structural tests for branch coverage, as needed.
3. Additional tests for data-flow coverage as needed.
4. Domain tests not covered by the above.
5. Special techniques as appropriate--syntax, loop, state, etc.
6. Any dirty tests not covered by the above.
Black box testing is also known as functional testing: a software testing technique whereby the internal workings of the item being tested are not known to the tester. For
example, in a black box test on a software design the tester only knows the inputs
and what the expected outcomes should be and not how the program arrives at
those outputs. The tester does not ever examine the programming code and does
not need any further knowledge of the program other than its specifications.
The advantages of this type of testing include:
a) The test is unbiased because the designer and the tester are independent of
each other.
b) The tester does not need knowledge of any specific programming
languages.
c) The test is done from the point of view of the user, not the designer.
d) Test cases can be designed as soon as the specifications are complete.
As a member of the testing team, my work comprised the following:
1. Analysis of provided document and application to get familiar with product
2. Preparing questionnaires for the developers
3. Attending formal meetings with the developers to understand their requirements, and to raise the testers' queries about the application, so that the best possible approach can be followed for testing.
4. Attending team meetings and follow instructions provided by Test Manager
and immediate supervisors.
At the ground level:
1. Analysis of document (System Requirement Specifications, User Manual)
2. Preparing Testing Scenarios
3. Preparing Test cases, Test Procedures and mapping them with Test Cases
4. Executing Test Cases and preparing Test Run Report.
5. Logging bugs with a defect tracking tool (Bugzilla, JIRA)
6. Regression Testing
7. Exhaustive testing.
5. STRATEGY OF TESTING
The Software Testing Process
[Figure: The software testing process. Design test cases -> prepare test data -> run program with test data -> compare results with test cases; the corresponding outputs are test cases, test data, test results and test reports.]
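The four steps of this process can be sketched as a small pipeline, where each prepared test case carries its input data and expected result; the `double` function is a hypothetical stand-in for the program under test:

```python
def run_test_process(program, cases):
    """Run the program against prepared test cases and compare
    the actual results with the expected ones, case by case."""
    report = []
    for data, expected in cases:
        actual = program(data)  # run program with test data
        report.append((data, "PASS" if actual == expected else "FAIL"))
    return report  # the test report

# Hypothetical program under test and two designed test cases.
double = lambda x: x * 2
print(run_test_process(double, [(2, 4), (3, 7)]))
```

The second case is deliberately given a wrong expectation to show a FAIL entry in the report; in practice a failing comparison triggers a test incident report as described earlier.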
5.1. SOFTWARE TESTING LIFE CYCLE
SOFTWARE TESTING LIFE CYCLE identifies what test activities to carry out
and when (what is the best time) to accomplish those test activities. Even though
testing differs between organizations, there is a testing life cycle.
Software Testing Life Cycle consists of six (generic) phases:
Test Planning,
Test Analysis,
Test Design,
Construction and verification,
Testing Cycles,
Final Testing and Implementation and
Post Implementation.
Software testing has its own life cycle that intersects with every stage of the SDLC. The basic requirement of the software testing life cycle is to control and manage software testing: manual, automated and performance.
5.1.1.TEST PLANNING
This is the phase where the Project Manager has to decide what needs to be tested and whether the budget is appropriate. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point.
Activities at this stage include preparation of a high-level test plan. According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan. Almost all of the activities done during this stage are included in the software test plan and revolve around it.
5.1.2. TEST ANALYSIS
Once the test plan is made and agreed upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC: do we need or plan to automate, and if so, when is the appropriate time to automate; and what specific documentation is needed for testing.
Proper and regular meetings should be held between the testing teams, project managers, development teams and business analysts to check progress. These give a fair idea of the movement of the project, ensure the completeness of the test plan created in the planning phase, and help refine the testing strategy created earlier. In this stage we start creating test case formats and the test cases themselves. We need to develop a functional validation matrix based on the business requirements, to ensure that all system requirements are covered by one or more test cases; identify which test cases to automate; and begin reviewing documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define the areas for stress and performance testing.
5.1.3. TEST DESIGN
Test plans and cases developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage, risk assessment criteria are developed. If automation is planned, the test cases to automate are selected and script writing for them begins. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.
5.1.4. CONSTRUCTION AND VERIFICATION
In this phase we have to complete all the test plans and test cases, finish scripting the automated test cases, and complete the stress and performance testing plans. We also have to support the development team in their unit testing phase, and bug reporting is done as bugs are found. Integration tests are performed and errors (if any) are reported.
5.1.5 TESTING CYCLES
In this phase we have to complete testing cycles until test cases are executed
without errors or a predefined condition is reached. Run test cases --> Report
Bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug
fixing --> retesting (test cycle 2, test cycle 3….).
5.1.6. FINAL TESTING AND IMPLEMENTATION
In this phase we execute the remaining stress and performance test cases; documentation for testing is completed and updated; and the various metrics for testing are provided and completed. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.
5.1.7. POST IMPLEMENTATION
In this phase, the testing process is evaluated and the lessons learnt from it are documented. An approach to prevent similar problems in future projects is identified, and plans to improve the processes are created. The recording of new errors and enhancements is an ongoing process. The test environment is cleaned up and the test machines are restored to their baselines in this stage.
Software Testing Life Cycle
Planning
  Activities: create high-level test plan
  Outcome: test plan, refined specification

Analysis
  Activities: create detailed test plan, functional validation matrix, test cases
  Outcome: revised test plan, functional validation matrix, test cases

Design
  Activities: test cases are revised; select which test cases to automate
  Outcome: revised test cases, test data sets, risk assessment sheet

Construction
  Activities: scripting of test cases to automate
  Outcome: test procedures/scripts, drivers, test results, bug reports

Testing cycles
  Activities: complete testing cycles
  Outcome: test results, bug reports

Final testing
  Activities: execute remaining stress and performance tests, complete documentation
  Outcome: test results and different metrics on test efforts

Post implementation
  Activities: evaluate testing processes
  Outcome: plan for improvement of the testing process
6. MEASURING SOFTWARE TESTING
Usually, quality is constrained to topics such as correctness, completeness and security, but can also include more technical requirements as described under the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of common software measures, often called "metrics", which
are used to measure the state of the software or the adequacy of the testing.
6.1 SOFTWARE QUALITY
This clause describes a quality model which explains the relationship between different approaches to quality. A specific implementation of this quality model is given in clauses 6 and 7.
6.2. SOFTWARE QUALITY CHARACTERISTICS
The quality model in this part of ISO/IEC 9126 categorises software quality
attributes into six characteristics (functionality, reliability, usability, efficiency,
maintainability and portability), which are further sub-divided into
subcharacteristics (Figure 3). The subcharacteristics can be measured by
internal or external metrics.
Figure 3 - Internal and external quality
Definitions are given for each quality characteristic and the subcharacteristics of
the software which influence the quality characteristic. For each characteristic and
subcharacteristic, the capability of the software is determined by a set of internal
attributes which can be measured. Examples of internal metrics are given in
ISO/IEC 9126-3. The characteristics and subcharacteristics can be measured
externally by the extent to which the capability is provided by the system
containing the software.
Examples of external metrics are given in ISO/IEC 9126-2.
NOTE Some of the characteristics in this part of ISO/IEC 9126 relate to
dependability. Dependability characteristics are defined for all types of systems in
IEC 50(191), and where a term in this part of ISO/IEC 9126 is also defined in IEC
50(191), the definition given is broadly compatible.
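To make the internal/external distinction concrete, here is a toy internal metric, computed from the source text alone rather than from the running system. The metric and all names are illustrative and are not drawn from ISO/IEC 9126-3:

```python
# A toy "internal metric" in the spirit of ISO/IEC 9126-3: comment
# density, measured on the source code itself rather than on the
# behaviour of the running system. The metric choice is illustrative.

def comment_density(source_lines):
    """Fraction of non-blank lines that are comments (Python syntax)."""
    non_blank = [ln for ln in source_lines if ln.strip()]
    comments = [ln for ln in non_blank if ln.lstrip().startswith("#")]
    return len(comments) / len(non_blank)

if __name__ == "__main__":
    sample = [
        "# add two numbers",
        "def add(a, b):",
        "    return a + b",
        "",
    ]
    print(comment_density(sample))  # 1 comment line out of 3 non-blank lines
```

An external metric, by contrast, would be measured on the system containing the software, for example observed failure rates or response times.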
6.2.1 FUNCTIONALITY
The capability of the software product to provide functions which meet stated
and implied needs when the software is used under specified conditions.
NOTE 1 This characteristic is concerned with what the software does to fulfill
needs, whereas the other characteristics are mainly concerned with when and how
it fulfils needs.
NOTE 2 For the stated and implied needs in this characteristic, the note to the
definition of quality applies (see B.21).
NOTE 3 For a system which is operated by a user, the combination of
functionality, reliability, usability and efficiency can be measured externally by
quality in use (see clause 7).
6.2.1.1 Suitability
The capability of the software product to provide an appropriate set of functions
for specified tasks and user objectives.
NOTE 1 Examples of appropriateness are task-oriented composition of functions
from constituent sub-functions, capacities of tables.
NOTE 2 Suitability corresponds to suitability for the task in ISO 9241-10
NOTE 3 Suitability also affects operability.
6.2.1.2 Accuracy
The capability of the software product to provide the right or agreed results
or effects.
NOTE This includes the needed degree of precision of calculated values.
6.2.1.3 Interoperability
The capability of the software product to interact with one or more specified
systems.
NOTE Interoperability is used in place of compatibility in order to avoid
possible ambiguity with replaceability (see 6.2.6.4).
6.2.1.4 Security
The capability of the software product to protect information and data so that
unauthorized persons or systems cannot read or modify them and authorized
persons or systems are not denied access to them.
[ISO/IEC 12207: 1995]
NOTE 1 This also applies to data in transmission.
NOTE 2 Safety is defined as a characteristic of quality in use, as it does not relate
to software alone, but to a whole system.
6.2.1.5 Compliance
The capability of the software product to adhere to standards, conventions or
regulations in laws and similar prescriptions.
6.2.2 RELIABILITY
The capability of the software product to maintain a specified level of
performance when used under specified conditions.
NOTE 1 Wear or ageing does not occur in software. Limitations in reliability are
due to faults in requirements, design, and implementation. Failures due to these
faults depend on the way the software product is used and the program options
selected rather than on elapsed time.
NOTE 2 The definition of reliability in ISO/IEC DIS 2382-14:1994 is "The
ability of a functional unit to perform a required function...". In this
document, functionality is only one of the characteristics of software quality.
Therefore, the definition of reliability has been broadened to "maintain a
specified level of performance..." instead of "...perform a required function".
6.2.2.1 Maturity
The capability of the software product to avoid failure as a result of faults in the
software.
6.2.2.2 Fault tolerance
The capability of the software product to maintain a specified level of performance
in cases of software faults or of infringement of its specified interface.
NOTE The specified level of performance may include fail safe capability.
6.2.2.3 Recoverability
The capability of the software product to re-establish a specified level of
performance and recover the data directly affected in the case of a failure.
NOTE 1 Following a failure, a software product will sometimes be down for a
certain period of time, the length of which is assessed by its recoverability.
NOTE 2 Availability is the capability of the software product to be in a state to
perform a required function at a given point in time, under stated conditions of use.
Externally, availability can be assessed by the proportion of total time during
which the software product is in an up state. Availability is therefore a
combination of maturity (which governs the frequency of failure), fault tolerance
and recoverability (which governs the length of down time following each failure).
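The availability described in NOTE 2 can be put into numbers: externally it is the proportion of observation time spent in an up state, and under common steady-state assumptions it reduces to the classical MTBF / (MTBF + MTTR) ratio. A minimal sketch, with invented figures:

```python
# Availability as the proportion of up time, per the external view above.
# MTBF = mean time between failures, MTTR = mean time to repair.
# The sample figures are invented for illustration.

def availability_from_log(up_seconds, total_seconds):
    """Proportion of observation time the product was in an up state."""
    return up_seconds / total_seconds

def steady_state_availability(mtbf_hours, mttr_hours):
    """Classical steady-state availability: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    print(availability_from_log(86040, 86400))    # ~0.9958 over one day
    print(steady_state_availability(199.0, 1.0))  # 0.995
```

Maturity lowers the failure count, while fault tolerance and recoverability shorten each down period; both effects raise these ratios.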
6.2.2.4 Compliance
The capability of the software product to adhere to standards, conventions or
regulations relating to reliability.
6.2.3 USABILITY
The capability of the software product to be understood, learned, used and
attractive to the user, when used under specified conditions.
NOTE 1 Some aspects of functionality, reliability and efficiency will also affect
usability, but for the purposes of ISO/IEC 9126 are not classified as usability.
NOTE 2 Users may include operators, end users and indirect users who are under
the influence of or dependent on the use of the software. Usability should address
all of the different user environments that the software may affect, which may
include preparation for usage and evaluation of results.
6.2.3.1 Understandability
The capability of the software product to enable the user to understand whether the
software is suitable, and how it can be used for particular tasks and conditions of
use.
NOTE This will depend on the documentation and initial impressions given by the
software.
6.2.3.2 Learnability
The capability of the software product to enable the user to learn its application.
NOTE The internal attributes correspond to suitability for learning as defined in
ISO 9241-10.
6.2.3.3 Operability
The capability of the software product to enable the user to operate and control it.
NOTE 1 Aspects of suitability, changeability, adaptability and installability may
affect operability.
NOTE 2 Operability corresponds to controllability, error tolerance and conformity
with user expectations as defined in ISO 9241-10.
NOTE 3 For a system which is operated by a user, the combination of
functionality, reliability, usability and efficiency can be measured externally by
quality in use.
6.2.3.4 Attractiveness
The capability of the software product to be attractive to the user.
NOTE This refers to attributes of the software intended to make the software more
attractive to the user, such as the use of colour and the nature of the graphical
design.
6.2.3.5 Compliance
The capability of the software product to adhere to standards, conventions, style
guides or regulations relating to usability.
6.2.4 EFFICIENCY
The capability of the software product to provide appropriate performance, relative
to the amount of resources used, under stated conditions.
NOTE 1 Resources may include other software products, the software and
hardware configuration of the system, and materials (e.g. print paper, diskettes).
NOTE 2 For a system which is operated by a user, the combination of
functionality, reliability, usability and efficiency can be measured externally by
quality in use.
6.2.4.1 Time behavior
The capability of the software product to provide appropriate response and
processing times and throughput rates when performing its function, under stated
conditions.
NOTE Human resources are included as part of productivity (7.1.2).
6.2.4.2 Resource utilization
The capability of the software product to use appropriate amounts and types of
resources when the software performs its function under stated conditions.
6.2.4.3 Compliance
The capability of the software product to adhere to standards or conventions
relating to efficiency.
6.2.5 MAINTAINABILITY
The capability of the software product to be modified. Modifications may include
corrections, improvements or adaptation of the software to changes in
environment, and in requirements and functional specifications.
6.2.5.1 Analyzability
The capability of the software product to be diagnosed for deficiencies or causes of
failures in the software, or for the parts to be modified to be identified.
6.2.5.2 Changeability
The capability of the software product to enable a specified modification to be
implemented.
NOTE 1 Implementation includes coding, designing and documenting changes.
NOTE 2 If the software is to be modified by the end user, changeability may affect
operability.
6.2.5.3 Stability
The capability of the software product to avoid unexpected effects from
modifications of the software.
6.2.5.4 Testability
The capability of the software product to enable modified software to be validated.
6.2.5.5 Compliance
The capability of the software product to adhere to standards or conventions
relating to maintainability.
6.2.6 PORTABILITY
The capability of the software product to be transferred from one environment to
another.
NOTE The environment may include organizational, hardware or software
environment.
6.2.6.1 Adaptability
The capability of the software product to be adapted for different specified
environments without applying actions or means other than those provided for this
purpose for the software considered.
NOTE 1 Adaptability includes the scalability of internal capacity (e.g. screen
fields, tables, transaction volumes, report formats, etc.).
NOTE 2 If the software is to be adapted by the end user, adaptability corresponds
to suitability for individualisation as defined in ISO 9241-10, and may affect
operability.
6.2.6.2 Installability
The capability of the software product to be installed in a specified environment.
NOTE If the software is to be installed by an end user, installability can affect the
resulting suitability and operability.
6.2.6.3 Co-existence
The capability of the software product to co-exist with other independent software
in a common environment sharing common resources.
6.2.6.4 Replaceability
The capability of the software product to be used in place of another specified
software product for the same purpose in the same environment.
NOTE 1 For example, the replaceability of a new version of a software product is
important to the user when upgrading.
NOTE 2 Replaceability is used in place of compatibility in order to avoid possible
ambiguity with interoperability (see 6.1.3).
NOTE 3 Replaceability may include attributes of both installability and
adaptability. The concept has been introduced as a subcharacteristic of its own
because of its importance.
6.2.6.5 Compliance
The capability of the software product to adhere to standards or conventions
relating to portability.
6.3 QUALITY IN USE CHARACTERISTICS
The attributes of quality in use are categorized into four characteristics:
effectiveness, productivity, safety and satisfaction (Figure 4).
Figure 4 - Quality in use
Quality in use is the user’s view of quality. Achieving quality in use is dependent
on achieving the necessary external quality, which in turn is dependent on
achieving the necessary internal quality (Figure 3). Measures are normally
required at all three levels, as meeting criteria for internal measures is not usually
sufficient to ensure achievement of criteria for external measures, and meeting
criteria for external measures of subcharacteristics is not usually sufficient to
ensure achieving criteria for quality in use. Examples of quality in use metrics are
given in ISO/IEC 9126-4
6.3.1 Quality In Use
The capability of the software product to enable specified users to achieve
specified goals with effectiveness, productivity, safety and satisfaction in specified
contexts of use.
NOTE 1 Quality in use is the user's view of the quality of an environment
containing software, and is measured from the results of using the software in the
environment, rather than properties of the software itself.
NOTE 2 Examples of metrics for quality in use are given in ISO/IEC 9126-4.
NOTE 3 The definition of quality in use in ISO/IEC 14598-1 (which is reproduced
in Annex B) does not currently include the new characteristic of “safety”.
NOTE 4 Usability is defined in ISO 9241-11 in a similar way to the definition of
quality in use in this part of ISO/IEC 9126. Quality in use may be influenced by
any of the quality characteristics, and is thus broader than usability, which is
defined in this part of ISO/IEC 9126 in terms of understandability, learnability,
operability, attractiveness and compliance.
6.3.1.1 Effectiveness
The capability of the software product to enable users to achieve specified goals
with accuracy and completeness in a specified context of use.
6.3.1.2 Productivity
The capability of the software product to enable users to expend appropriate
amounts of resources in relation to the effectiveness achieved in a specified
context of use.
NOTE Relevant resources can include time, effort, materials or financial cost.
6.3.1.3 Safety
The capability of the software product to achieve acceptable levels of risk of harm
to people, software, equipment or the environment in a specified context of use.
NOTE Risks to safety are usually a result of deficiencies in the functionality,
reliability, usability or maintainability.
6.3.1.4 Satisfaction
The capability of the software product to satisfy users in a specified context of use.
NOTE Psychometrically-valid questionnaires can be used to obtain reliable
measures of satisfaction.
7. PROCESS FOLLOWED AT STQC
7.1. STUDY THE MANUAL
For testing of every kind, the SRS (Software Requirement Specification) and the
User Manual play a vital role; documentation testing in particular is done on
the basis of these documents.
7.1.1.SRS (Software Requirement Specification):
It describes the various functionalities to be present in the software and the
different terminologies used in it. The various constraints and validations
required by the user are mentioned in it, along with the formulas and methods
for the calculation of certain fields.
7.1.1.1.The parts of an SRS
Table of Contents
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, acronyms, and abbreviations
1.4 References
1.5 Overview
2. Overall description
2.1 Product perspective
2.2 Product functions
2.3 User characteristics
2.4 Constraints
2.5 Assumptions and dependencies
3. Specific requirements (See 5.3.1 through 5.3.8 for explanations of possible
specific requirements. See also Annex A for several different ways of organizing
this section of the SRS.)
Appendixes
Index
7.2.1.2.1. Introduction (Section 1 of the SRS)
The introduction of the SRS should provide an overview of the entire SRS. It
should contain the following subsections:
a) Purpose;
b) Scope;
c) Definitions, acronyms, and abbreviations;
d) References;
e) Overview.
7.2.1.2.1.1. Purpose (1.1 of the SRS)
This subsection should
a) Delineate the purpose of the SRS;
b) Specify the intended audience for the SRS.
7.2.1.2.1.2. Scope (1.2 of the SRS)
This subsection should
a) Identify the software product(s) to be produced by name.
b) Explain what the software product(s) will, and, if necessary, will not do.
c) Describe the application of the software being specified, including relevant
benefits, objectives, and goals.
d) Be consistent with similar statements in higher-level specifications (e.g., the
system requirements specification), if they exist.
7.2.1.2.1.3. Definitions, acronyms, and abbreviations (1.3 of the SRS)
This subsection should provide the definitions of all terms, acronyms, and
abbreviations required to properly interpret the SRS. This information may be
provided by reference to one or more appendixes in the SRS or by reference to
other documents.
7.2.1.2.1.4. References (1.4 of the SRS)
This subsection should
a) Provide a complete list of all documents referenced elsewhere in the SRS;
b) Identify each document by title, report number (if applicable), date, and
publishing organization;
c) Specify the sources from which the references can be obtained.
This information may be provided by reference to an appendix or to another
document.
7.2.1.2.1.5. Overview (1.5 of the SRS)
This subsection should
a) Describe what the rest of the SRS contains;
b) Explain how the SRS is organized.
7.2.1.2.2. Overall description (Section 2 of the SRS)
This section describes the general factors that affect the product and its
requirements. It does not state specific requirements. Instead, it provides a
background for those requirements, which are defined in detail in Section 3 of the
SRS, and makes them easier to understand.
This section usually consists of six subsections, as follows:
a) Product perspective;
b) Product functions;
c) User characteristics;
d) Constraints;
e) Assumptions and dependencies;
f) Apportioning of requirements.
7.2.1.2.2.1. Product perspective (2.1 of the SRS)
This subsection of the SRS should put the product into perspective with other
related products. If the product is independent and totally self-contained, it should
be so stated here. If the SRS defines a product that is a component of a larger
system, as frequently occurs, then this subsection should relate the requirements of
that larger system to functionality of the software and should identify interfaces
between that system and the software. The major components are as follows:
7.2.1.2.2.1.1. System interfaces
This should list each system interface and identify the functionality of the software
to accomplish the system requirement and the interface description to match the
system.
7.2.1.2.2.1.2. User interfaces
This should specify the following:
a) The logical characteristics of each interface between the software product and
its users.
This includes those configuration characteristics (e.g., required screen formats,
content of any reports or menus) necessary to accomplish the software
requirements.
b) All the aspects of optimizing the interface with the person who must use the
system.
This may simply comprise a list of do's and don'ts on how the system will appear
to the user.
7.2.1.2.2.1.3. Hardware interfaces
This should specify the logical characteristics of each interface between the
software product and the hardware components of the system. It also covers such
matters as what devices are to be supported, how they are to be supported, and
protocols.
7.2.1.2.2.1.4. Software interfaces
This should specify the use of other required software products, and interfaces
with other application systems. For each required software product, the following
should be provided:
- Name;
- Mnemonic;
- Specification number;
- Version number;
- Source;
- Discussion of the purpose of the interfacing software as related to this software
product.
- Definition of the interface in terms of message content and format. It is not
necessary to detail any well-documented interface, but a reference to the document
defining the interface is required.
7.2.1.2.2.1.5. Communications interfaces
This should specify the various interfaces to communications such as local
network protocols, etc.
7.2.1.2.2.1.6. Memory constraints
This should specify any applicable characteristics and limits on primary and
secondary memory.
7.2.1.2.2.1.7. Operations
This should specify the normal and special operations required by the user such as
a) The various modes of operations in the user organization;
b) Periods of interactive operations and periods of unattended operations;
c) Data processing support functions;
d) Backup and recovery operations.
7.2.1.2.2.1.8. Site adaptation requirements
This should
a) Define the requirements for any data or initialization sequences that are specific
to a given site;
b) Specify the site or mission-related features that should be modified to adapt the
software to a particular installation.
7.2.1.2.2.2. Product functions (2.2 of the SRS)
This subsection of the SRS should provide a summary of the major functions that
the software will perform.
7.2.1.2.2.3. User characteristics (2.3 of the SRS)
This subsection of the SRS should describe those general characteristics of the
intended users of the product including educational level, experience, and
technical expertise. It should not be used to state specific requirements, but rather
should provide the reasons why certain specific requirements are later specified in
Section 3 of the SRS.
7.2.1.2.2.4. Constraints (2.4 of the SRS)
This subsection of the SRS should provide a general description of any other items
that will limit the developer’s options. These include
a) Regulatory policies;
b) Hardware limitations;
c) Interfaces to other applications;
d) Parallel operation;
e) Audit functions;
f) Control functions;
g) Higher-order language requirements;
h) Signal handshake protocols;
i) Reliability requirements;
j) Criticality of the application;
k) Safety and security considerations.
7.2.1.2.2.5 Assumptions and dependencies (2.5 of the SRS)
This subsection of the SRS should list each of the factors that affect the
requirements stated in the SRS. These factors are not design constraints on the
software but are, rather, any changes to them that can affect the requirements in
the SRS.
7.2.1.2.2.6. Apportioning of requirements (2.6 of the SRS)
This subsection of the SRS should identify requirements that may be delayed until
future versions of the system.
7.2.1.2.3. Specific requirements (Section 3 of the SRS)
This section of the SRS should contain all of the software requirements to a level
of detail sufficient to enable designers to design a system to satisfy those
requirements, and testers to test that the system satisfies those requirements. These
requirements should include at a minimum a description of every input (stimulus)
into the system, every output (response) from the system, and all functions
performed by the system in response to an input or in support of an output. As this
is often the largest and most important part of the SRS, the following principles
apply:
a) Specific requirements should be stated in conformance with all the
characteristics
b) Specific requirements should be cross-referenced to earlier documents that
relate.
c) All requirements should be uniquely identifiable.
d) Careful attention should be given to organizing the requirements to maximize
readability.
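Principle c), unique identifiability, together with the cross-referencing in principle b), lends itself to a mechanical check. The sketch below flags duplicate requirement IDs and cross-references to non-existent requirements; the "SRS-nnn" identifier scheme is invented for illustration:

```python
# Check two of the principles above on a list of requirement records:
# every requirement ID must be unique, and every cross-reference must
# point at an existing ID. The "SRS-nnn" ID scheme is hypothetical.

def check_requirements(requirements):
    """requirements: list of (req_id, list_of_referenced_ids) pairs.
    Returns (duplicate_ids, dangling_references)."""
    seen, duplicates = set(), set()
    for req_id, _ in requirements:
        if req_id in seen:
            duplicates.add(req_id)
        seen.add(req_id)
    dangling = [(req_id, ref)
                for req_id, refs in requirements
                for ref in refs
                if ref not in seen]
    return duplicates, dangling

if __name__ == "__main__":
    reqs = [("SRS-001", []),
            ("SRS-002", ["SRS-001"]),
            ("SRS-002", ["SRS-999"])]  # duplicate ID, dangling reference
    print(check_requirements(reqs))
    # ({'SRS-002'}, [('SRS-002', 'SRS-999')])
```

A check like this is cheap to run on every revision of the SRS and catches identification errors before designers and testers depend on them.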
7.2.1.2.3.1. External interfaces
This should be a detailed description of all inputs into and outputs from the
software system.
It should include both content and format as follows:
a) Name of item;
b) Description of purpose;
c) Source of input or destination of output;
d) Valid range, accuracy, and/or tolerance;
e) Units of measure;
f) Timing;
g) Relationships to other inputs/outputs;
h) Screen formats/organization;
i) Window formats/organization;
j) Data formats;
k) Command formats;
l) End messages.
7.2.1.2.3.2. Functions
Functional requirements should define the fundamental actions that must take
place in the software in accepting and processing the inputs and in processing and
generating the outputs.
These include
a) Validity checks on the inputs
b) Exact sequence of operations
c) Responses to abnormal situations, including
1) Overflow
2) Communication facilities
3) Error handling and recovery
d) Effect of parameters
e) Relationship of outputs to inputs, including
1) Input/output sequences
2) Formulas for input to output conversion
7.2.1.2.3.3 Performance requirements
This subsection should specify both the static and the dynamic numerical
requirements placed on the software or on human interaction with the software as a
whole. Static numerical requirements may include the following:
a) The number of terminals to be supported;
b) The number of simultaneous users to be supported;
c) Amount and type of information to be handled.
7.2.1.2.3.4. Logical database requirements
This should specify the logical requirements for any information that is to be
placed into a database. This may include the following:
a) Types of information used by various functions;
b) Frequency of use;
c) Accessing capabilities;
d) Data entities and their relationships;
e) Integrity constraints;
f) Data retention requirements.
7.2.1.2.3.5 Design constraints
This should specify design constraints that can be imposed by other standards,
hardware limitations, etc.
7.2.1.2.3.5.1. Standards compliance
This subsection should specify the requirements derived from existing standards or
regulations. They may include the following:
a) Report format;
b) Data naming;
c) Accounting procedures;
d) Audit tracing.
7.2.1.2.3.6 Software system attributes
There are a number of attributes of software that can serve as requirements. It is
important that required attributes be specified so that their achievement can be
objectively verified.
7.2.1.2.3.6.1 Reliability
This should specify the factors required to establish the required reliability of the
software system at time of delivery.
7.2.1.2.3.6.2 Availability
This should specify the factors required to guarantee a defined availability level
for the entire system such as checkpoint, recovery, and restart.
7.2.1.2.3.6.3 Security
This should specify the factors that protect the software from accidental or
malicious access, use, modification, destruction, or disclosure. Specific
requirements in this area could include the need to
a) Utilize certain cryptographical techniques;
b) Keep specific log or history data sets;
c) Assign certain functions to different modules;
d) Restrict communications between some areas of the program;
e) Check data integrity for critical variables.
7.2.1.2.3.6.4 Maintainability
This should specify attributes of software that relate to the ease of maintenance of
the software itself. There may be some requirement for certain modularity,
interfaces, complexity, etc. Requirements should not be placed here just because
they are thought to be good design practices.
7.2.1.2.3.6.5 Portability
This should specify attributes of software that relate to the ease of porting the
software to other host machines and/or operating systems. This may include the
following:
a) Percentage of components with host-dependent code;
b) Percentage of code that is host dependent;
c) Use of a proven portable language;
d) Use of a particular compiler or language subset;
e) Use of a particular operating system.
7.2.1.2.3.7 Organizing the specific requirements
For anything but trivial systems the detailed requirements tend to be extensive. For
this reason, it is recommended that careful consideration be given to organizing
these in a manner optimal for understanding. There is no one optimal organization
for all systems.
7.2.1.2.4. Supporting information
The supporting information makes the SRS easier to use. It includes the following:
a) Table of contents;
b) Index;
c) Appendixes.
7.2.1.2.4.1 Table of contents and index
The table of contents and index are quite important and should follow general
compositional practices.
7.2.1.2.4.2 Appendixes
The appendixes are not always considered part of the actual SRS and are not
always necessary. They may include
a) Sample input/output formats, descriptions of cost analysis studies, or results of
user surveys;
b) Supporting or background information that can help the readers of the SRS;
c) A description of the problems to be solved by the software;
d) Special packaging instructions for the code and the media to meet security,
export, initial loading, or other requirements.
7.1.1.2. SRS evolution
The SRS may need to evolve as the development of the software product
progresses. Additional changes may ensue as deficiencies, shortcomings, and
inaccuracies are discovered in the SRS.
Two major considerations in this process are the following:
a) Requirements should be specified as completely and thoroughly as is known at
the time, even if evolutionary revisions can be foreseen as inevitable. The fact
that they are incomplete should be noted.
b) A formal change process should be initiated to identify, control, track, and
report projected changes. Approved changes in requirements should be
incorporated in the SRS in such a way as to
1) Provide an accurate and complete audit trail of changes;
2) Permit the review of current and superseded portions of the SRS.
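Points 1) and 2) above, an audit trail plus review of superseded portions, amount to keeping every approved version of each requirement. A minimal sketch; the class and record layout are hypothetical:

```python
# Minimal sketch of a change process that keeps an audit trail and
# lets superseded versions of a requirement be reviewed.
# All names and the record layout are hypothetical.

class RequirementHistory:
    def __init__(self):
        self._versions = {}  # req_id -> list of (change_note, text)

    def change(self, req_id, text, change_note):
        """Record a new approved version; earlier versions are kept."""
        self._versions.setdefault(req_id, []).append((change_note, text))

    def current(self, req_id):
        """The latest approved text of the requirement."""
        return self._versions[req_id][-1][1]

    def audit_trail(self, req_id):
        """All change notes, oldest first: the audit trail."""
        return [note for note, _ in self._versions[req_id]]

if __name__ == "__main__":
    h = RequirementHistory()
    h.change("SRS-007", "Login within 2 s", "initial version")
    h.change("SRS-007", "Login within 1 s", "CR-12: tightened limit")
    print(h.current("SRS-007"))      # Login within 1 s
    print(h.audit_trail("SRS-007"))  # ['initial version', 'CR-12: tightened limit']
```

In practice the same record-keeping is usually provided by a configuration management tool rather than written by hand.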
EXAMPLE: Software Requirements Specifications Document Review Sr. No.
Requirements Observation
A. Completeness PLEASE REFER CHECKLIST FOR COMPLETENESS AGAINST IEEE 830
1. Are the requirements in scope of the project?2. Are all the requirements fully defined (Input, Processing,
Output)?
3. Are inverse requirements explicitly stated?4. Does the requirements include all of the known customer or
system needs?5. Are requirements statements of customer’s need, not
solution?6. Are all cross-references to other requirements defined?7. Are the requirements sufficient (i.e., do the requirements
provide an adequate basis for design)?
72
8. ARE ALL THE DEFINED REQUIREMENTS USED IN SOFTWARE DESIGN?
9. IS THE IMPLEMENTATION PRIORITY OF EACH REQUIREMENT INCLUDED?
10. HAVE FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS BEEN CONSIDERED?
11. Are all performance requirements properly & adequately specified?
12. ARE THE EXTERNAL HARDWARE, SOFTWARE, AND COMMUNICATION INTERFACES DEFINED?
13. Are all security and safety considerations properly specified?
14. Are quality attribute goals explicitly documented & quantified, with acceptable trade-offs specified?Are the time-critical functions identified and their timing criteria specified?
15. ARE USER CHARACTERISTICS DEFINED?16. ARE CONSTRAINTS, ASSUMPTIONS AND DEPENDENCIES
DEFINED?17. ARE ALGORITHMS INTRINSIC TO THE FUNCTIONAL
REQUIREMENTS DEFINED?
18. Are data definition and database requirements defined?19. Does the set of requirements adequately address all
appropriate exception conditions?20. Is the expected behaviour documented for all anticipated
error conditions?21. Does the set of requirements adequately address boundary
conditions?22. Have customisation/internationalisation issues been
adequately addressed?
23. Is any necessary information missing from a requirement? If so, is it identified as TBD with an owner, and a timetable for closing it?
B CORRECTNESS
1. Is each requirement free from content and grammatical errors? -
2. Are the requirements correct? -
3. Are specified error messages unique and meaningful? -
4. Are all internal cross-references to other requirements correct? -

C CLARITY
1. Are requirements stand-alone, discrete and uniquely identifiable? -
2. Are the requirements clearly and appropriately prioritised? -
3. Are the requirements clear, precise, concise and unambiguous? -
4. Are the requirements stated in as simple or atomic a form as possible? -
5. Are the requirements written in the customer's language, using the customer's terminology? -
6. Are all requirements actually requirements, not design or implementation solutions? -
7. Is the goal, or measurable value, of the requirement clear? -
8. Writing style: Is the writing style clear? Do paragraphs express only connected ideas and no more? Are larger logical units broken by subheadings? Is the fog index too high for the audience? Does it talk down to the typical reader? Does it put you to sleep? Is there an abstract/summary? -
9. Examples: Are examples used where necessary? Are examples relevant where used? Do examples contribute to understanding? Are examples misleading? Are examples wrong? Are examples less effective than their potential? -
10. Diagrams/pictures: Are diagrams or pictures used where necessary? Are diagrams or pictures relevant where used? Do diagrams or pictures contribute to understanding? Are diagrams or pictures misleading? Are diagrams or pictures wrong? Are diagrams or pictures less effective than their potential? Do diagrams or pictures contain an appropriate amount of information? -
11. Terminology/definitions/glossaries: Is terminology consistent throughout all documents? Is terminology conforming to standards? Is there too much technical terminology? Is there a glossary, if appropriate? Is the glossary complete? Are definitions correct? Are definitions clear? Yes; Yes; -; Not available; -; Yes; Yes
12. Table of contents: Is there a table of contents, if appropriate? Is the table of contents well placed? Is the table of contents correct? Not available
13. Indexing/paging: Is there an index, if appropriate? Is the index well placed? Is the index correct? Are page references accurate? Are the entries under the right titles? Are there alternate titles that might be accessed using different terminology? Are terms broken down adequately, or are there too many page references under single terms, indicating that more subcategories are needed? Not available
Documentation organization: Does the organization of the documents themselves contribute to the ease of finding information? Is page numbering sensible? Is page numbering complete? Organization of document is as per standard.
14. Related references: Is there a bibliography of related publications, which may contain further information? Are the references complete enough to locate the publications? Are there annotations to help the reader choose the appropriate document?

D CONSISTENCY
1. Are standard representations used? -
2. Are the requirements consistent (i.e., no internal contradictions)? -
3. Do any requirements conflict with or duplicate other requirements? -
4. Are all requirements written at a consistent and appropriate level of detail? -
5. Is there external consistency with the system requirements? -

E TRACEABILITY
1. Are all requirements uniquely identified? -
2. Is each software functional requirement traced to higher-level requirements, e.g., system requirements or software requirements? -
3. Are requirements traceable to the system requirements and system design? -

F DOCUMENT CONTROL
1. Is the documentation adhering to the specified standards?
2. Does the document contain a title page?
3. Is the document assigned a unique ID?
4. Is the document assigned a version number and date?
5. Does the document contain a table of contents?
6. Does the document have a distribution list?
7. Is the document reviewed and approved by the defined authorities?
8. Is the document's change history maintained?
9. Does the document follow a standard template such as IEEE/ISO?
10. Does the document contain a list of references and appendices (if necessary)?
11. Is a document control mechanism defined and followed?

GENERAL ISSUES
1. Are the requirements feasible and realistic (i.e., does a solution to the set of requirements exist)?
2. Can the requirements be implemented within known constraints? -
3. Are the requirements verifiable by testing, demonstration, review, or analysis? -
4. Are the requirements as modifiable as possible? -
7.1.2 User Manual:
The user manual provided by the developer describes the various functionalities present in the software. It contains information about general use of the software, its installation, procedures, concept of operations, etc., together with illustrations of the software in use.

The task was to study the documents [User Manual and SRS], extract the basic requirements, and test whether the software meets those basic requirements.
According to the IEEE standard for software user documentation (IEEE Std 1063):
1. Overview
This clause presents the scope, purpose, organization, and candidate uses of this
standard.
1.1 Scope
This standard provides minimum requirements for the structure, information
content, and format of user documentation, including both printed and electronic
documents used in the work environment by users of systems containing software.
This standard is limited to the software documentation product and does not
include the processes of developing or managing software user documentation; it
applies to printed user manuals, online help, and user reference documentation. It
does not apply to specialized course materials intended primarily for use in formal
training programs.
1.2 Purpose
It addresses the interests of software acquirers, producers, and users in standards
for consistent, complete, accurate, and usable documentation.
1.3 Structure of software user documentation
The structure of software user documentation, both printed and electronic, includes
how it is organized into segments and in what order the segments are presented. It
can be a single document or a document set of printed and/or electronic
documents. The structure of document should aid the user in locating and
understanding the information content. When a document set will address
audiences with widely differing needs, at least one of the following structures shall
be used:
— Separate sections devoted to the needs of specific audiences.
— Separate documents or document sets for each specific audience.
Components of software user documentation
Component | Required?
Identification data (package label/title page) | Yes
Table of contents | Yes, in documents of more than eight pages after the identification data
List of illustrations | Optional
Introduction | Yes
Information for use of the documentation | Yes
Concept of operations | Yes
Procedures | Yes (instructional mode)
Information on software commands | Yes (reference mode)
Error messages and problem resolution | Yes
Glossary | Yes, if documentation contains unfamiliar terms
Related information sources | Optional
Navigational features | Yes
Index | Yes, in documents of more than 40 pages
Search capability | Yes, in electronic documents
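As an illustration of how the component table above could be applied, the sketch below is a hypothetical checker (not part of the standard, and its rule set covers only some of the components) that reports which required components a manual is missing:

```python
# Illustrative checker for the component table above. Component names and
# page thresholds are taken from that table; the function itself is only
# an illustrative aid.

ALWAYS_REQUIRED = [
    "identification data",
    "introduction",
    "information for use of the documentation",
    "concept of operations",
    "error messages and problem resolution",
    "navigational features",
]

def missing_components(present, page_count, electronic=False):
    """Return the required components that are absent from `present`."""
    required = list(ALWAYS_REQUIRED)
    if page_count > 8:      # table of contents: required over eight pages
        required.append("table of contents")
    if page_count > 40:     # index: required over forty pages
        required.append("index")
    if electronic:          # search capability: electronic documents only
        required.append("search capability")
    return [c for c in required if c not in present]
```

A reviewer could feed in the set of components actually found in a manual and its page count, and receive the list of gaps to raise in the review report.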
1.4 Overall structure of documentation
A document set may consist of one or more documents, and each document of a
document set may be one or more volumes. Documents shall be structured into
units with unique content. Well-structured documentation makes information
available where it is needed without redundancy.
Task-oriented instructional mode documentation shall include procedures
structured according to the user’s tasks. Related tasks should be grouped in the
same chapter or topic. Chapters and topics should be organized to facilitate
learning by presenting simpler, more common, or initial tasks before more
complex, less utilized, or subsequent tasks.
Reference mode documentation should be arranged to facilitate random access to
individual units of information.
The document review follows these criteria:
1 Completeness of information
Documentation shall provide complete instructional and reference information for
all critical software functions. Instructional mode documentation shall include
complete information to enable performance of selected tasks using the software
functions by the least experienced members of the audience. Reference mode
documentation shall include all instances of the selected elements being
documented. It shall include all user-entered and system-displayed commands and
error messages in that subset.
2 Accuracy of information
Documentation shall accurately reflect the functions and results of the applicable
software version. If the previous documentation version is no longer accurate, new
documentation shall be available with software updates or upgrades.
Documentation corrections and updates may be provided via a new manual, a
read-me file, or a downloaded file from a web site.
3 Content of identification data
Documentation shall contain unique identification data. The identification data
shall include the following:
a) Documentation title
b) Documentation version and date published
c) Software product and version
d) Issuing organization
4 Information for use of the documentation
The documentation shall include information on how it is to be used (e.g. help),
and an explanation of the notation used.
5 Concept of operations
Documentation shall explain the conceptual background for use of the software,
using such methods as a visual or verbal overview of the process or workflow, or
a general concept of operations. Explanations of the concept of operation should be
adapted to the expected familiarity of the users with any specialized terminology
for user tasks and software functions. Documentation shall relate each documented
function to the overall process or tasks. Conceptual information may be presented
in one section or immediately preceding each applicable procedure.
6 Information for general use of the software
Task-oriented instructional mode documentation shall include instructions for
routine activities that are applied to several functions:
— Software installation and de-installation, if performed by the user
— Orientation to use of the features of the graphical user interface
— Access, or log-on and sign-off the software
— Navigation through the software to access and to exit from functions
— Data operations (enter, save, read, print, update, and delete)
— Methods of canceling, interrupting, and restarting operations
These common procedures should be presented once to avoid redundancy when
they are used in more complex functions.
7 Information for procedures and tutorials
Instructional mode documentation provides directions for performing procedures.
Instructions shall include preliminary information, instructional steps, and
completion information. Preliminary information common to several procedures
may be grouped and presented once to avoid redundancy.
Preliminary information for instructions shall include the following:
— A brief overview of the purpose of the procedure and definitions or
explanations of necessary concepts not elsewhere included
— Identification of technical or administrative activities that must be done before
starting the task
— A list of materials the user will need to complete the task, which may include
data, documents, passwords, additional software, and identification of drivers,
interfaces, or protocols
— Relevant warnings, cautions, and notes that apply to the entire procedure
8 Information on software commands
Documentation shall explain the formats and procedures for user-entered software
commands, including required parameters, optional parameters, default options,
order of commands, and syntax.
9 Information on error messages and problem resolution
Documentation should address all known problems in using the software in
sufficient detail such that the users can either recover from the problems
themselves or clearly report the problem to technical support personnel.
10 Information on terminology
Documentation shall include a glossary, if terms or their specific uses in the
software user interface or documentation are likely to be unfamiliar to the user.
The glossary shall include an alphabetical list of terms and definitions.
Documentation using abbreviations and acronyms unfamiliar to the user shall
include a list with definitions, which may be integrated with the glossary. Terms
included in the glossary should also be defined on their first appearance in printed
documentation. Electronic documentation may include links from terms to
glossaries or explanations in secondary windows.
11 Information on related information sources
Documentation may contain information on accessing related information sources,
such as a bibliography, list of references, or links to related web pages. Related
information sources and references may include the following:
— Requirement specifications, design specifications, and applicable standards for the
software and the documentation
— Test plans and procedures for the software and the documentation
— Configuration management policies and procedures for the software and the
documentation
— Documentation for the hardware and software environment
— Explanations of the concept of operations or scientific, technical, or business
processes embodied in the software.
The documentation should indicate whether the references contain mandatory
requirements or informative background material.
1.5 Format of software user documentation
The documentation format includes the selected electronic or print media and
presentation conventions for stylistic, typographic, and graphic elements. This
clause specifies formats for various documentation components.
The format for reference mode documentation should be accessible and usable in
the expected users’ work environment. The size of printed and bound reference
documentation, or required equipment, electronic storage space, operating system
software, and browsers to access online reference documentation, shall be
consistent with the capacity of the expected users’ work environment.
The documentation should be provided in media and formats that allow its use by
those with vision, hearing, or other physical limitations, if they are able to use
the software and to perform the tasks supported by the software.
1.5.1 Consistency of terminology and formats
Documentation shall use consistent terminology throughout a document set for
elements of the user interface, data elements, field names, functions, pages, topics,
and processes. Formatting conventions for highlighting information of special
importance, such as warnings, cautions and notes, shall be applied consistently
throughout the document set. The documentation may use special formatting to
identify new or changed content.
1.5.2 Use of printed or electronic formats
Whether or not electronic documentation is provided, the following documentation
shall be presented in printed form:
— Hardware, operating system, and browser software requirements for the
software and the documentation
— Installation information, including recovery and troubleshooting information
for installation instructions
— Instructions for starting the software
— Instructions for access to any electronic documentation
— Information for contacting technical support or performing recovery actions
available to users
The documents [User Manual and SRS] were studied, the basic requirements
extracted, and the software tested against them. For functional testing a test
plan is made, and testing is performed according to it. Extensive test sheets
were made covering the various cases expected during use of the software. The
software was tested using these test sheets and each result was recorded on the
sheet.
7.2 PREPARE TEST SCENARIOS
7.2.1. What is a test scenario?
A test scenario is a document that shows the flow of the software; in other
words, it helps to test the flow of the software.
7.2.2. How to make a test scenario
While preparing the scenario we include
a) Step description.
b) Action performed.
c) Expected output.
d) Actual output.
e) Pass/Fail.
a) Step description
All the steps that the user performs are included here, for example: double-click
the LRIS application icon on the desktop to start the application; enter the user
id and password; etc.
b) Action performed
The different actions that the user takes to move through, maintain the flow of,
or run the software are included here, for example: Save; Double click; etc.
c) Expected output
This includes the output that the user expects after he/she has performed an
action, for example: an intermediate screen shall appear showing all the details.
d) Actual output
This includes the description of what actually happens in the software when the
action is performed.
e) Pass/Fail
When the actual output matches the expected output, the step is marked as Pass.
When the actual output and the expected output differ, the step is marked as Fail.
Failed steps are reported as defects.
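The five scenario fields above map naturally onto a small record; the sketch below (illustrative only, not an STQC tool, and the LRIS step shown is invented) derives the Pass/Fail status by comparing expected and actual output:

```python
from dataclasses import dataclass

@dataclass
class ScenarioStep:
    description: str   # a) step description
    action: str        # b) action performed
    expected: str      # c) expected output
    actual: str        # d) actual output

    @property
    def status(self) -> str:   # e) Pass/Fail
        return "Pass" if self.actual == self.expected else "Fail"

# A hypothetical step from an LRIS scenario.
step = ScenarioStep(
    description="Double click the LRIS icon on the desktop to start the application",
    action="Double click",
    expected="Login screen appears",
    actual="Login screen appears",
)
```

A failing step simply carries a differing actual output, and its status reads "Fail"; those steps are what get carried into the defect report.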
7.2.3. Template- Test Scenario
7.3. PREPARE TEST CASE
7.3.1. What is a test case?
A test case is defined by identifying the different input combinations and then
deciding whether to define a test using a particular combination. Once we decide
which outline paths become test cases, the next step is to define an
unambiguous input state and its associated expected result. A complete test set
contains one or more test cases for every leaf in the outline.
7.3.2 How to make test case?
While preparing the test case we include
a) Input
b) Operation
c) Expected output.
d) Actual output.
e) Pass/Fail.
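The test-case fields above can be exercised mechanically. In the sketch below, each case pairs an input and an operation with an expected output; the khata-number validation rule is invented here purely for illustration:

```python
def run_test_case(operation, test_input, expected):
    """Execute one test case: apply the operation to the input and
    compare the actual output with the expected output."""
    actual = operation(test_input)
    return {
        "input": test_input,
        "expected": expected,
        "actual": actual,
        "status": "Pass" if actual == expected else "Fail",
    }

# Hypothetical field rule under test: a khata number must be a
# positive integer (this rule is invented for illustration).
def validate_khata_no(value):
    return "accepted" if value.isdigit() and int(value) > 0 else "rejected"

results = [
    run_test_case(validate_khata_no, "1", "accepted"),    # valid value
    run_test_case(validate_khata_no, "abc", "rejected"),  # invalid value
    run_test_case(validate_khata_no, "", "rejected"),     # blank value
]
```

Note how the three cases cover the valid, invalid and blank values against which fields are tested.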
7.3.3. Template – Test Case
Test Scenario for <Application Name>
Pre Condition:
Navigation:
Step Id | Steps Description | Detailed Steps | Expected Results | Actual Result | Status
7.4 PREPARE DEFECT REPORT
7.4.1. What is a defect report?
Errors or defects found during testing are reported in a well-formatted document
known as a defect report. This document contains all the details, such as the
process in which the particular defect was seen, the action being performed when
the error was observed, and the test case id; the severity of the defect is also
marked. Severities are marked as per the following rules:
a) Urgent
b) High
c) Medium
d) Low
e) None
a) Urgent: The failure causes a system crash or unrecoverable data loss, or
jeopardizes personnel.
b) High: The failure causes impairment of critical system functions and no
workaround solution exists.
c) Medium: The failure causes impairment of critical system functions, though a
workaround solution does exist.
d) Low: The failure causes inconvenience or annoyance.
e) None: None of the above, or the anomaly concerns an enhancement rather than
a failure.
The total number of defects is calculated and then categorized accordingly.
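Tallying defects per severity, as in the summary table, can be sketched as follows (the defect entries here are made up for illustration):

```python
from collections import Counter

SEVERITIES = ["Urgent", "High", "Medium", "Low", "None"]

def severity_summary(defects):
    """Tally defects by severity, keeping every severity level in the
    output even when its count is zero, as in the summary table."""
    counts = Counter(d["severity"] for d in defects)
    return {s: counts.get(s, 0) for s in SEVERITIES}

defects = [  # hypothetical defect-log entries
    {"id": "D-01", "severity": "High"},
    {"id": "D-02", "severity": "Medium"},
    {"id": "D-03", "severity": "Medium"},
    {"id": "D-04", "severity": "Low"},
]
```

The resulting dictionary fills the "Number of Defects" column directly, one row per severity.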
Test Scenario for <Application Name>
Pre Condition:
Navigation:
TC Field Field Operatio Expected Actual Rem
7.4.2. Why do we need defect report?
In order to run the software testing process smoothly defects encountered by the
tester should be formally reported back to the developer for further modifications.
The developer can workout on the defects according to the severities mentioned in
the defect report.
7.4.3. Template – Defect Report
7.4.4. Software Testing-Defect Profile
1. Defect: nonconformance to requirements or to the functional/program specification.
2. Bug: a fault in a program which causes the program to perform in an
unintended or unanticipated manner.
3. The bug report comes into the picture once actual testing starts.
4. If a test case's actual and expected results mismatch, we report a bug
against that test case.
5. Each bug has a life cycle. When a tester first identifies a bug, he gives it
the status "New".
6. Once the developer team lead goes through the bug report, he assigns each bug
to the concerned developer and changes the bug status to "Assigned". The
developer then starts working on it, changing the bug status to "Open"; once it
is fixed he changes the status to "Fixed". In the next cycle we check all the
fixed bugs: if they are really fixed, the concerned tester changes the status of
the bug to "Closed", else to "Reviewed-not-ok". Finally, "Deferred" marks those
bugs which are going to be fixed in the next iteration.
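The status flow just described can be read as a small state machine. The sketch below is one interpretation of it (the report does not pin down every transition; for instance, where "Deferred" branches from is an assumption here):

```python
# Life cycle from the text: New -> Assigned -> Open -> Fixed, then after
# retest either Closed or Reviewed-not-ok (back to rework); Deferred is
# assumed here to branch from Open.
ALLOWED = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Deferred"},
    "Fixed": {"Closed", "Reviewed-not-ok"},
    "Reviewed-not-ok": {"Open"},
}

def advance(current, new_status):
    """Move a bug to `new_status`, rejecting transitions the life cycle
    does not allow."""
    if new_status not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot go from {current} to {new_status}")
    return new_status
```

Encoding the transitions this way makes illegal jumps (say, "New" straight to "Closed") fail loudly instead of silently corrupting the bug report.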
Sr. No. | Defect Severity | Number of Defects
1. | Urgent |
2. | High |
3. | Medium |
4. | Low |
5. | None |
S.N | Defect | Location | Severity
See the following sample template used for Bug Reporting.
7. Here also the name of the bug report file follows a naming convention:
Project Name, Bug Report, Version No, Release Date.
8. These placeholders are replaced with the actual project name, version number
and release date.
For e.g., Bugzilla Bug Report 1.2.0.3 01_12_04
9. After seeing the name of the file, anybody can easily recognize that this is
the bug report of such-and-such project and such-and-such version, released on
the particular date.
10. It reduces the complexity of opening a file to find which project it
belongs to.
11. It maintains the details of project ID, project name, release version number
and date at the top of the sheet.
12. For each bug it maintains.
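The naming convention above can be captured in a one-line helper (a hypothetical utility; the report itself does not mandate any tooling):

```python
def bug_report_filename(project, version, release_date):
    """Build a bug-report file name following the convention above:
    <Project Name> Bug Report <Version No> <Release Date>."""
    return f"{project} Bug Report {version} {release_date}"
```

Generating the name rather than typing it by hand keeps every report in the project consistent with the convention.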
7.5 PREPARE TEST REPORT
7.5.1 What is a test report?
A document describing the conduct and results of the testing carried out for a
system or system component.
7.5.2. Template – Test Report
7.6 REGRESSION TESTING:
Regression testing is any type of software testing which seeks to uncover
regression bugs. Regression bugs occur whenever software functionality that
previously worked as desired stops working, or no longer works in the way that
was previously planned. Typically, regression bugs occur as an unintended
consequence of program changes.
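A regression check is essentially re-running a stored suite that used to pass. In the sketch below (the rounding helper and its test cases are invented for illustration), a program change re-breaks behaviour that previously worked, and the re-run suite catches it:

```python
def run_suite(cases, implementation):
    """Run every stored test case against an implementation and return
    the names of the cases that fail."""
    return [name for name, (arg, expected) in cases.items()
            if implementation(arg) != expected]

# Hypothetical regression suite for a rupee-rounding helper.
cases = {
    "whole rupees": (100.0, 100),
    "rounds down":  (100.4, 100),
    "rounds up":    (100.6, 101),
}

def original(x):
    """Previously shipped version: rounds half up."""
    return int(x + 0.5)

def changed(x):
    """A later change that truncates instead of rounding."""
    return int(x)
```

The original implementation passes the whole suite; the changed one fails "rounds up", which is exactly the kind of unintended consequence regression testing exists to expose.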
S.No | Function | Document Reference | Specified Condition | Observation | Result
Testing Life Cycle followed in STQC
7.7. PERFORMANCE TESTING:
Approaches to Performance Testing
Performance testing is testing that is performed, from one perspective, to
determine how fast some aspect of a system performs under a particular workload.
It can also serve to validate and verify other quality attributes of the system, such
as scalability, reliability and resource usage. Performance testing is a subset of
Performance engineering, an emerging computer science practice which strives to
build performance into the design and architecture of a system, prior to the
onset of actual coding effort.

[Figure: testing life cycle. User Manual, SRS and Application feed into
Preparing Test Plan; then Preparing Test Scenarios and Preparing Test Cases;
Verification (using SRS + UM); Running the Application; Executing the Test
Scenarios and Executing the Test Cases; Defect Report; Fixing the Bug; Client
Deliverable.]
Note: In regression testing the whole process is repeated again.
Performance testing can serve different purposes. It can demonstrate that the
system meets performance criteria. It can compare two systems to find which
performs better. Or it can measure what parts of the system or workload cause the
system to perform badly. In the diagnostic case, software engineers use tools such
as profilers to measure what parts of a device or software contribute most to the
poor performance or to establish throughput levels (and thresholds) for maintained
acceptable response time. It is critical to the cost performance of a new system,
that performance test efforts begin at the inception of the development project and
extend through to deployment. The later a performance defect is detected, the
higher the cost of remediation. This is true in the case of functional testing, but
even more so with performance testing, due to the end-to-end nature of its scope.
Performance testing can be combined with stress testing, in order to see what
happens when the acceptable load is exceeded: does the system crash? How long
does it take to recover if a large load is reduced? Does it fail in a way that
causes collateral damage?
Tasks to perform such a test would include:
— Decide whether to use internal or external resources to perform the tests,
depending on in-house expertise (or lack thereof)
— Gather or elicit performance requirements (specifications) from users and/or
business analysts
— Develop a high-level plan (or project charter), including requirements,
resources, timelines and milestones
— Develop a detailed performance test plan (including detailed scenarios and
test cases, workloads, environment info, etc.)
— Choose test tool(s)
— Specify the test data needed and charter the effort (often overlooked, but
often the death of a valid performance test)
— Develop proof-of-concept scripts for each application/component under test,
using the chosen test tools and strategies
— Develop a detailed performance test project plan, including all dependencies
and associated timelines
— Install and configure injectors/controller
— Configure the test environment (ideally identical hardware to the production
platform), router configuration, a quiet network (we don't want results upset
by other users), deployment of server instrumentation, database test sets
developed, etc.
— Execute tests, probably repeatedly (iteratively), in order to see whether any
unaccounted-for factor might affect the results
— Analyze the results: either pass/fail, or investigation of the critical path
and recommendation of corrective action
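The measurement at the heart of these tasks can be sketched with the standard library alone; the workload below is a stand-in, and in a real test the `operation` would drive the actual system under a representative load:

```python
import time

def measure_response_times(operation, requests):
    """Time each request and return per-request latencies in seconds."""
    latencies = []
    for req in requests:
        start = time.perf_counter()
        operation(req)
        latencies.append(time.perf_counter() - start)
    return latencies

def percentile(values, p):
    """Nearest-rank percentile, e.g. p=0.95 for the 95th percentile."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, int(p * len(ordered)))
    return ordered[index]

# Stand-in workload: 50 requests against a trivial computation. A pass
# criterion might be, say, a 95th percentile under some agreed threshold.
lat = measure_response_times(lambda r: sum(range(1000)), range(50))
p95 = percentile(lat, 0.95)
```

Reporting a percentile rather than an average is the usual choice here, since a few slow outliers are exactly what response-time criteria are meant to catch.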
E-governance projects:
LRIS (LAND REGISTRATION INFORMATION SYSTEM)
o Documentation Testing
o Functional Testing
o Regression Testing
o Performance Testing
VATIS (VALUE ADDED TAX INFORMATION SYSTEM)
o Documentation Testing
o Functional Testing
FARMER PORTAL
o Documentation Testing
o Functional Testing
o Content Testing
o Link Testing
o Performance Testing
o Usability Testing
o Security Testing
8. APPLICATION SOFTWARE ASSIGNED FOR TESTING –
8.1 LRIS (LAND REGISTRATION INFORMATION SYSTEM)
8.1.1. Introduction of Project
This project is one of the modules of an e-Governance Program. In this project,
land records are computerized, which makes the manual process easier and less
complex. This application software is to be developed for the State Government
and implemented at the taluka e-Dhara Kendra (e-DK), where maintenance of land
records will take place.

It is the software component of the e-Dhara system which provides citizen
services related to land records from the taluka centre, i.e. the e-Dhara
Kendra (e-DK). This process works under the guidelines laid down for
e-governance.
8.1.2. Product function:
ADMIN Module
- In this module, new records are added for the mutation master, crop master,
tree master, tenure master, land use master, etc.
- In addition, some general activities like uploading data to the central
server and updating the database structure are also included.
FRONT OFFICE Module
- The front office module covers selection of the village and the application
process, step by step.
MUTATION Module
- Village Selection, Mutation Alerts and Mutation Statistics are the initial options.
8.1.3.System Interface:
8.1.4. Objective of the Project:
The main objective of the project is to computerize land records, which makes
the manual process easier and less complex. LRIS is the software component of
the e-Dhara system which provides citizen services related to land records from
the taluka centre, i.e. the e-Dhara Kendra (e-DK). LRIS has three main modules:
ADMIN, FRONT OFFICE and MUTATION. Using these modules, any authorized user can
issue a copy of the ROR and register a mutation request. One can also get
taluka-wise, village-wise, and other category-wise statistics.
8.1.5. System Requirement
8.1.5.1.Software Interface
The application software is prepared in Visual Basic 6.0 and Microsoft SQL
Server 2000. Hardware at the back end is necessary for the large volume of data
generated. The complete system is user-friendly.
System software on Server
Windows NT/ 2000/ 2000pro or higher
SQL Server 7.0 / 2000 or higher
System software on Clients
Windows 98 or higher
LRIS Package (including libraries of Visual Basic 6.0,
Crystal Reports 7.0, GIST SDK 2.5 for Gujarati support)
8.1.5.2. Hardware Interface
The hardware required for this application would be as follows:
Product | Qty. Req.
A. SERVER: Pentium III with 256 KB L2 Cache (or higher configuration), 256 MB RAM, dual 40 GB hard disks (SCSI), CD drive, 5¼" floppy disk drive, tape drive, colour monitor, keyboard, mouse, serial, parallel and USB ports. | 1
B. CLIENTS: Pentium III with 256 KB L2 Cache, 128 MB RAM, 40 GB hard disk, CD drive, colour monitor, keyboard, mouse, serial, parallel and USB ports. | 2-4, depending upon the zone
C. OTHER HARDWARE ITEMS: Uninterrupted power supply (UPS), dot matrix printer (DMP) or laser printer, fingerprint reader (FPR), air conditioner (AC), local area network (LAN). | 1
8.1.6. Responsibility:
i. Designed and executed test scenarios and test cases.
ii. Mapped the software specifications and document specifications, prepared as
per the ISO/IEC standard, for a particular module.
iii. Reviewed the test scenarios and test cases designed by other fellow
trainees of my team against the document and software specifications for the
module, under the 1st and 2nd iterations.
iv. Prepared a reliability matrix for a module, i.e. the number of test cases
available, number of scenarios available, number of passed scenarios, number of
failed scenarios, etc.
v. Tested performance with tools such as Borland-Segue SilkPerformer,
HP-Mercury LoadRunner, and IBM Rational Test Studio for a particular module.
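A reliability matrix of the kind described in point (iv) boils down to a few counts over the scenario outcomes; a minimal sketch (the results shown are invented):

```python
def reliability_matrix(scenario_results, test_case_count):
    """Summarize scenario outcomes into the counts listed above."""
    passed = sum(1 for status in scenario_results if status == "Pass")
    return {
        "test cases available": test_case_count,
        "scenarios available": len(scenario_results),
        "passed scenarios": passed,
        "failed scenarios": len(scenario_results) - passed,
    }

# Hypothetical outcomes for one module's scenarios.
matrix = reliability_matrix(["Pass", "Fail", "Pass", "Pass"], test_case_count=40)
```

Computed this way, the matrix stays consistent by construction: passed plus failed always equals the number of scenarios available.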
8.1.7 Methodology:
8.1.7.1. Study the Documents:
The SRS (Software Requirement Specification) describes the various
functionality to be present in the software and the different terminologies
used in it. The various constraints and validations required by the user are
mentioned in it, along with the formulas and methods for the calculation of
certain fields.
Example: Review Report Check list of SRS:
The User Manual provided by the developer describes the various functionalities
present in the software. It contains information about general use of the
software, its installation, procedures, concept of operations, etc., together
with illustrations of the software in use.
Example: Review Report of User Manual:
The documents [User Manual and SRS] were studied, the basic requirements
extracted, and the software tested against them. For functional testing a test
plan is made, and testing is performed according to it. Extensive test sheets
were made covering the various cases expected during use of the software. The
software was tested using these test sheets and each result was recorded on the
sheet.
8.1.7.2. Functional Testing
i. Developing Scenarios: A test scenario is a document that captures the major
flow of the application; by executing the scenarios we test the overall flow
and major functions of the application. While preparing the scenario we include:
a) Step description.
b) Action performed.
c) Expected output.
d) Actual output.
e) Pass/Fail.
Test Scenario Example:
ii) Developing Test Cases: A test case is a validation test applied to fields,
which are tested against valid, invalid and blank values. It is a single test
applied to some software with the intention of finding errors.
While preparing the test case we include
a) Input
b) Operation
c) Expected output.
d) Actual output.
e) Pass/Fail
Executing Test Cases:
Expected Result != Actual Result [ Status = Fail]
Expected Result = Actual Result [Status = Pass]
Test Case Example:
iii) Developing Defect Report: Errors/defects are reported in a well-formatted
document known as a defect report, which is the summary sheet of the bugs found
in the software. Severities are marked as per the following rules:
i. Urgent
ii. High
iii. Medium
iv. Low
v. None
Defect Report Example:
iv) Developing Test Report: A document describing the conduct and results of
the testing carried out for a system or system component.
Example: 1.
Example: 2.
Example: 3. According to the application, Entry Status is mandatory while Entry No and Entry Date are not mandatory.
According to the document, “Entry date and number are mandatory fields depending on the mutation type”, but it is not defined for which type of mutation they are mandatory.
Example: 4. Front Office >> Reports >> List of Khata Numbers >> Starting Khata No. (enter: 1) >> click on OK button >> click on Close button >> run-time error.
8.2 VATIS (VALUE ADDED TAX INFORMATION SYSTEM)
8.2.1. Introduction:
This project is one of the modules of an e-Governance Program: the
computerization of the Commercial Tax Department (CTD) of the state government.

The nature of the work includes the derivation of functional scenarios from the
supplied RFP to perform user acceptance testing of functional and
non-functional parameters.

This is one of the biggest e-governance initiatives implemented by a state
government department. The organization assigned me two modules for testing.
8.2.1.1. Assessment and Return Module:
In this project the Assessment and Return module is one of the main modules,
covering the assessment and return-filing mechanism. Various processes were
identified as part of the gap analysis phase for each module; the resulting
document would form the basis for the development of the application.

The explanation of each process involves the following sections: ‘AS-IS
Business Process’, ‘AS-IS Workflow’, ‘AS-IS Implementation’, ‘MP Specific
Requirements’, ‘Impact Analysis and Design’ and ‘Proposed Workflow Changes’.
8.2.1.1.1 Return Module:
Every dealer who is registered with the MP CTD has to furnish returns at fixed
periods. The Returns module covers the return filing of CST, ET, Composition
and works contractors too.
Users Involved in Returns Module:
Concerning Clerk
Reader to CCT
ACTO
CTO
Addl. CTO
AC
DC
Addl. CCT
CCT
8.2.1.1.2. Assessment Module:
Assessment module deals with assessing the dealers. Assessment is done
yearly/quarterly/part of the year.
Users Involved in Assessment Module:
Commissioner
Addl.Commissioner
DC
AC
CTO
ACTO
8.2.2. Objectives of the Project
The objectives of the proposed VATIS system can be listed as follows:
Time-bound delivery of output
A citizen-centric approach to the system
Incorporation of all key departmental processes into the application software
Increased efficiency of working mechanisms
Maintenance of all VATIS-related records in the system, with easy retrieval
Elimination of data redundancy
8.2.3. Responsibility:
i. Designed and executed test scenarios and test cases.
ii. Mapped the software specification and the document specification prepared for a particular module against the relevant ISO/IEC standard.
iii. Reviewed the test scenarios and test cases designed by other fellow trainees of my team, and the document and software specifications for the module, under the 1st and 2nd iterations.
iv. Prepared a reliability matrix for a module, i.e. the number of test cases available, number of scenarios available, number of passed scenarios, number of failed scenarios, etc.
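The reliability matrix described in point iv can be computed mechanically from the scenario results. This is an illustrative sketch; the field names and the sample scenario results are assumptions, not the actual STQC template.

```python
from collections import Counter

def reliability_matrix(scenario_results):
    """Tally (scenario name, result) pairs into the counts reported in the
    reliability matrix: total, passed, failed and not-yet-executed scenarios."""
    counts = Counter(result for _, result in scenario_results)
    total = len(scenario_results)
    return {
        "scenarios_total": total,
        "scenarios_passed": counts["Pass"],
        "scenarios_failed": counts["Fail"],
        "scenarios_not_run": total - counts["Pass"] - counts["Fail"],
    }

# Hypothetical results for four scenarios of the Return/Assessment modules.
results = [("Return filing - CST", "Pass"),
           ("Return filing - ET", "Fail"),
           ("Assessment - yearly", "Pass"),
           ("Assessment - quarterly", "Not Run")]
matrix = reliability_matrix(results)
# matrix == {"scenarios_total": 4, "scenarios_passed": 2,
#            "scenarios_failed": 1, "scenarios_not_run": 1}
```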
8.2.4. Methodology:
8.2.4.1. Study the Documents:
The developer provides a Gap Analysis Document for the given module. Going through this document, I extracted the user requirements and then tested whether the software meets those basic requirements.
8.2.4.2. Functional Testing
i. Developing Scenarios: A test scenario is a document that captures the major flow of the application; by executing the scenarios we test the overall flow and the major functions of the application. While preparing a scenario we include:
a) Step description.
b) Action performed.
c) Expected output.
d) Actual output.
e) Pass/Fail.
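The five fields above can be represented as a single scenario step. The sketch below is illustrative, with hypothetical VATIS content; the Pass/Fail field is derived by comparing actual against expected output.

```python
# One step of a test scenario, using the five fields listed above.
# The content is hypothetical, for illustration only.
scenario_step = {
    "step_description": "Open the return-filing screen for a registered dealer",
    "action_performed": "Navigate: Returns >> File Return >> select dealer TIN",
    "expected_output": "Return form opens pre-filled with dealer details",
    "actual_output": "Return form opens pre-filled with dealer details",
}
# Pass/Fail is derived, not entered by hand.
scenario_step["pass_fail"] = (
    "Pass"
    if scenario_step["actual_output"] == scenario_step["expected_output"]
    else "Fail"
)
```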
Test Scenario Example:
ii. Developing Test Cases: While preparing a test case we include:
a) Input
b) Operation
c) Expected output.
d) Actual output.
e) Pass/Fail
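A minimal driver that records the five test-case fields might look like the sketch below. The operation under test, a hypothetical validator for the mandatory Entry Status field, is an assumption for illustration.

```python
def run_test_cases(operation, cases):
    """Execute each (input, expected output) pair against the operation under
    test and record the five fields listed above."""
    report = []
    for test_input, expected in cases:
        actual = operation(test_input)
        report.append({
            "input": test_input,
            "operation": operation.__name__,
            "expected_output": expected,
            "actual_output": actual,
            "pass_fail": "Pass" if actual == expected else "Fail",
        })
    return report

# Hypothetical operation under test: validation of the mandatory Entry Status.
def validate_entry_status(value):
    return "accepted" if value.strip() else "rejected"

report = run_test_cases(validate_entry_status,
                        [("Approved", "accepted"), ("", "rejected")])
# Both cases pass: the actual output matches the expected output.
```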
Test Case Example:
8.3. FARMER PORTAL
8.3.1. Introduction of Project
The State Government has awarded a contract for the development of a unified Agriculture Portal for the benefit of farmers, comprising the entire life cycle of crops, animal husbandry, fisheries and poultry, including departmental applications and a call centre to provide information to farmers.
8.3.2. Applications:
In total, 11 functions or applications are involved in this portal.
1. Agri Clinic: Get answers from experts on any problem related to agriculture.
2. Complaint/Grievance: Various registered users will be able to register their complaints or grievances on the portal against a department, people, corruption, schemes/grants/projects/programmes, or about a company or product (e.g. the quality of fertilizers, manure or seeds). These complaints will be forwarded to the respective designated users for appropriate action.
3. Post a Query: Answered automatically if it matches an FAQ; directed to experts if new; answered through e-mail if the farmer has an e-mail id.
4. SMS: Mandi prices; dos and don'ts.
5. Departmental MIS: MIS for government departments for analysis. The module will help generate reports based on specific user requirements; it is therefore requested that the department confirm that these are the final reports and nothing more will be required.
6. Schemes Monitoring: For government departments, for analysis.
7. Grant Monitoring: For government departments, for analysis.
Together, these two applications will cover all the schemes/grants/projects/programmes of the entire Agriculture department, and will register and track applications and exhibit the selected beneficiaries on an individual, group and area-specific basis. Various reports on the schemes, grants, projects and programmes, both financial and physical, can be generated.
8. Chat Room: For the community; chat events with experts; video-conferencing with experts.
9. Registration: Create an e-mail id.
10. Call Centre: FAQ database; voice-based helpline; video-conferencing; upload of pictures, video and voice.
An application for an IVR-based call centre will be developed and hosted on the portal. A toll-free number for this call centre will be provided. Calls made to this number will land at Pant Nagar; the IVR will handle the first level of calls and then pass them on to agents, who give more help and information to the caller.
11. FAQs: On weather; on market commodity prices.
8.3.3. Objective: The Farmer Portal is intended for better delivery of services to farmers and to enhance the effectiveness, transparency and accountability of government and departmental processes. The ultimate goal of the portal is to bring the agricultural community together for:
Better production planning for farmers and other agricultural producers
Providing solutions to farmers for the various problems they face with crops etc. (this can be online video chat with Pant Nagar University professors, an e-mail-based call centre, or a voice-based call centre) and advisory services using IT initiatives
Establishing a platform for commerce on the internet to promote the regional economy and commerce (Phase II)
Increasing regional sales by enabling local producers to reach out to other states in India (domestic market)
Greater investment throughout the economic area
Creation of an information platform for agriculture
Provision of new sales channels over the internet
News: up-to-date news for the community (meteorology etc.)
Direct access of small farmers to the market
Good visibility under the shared virtual umbrella of the local region
A quick source of regional information
Added value through services such as an events calendar, news, chats, small ads, tourism and cinema
Simple operation, purchasing and information at the click of a mouse (Phase II)
A good overview of the regional offering and greater familiarity with regional customs and suppliers
8.3.4. Responsibility:
1. Designed and executed test scenarios and test cases for use cases.
2. Mapped the software specification and the document specification prepared for a particular module against the relevant IEEE standard.
3. Reviewed the test scenarios and test cases designed by other fellow trainees of my team, and the document and software specifications for the module, under the 1st and 2nd iterations.
4. Prepared a reliability matrix for a module, i.e. the number of test cases available, number of scenarios available, number of passed scenarios, number of failed scenarios, etc.
5. Tested performance for a particular module using tools such as Borland-Segue SilkPerformer, HP-Mercury LoadRunner and IBM Rational Test Studio.
8.3.5. Methodology:
8.3.5.1. Study the Documents:
The developer provides the SRS (Software Requirements Specification) document. Going through these documents, I extracted the user requirements and then tested whether the software meets those basic requirements.
8.3.5.2. Functional Testing
i) Developing Scenarios: A test scenario is a document that captures the major flow of the application; by executing the scenarios we test the overall flow and the major functions of the application. While preparing a scenario we include:
a) Step description.
b) Action performed.
c) Expected output.
d) Actual output.
e) Pass/Fail.
Test Scenario Example:
ii) Developing Test Cases: While preparing the test case we include
a) Input
b) Operation
c) Expected output.
d) Actual output.
e) Pass/Fail
Test Case Example:
9. AUTOMATED TOOLS FOR TESTING
9.1 INTRODUCTION
Because software testing often accounts for as much as 50% of all effort expended on a software implementation project, tools that can reduce test time (without reducing thoroughness) are very valuable. For that purpose, the use of the following types of automated tools is most desirable.
"Automated testing" is the automation of the manual testing process currently in use. Automated testing tools make continuous testing possible. In a manual testing environment, test setup, execution and evaluation take too much time; with the reduction in time and effort that automated tools bring, an iterative approach based on continuous testing becomes possible.
The real purpose of automated test tools is to automate regression testing. This means that one must have, or must develop, a database of detailed test cases that are repeatable, and this suite of tests is run every time there is a change to the application, to ensure that the change does not produce unintended consequences.
Automated testing is expensive. It does not replace the need for manual testing or enable one to "down-size" the testing department; it is an addition to one's testing process.
Considering this, apart from manually testing the LRIS application, we also performed automated testing using the Rational tool, Rational Robot.
9.2. RATIONAL ROBOT
Rational Robot is a complete set of components for
automating the testing of Microsoft Windows client/server, Internet and ERP applications running under Windows NT 4.0, Windows XP, Windows 2000, Windows 98 and Windows Me. The main component of Robot lets you start recording tests in as few as two mouse clicks; after recording, Robot plays back the tests in a fraction of the time it would take to repeat the actions manually.
Rational Robot is an automated functional regression testing tool. A functional test is one that is concerned only with whether or not the application operates the way it was intended; it does not concern itself with performance, robustness, etc. A regression test is one where an application is subjected to a suite of functional tests against build after build, to verify that everything that was working continues to work. The Rational Robot suite significantly reduces the learning curve of performing functional regression testing.
Components of Rational Robot:
● Rational Administrator -- Create and manage Rational projects to store your testing information.
● Rational TestManager -- Review and analyze test results.
● Object Properties, Text, Grid, and Image Comparators -- View and analyze the results of verification point playback.
● Rational SiteCheck -- Manage Internet and intranet Web sites.
Rational Robot provides test cases for common objects such as menus, lists and bitmaps, and specialized test cases for objects specific to the development environment. In addition, it includes built-in test management, and integrates with the tools in the IBM Rational Unified Process for defect tracking, change management and requirements traceability. It supports multiple UI technologies, including Java, the Web, all VS.NET controls, Oracle Forms, Borland Delphi and Sybase PowerBuilder applications.
In summary, Rational Robot:
Provides a general-purpose test automation tool for QA teams for functional testing of client/server applications
Lowers the learning curve for testers discovering the value of test-automation processes
Enables test-automation engineers to detect defects by extending test scripts and to define test cases
Provides test cases for common objects and specialized test cases for development-environment objects
Includes built-in test management and integrates with IBM Rational Unified Process tools
Aids in defect tracking, change management and requirements traceability
Supports multiple UI technologies
Operating systems supported: Windows
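Rational Robot records its scripts in its own scripting environment. Purely to illustrate the regression idea described above (a fixed, repeatable suite replayed against build after build, flagging anything that stops working), here is a language-neutral sketch in Python; the build names and test cases are hypothetical.

```python
def run_suite(suite, build):
    """Replay every recorded test against a build; each test returns True/False."""
    return {name: test(build) for name, test in suite.items()}

def find_regressions(previous, current):
    """A regression: a test that passed on the previous build but fails now."""
    return [name for name, passed in previous.items()
            if passed and not current.get(name, False)]

# Hypothetical repeatable test-case database; "search" breaks in build-102.
suite = {
    "login":  lambda build: True,
    "search": lambda build: build != "build-102",
}
baseline = run_suite(suite, "build-101")
latest = run_suite(suite, "build-102")
# find_regressions(baseline, latest) == ["search"]
```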
9.3 LOADRUNNER
LoadRunner (sometimes mistakenly called Load Runner) is a performance testing
tool from HP (formerly Mercury) used for predicting system behavior and
identifying performance issues. It is an element of the HP Mercury Performance
Center.
LoadRunner is a performance and load testing product by Hewlett-Packard (since
it acquired Mercury Interactive in November 2006) for examining system
behaviour and performance, while generating actual load. LoadRunner can
emulate hundreds or thousands of concurrent users to put the application through
the rigors of real-life user loads, while collecting information from key
infrastructure components (Web servers, database servers etc). The results can
then be analysed in detail, to explore the reasons for particular behaviour.
Consider the client-side application for an automated teller machine (ATM).
Although each client is connected to a server, in total there may be hundreds of
ATMs open to the public. There may be some peak times — such as 10 a.m.
Monday, the start of the work week — during which the load is much higher than
normal. In order to test such situations, it is not practical to have a testbed of
hundreds of ATMs. So, given an ATM simulator and a computer system with
LoadRunner, one can simulate a large number of users accessing the server
simultaneously. Once activities have been defined, they are repeatable. After
debugging a problem in the application, managers can check whether the problem
persists by reproducing the same situation, with the same type of user interaction.
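Real LoadRunner scripts are recorded in VuGen rather than written by hand, but the ATM example can be mimicked with a toy load generator to show the idea of many concurrent virtual users timing the same server call. The 10 ms "server" below is a stand-in, not a real ATM back end.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def atm_transaction(server, user_id):
    """One virtual user's scripted action: time a single server call."""
    start = time.perf_counter()
    server(user_id)                    # stand-in for the real ATM server call
    return time.perf_counter() - start

def run_load(server, n_users):
    """Emulate n_users concurrent virtual users and collect response times."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(lambda u: atm_transaction(server, u),
                             range(n_users)))

def fake_server(user_id):
    time.sleep(0.01)                   # simulated 10 ms service time

response_times = run_load(fake_server, 50)
# 50 samples, each at least the 10 ms service time.
```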
Working in LoadRunner involves using three different tools which are part of LoadRunner. They
are Virtual User Generator (VuGen), Controller and Analysis.
Virtual User Generator
The Virtual User Generator (VuGen) allows a user to record and/or script the test
to be performed against the application under test, and enables the performance
tester to play back and make modifications to the script as needed. Such
modifications may include Parameterization (selecting data for keyword-driven
testing), Correlation and Error handling.
During recording, VuGen records a tester's actions by routing data through a
proxy. The type of proxy depends upon the protocol being used, and affects the
form of the resulting script. For some protocols, various recording modes can be
selected to further refine the form of the resulting script. For instance, there are
two types of recording modes used in LoadRunner Web/HTTP testing: URL
based, and HTML based.
Once a script is prepared in VuGen, it is run via the Controller. LoadRunner
provides for the usage of various machines to act as Load Generators. For
example, to run a test of 1000 users, we can use three or more machines with a
LoadRunner agent installed on them. These machines are known as Load
Generators because the actual load will be generated from them (Load Generators
were previously known as "Injectors" - the latter term is still widely used). Each
run is configured with a scenario, which describes which scripts will run, when
they will run, how many virtual users will run, and which Load Generators will be
used for each script. The tester connects each script in the scenario to the name of
a machine which is going to act as a Load Generator, and sets the number of
virtual users to be run from that Load Generator.
LoadRunner uses monitors during a load test to monitor the performance of
individual components under load. Some monitors include Oracle monitors,
WebSphere monitors, etc... Once a scenario is set and the run is completed, the
result of the scenario can be viewed via the Analysis tool.
This tool takes the completed scenario result and prepares the necessary graphs for
the tester to view. Also, graphs can be merged to get a good picture of the
performance. The tester can then make needed adjustments to the graph and
prepare a LoadRunner report. The report, including all the necessary graphs, can
be saved in several formats, including HTML and Microsoft Word format.
NAVIGATIONAL STEPS FOR LOADRUNNER LAB EXERCISES
1. Creating a Script Using the Virtual User Generator
Start -> Program Files -> LoadRunner -> Virtual User Generator
Choose File -> New
Select the type and click the OK button
The Start Recording dialog box appears
Beside "Program to Record", click the Browse button and browse for the application
Choose the working directory
Start recording into the Vuser_Init section and click the OK button
After the application appears, change the section to Actions
Perform some actions on the application
Change the section to Vuser_End and close the application
Click the Stop Recording icon in the toolbar of the Vuser Generator
Insert the Start_Transaction and End_Transaction markers
Insert the Rendezvous point
Choose Vuser -> Run, and verify the status of the script in the Execution Log at the bottom
Choose File -> Save (remember the path of the script)
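The Start_Transaction and End_Transaction markers inserted above bracket a business step so that its response time is reported separately (in actual VuGen C scripts these are the lr_start_transaction and lr_end_transaction calls). Conceptually, with hypothetical transaction names:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def transaction(name):
    """Conceptual Start_Transaction / End_Transaction pair: everything inside
    the block is timed and reported under one transaction name."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with transaction("login"):
    time.sleep(0.02)       # stand-in for the recorded login actions

with transaction("file_return"):
    time.sleep(0.01)       # stand-in for the business step under test
```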
2. Running the Script in the Controller with the Wizard
Start -> Program Files -> LoadRunner -> Controller
Choose the Wizard option and click OK
Click Next in the Welcome screen
In the host list, click the Add button, enter the machine name and click the Next button
Select the script you generated in the Vuser Generator (GUI Vuser script, DB script or RTE script)
In the Simulation Group list, click the Edit button and change the group name and number of Vusers
Click the Next button
Click the Finish button
Choose Group -> Init, Group -> Run or Scenario -> Start
Finally, the LoadRunner Analysis graph report appears.
Analysis Graphs:
Figure 1. The throughput of the system in pages per second as load increases over
time
Note that the throughput increases at a constant rate and then at some point levels
off.
Figure 2. The execute queue length of the system as load increases over time
Note that the queue length is zero for a period of time, but then starts to grow at a
constant rate. This is because there is a steady increase in load on the system, and
although initially the system had enough free threads to cope with the additional
load, eventually it became overwhelmed and had to start queuing them up.
Figure 3. The response times of two transactions on the system as load increases
over time
Note that at the same time as the execute queue (above) starts to grow, the
response time also starts to grow at an increased rate. This is because the requests
cannot be served immediately.
Figure 4. This is what a flat run looks like. All the users are loaded
simultaneously.
Figure 5. This is what a ramp-up run looks like. The users are added at a constant
rate (x number per second) throughout the duration of the test.
Figure 6. The throughput of the system in pages per second as measured during a flat run
Note the appearance of waves over time: the throughput is not smooth but rather resembles a wave pattern.
This is visible from all aspects of the system including the CPU utilization.
Figure 7. The CPU utilization of the system over time, as measured during a flat run
Note the appearance of waves over a period of time: the CPU utilization is not smooth but rather has very sharp peaks that resemble the throughput graph's waves.
Figure 8. The execute queue of the system over time as measured during a flat run
Note the appearance of waves over time. The execute queue exactly mimics the
CPU utilization graph above.
Finally, the response time of the transactions on the system will also resemble this
wave pattern.
Figure 9. The response time of a transaction on the system over time as measured
during a flat run
Note the appearance of waves over time. The transaction response time lines up
with the above graphs, but the effect is diminished over time.
9.4 BORLAND SILKPERFORMER
ENSURE THE SCALABILITY, RESPONSIVENESS AND RELIABILITY OF CRITICAL APPLICATIONS
Borland SilkPerformer is a proven, powerful and
easy-to-use load and stress testing solution for optimizing the performance of
business applications. Easy-to-create, accurate and realistic tests simulate tens or
even tens of thousands of IT system users in a wide range of enterprise
environments and platforms. The tests isolate issues and bottlenecks that could
impact reliability, performance and scalability. Versatile visual scenario modeling
enables any load scenario to be tested and compared – whether it involves a single,
massive flood of requests to a website or the expected behavior of an enterprise
application under daily load. Any bottlenecks are
identified, then intuitive diagnostic and analysis capabilities help resolve the issue
quickly, reducing test-and-fix cycles, accelerating time-to-market, and supporting
critical release decisions related to application performance. To further reduce
costs and promote more testing by more people, SilkPerformer removes the usage
restrictions common in other solutions with its flexible, sharable deployment
model.
REDUCED COSTS, FEWER RISKS OF PERFORMANCE-RELATED FAILURES
SilkPerformer ensures the quality of business applications by measuring their
performance from the end-user perspective— while also monitoring system
performance—in a variety of scenarios under dynamic load conditions.
SilkPerformer can reduce costs and minimize performance risks by helping you:
Accurately assess application performance, scalability and reliability characteristics before deployment
Create realistic, reproducible load-test scenarios to cover all critical use cases and requirements
Isolate and resolve the root cause of performance problems in cross-platform systems quickly and easily
Lower IT infrastructure costs through tuning and accurate capacity planning before deployment
REALISTIC, LIGHTWEIGHT & ACCURATE SIMULATION
The innovative SilkPerformer technology minimizes the hardware resources
needed per virtual user, enabling more and larger load tests using significantly less
hardware than other popular solutions, reducing this often hidden cost. With
remotely managed load agent machines, the performance of various
configurations, user scenarios and network connections can be measured and
contrasted. And, within a single load test, virtual users working with different Internet, middleware and database protocols, across varied computing environments, can be simulated. For internationalized applications that utilize Unicode, SilkPerformer supports multibyte character sets and UTF-8. Client IP address simulation allows for the testing of load-balanced sites.
PROBLEM ISOLATION AND CORRECTION
Powerful end-to-end diagnostics capabilities help identify the root cause of
performance problems, then take corrective action and report on activities.
Client-Side Diagnostics
The TrueLog technology of SilkPerformer provides visual front-end diagnostics from the end-user perspective. TrueLog visually re-creates the data that users provide and receive during load tests (for HTML pages this includes all embedded objects), enabling you to visually analyze the behavior of your application as errors occur during load tests.
Borland SilkPerformer executes load tests and monitors results in real time. Detailed response-timer statistics help you uncover the root causes of missed service levels before your application goes live.
Server-Side Diagnostics
With the addition of the Server Analysis Module, you can monitor server statistics and automatically correlate the data with load-test results to identify ongoing problems with your system's back-end servers, even those located behind firewalls.
Code-Level Root-Cause Resolution
For deep-down, code-level resolution of performance issues, Borland offers dynaTrace Diagnostics. Fully integrated, click-through drill-down delivers a multi-tier performance breakdown to identify the root cause of performance bottlenecks, down to the offending line of code for both Java and .NET applications.
14. CONCLUSION
I was a part of the testing team and was actively involved in writing the test plan, test scenarios and test cases, reporting defects, preparing test reports, and doing manual and automated testing of the project.
In this project we applied our complete knowledge and were able to deliver what we had promised. We focused on the core functionality of the project, and many new ideas can still be implemented. It was a completely sincere effort on our part.
At the end of the testing phase of this project, we are confident that the project has an easy-to-use interface.
While working on the project we learnt many things, including team dynamics and skills. Overall, it was a fantastic experience.
15. REFERENCES
1. Lee Copeland, A Practitioner's Guide to Software Test Design.
2. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 6th Edition.
3. Software Requirements Specification of the concerned module.
4. Software User Documentation of the concerned module.
5. Gap Analysis Document of the concerned module.
6. IEEE 830, standard for Software Requirements Specifications.
7. IEEE 1063, standard for Software User Documentation.
8. IEEE 829, standard for Software Test Documentation.
9. www.wikipedia.org and www.istqb.org for definitions.
10. www.ogcio.gov.hk for "Guideline for Application Software Testing".
11. www.stqc.nic.in for organization details.
16. DEFINITIONS
A
Abstract test case: See high level test case.
Acceptance: See acceptance testing.
Acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]
Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]
Accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]
Accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.
Actual outcome: See actual result.
Actual result: The behavior produced/observed when a component or system is tested.
Ad hoc review: See informal review.
Ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.
Adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability.
Agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.
PROJECT REPORT: Software Testing Techniques
Algorithm test [TMap]: See branch testing.
Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability.
Analyzer: See static analyzer.
Anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident, problem.
Arc testing: See branch testing.
Attractiveness: The capability of the software product to be attractive to the user. [ISO 9126] See also usability.
Audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
o the form or content of the products to be produced
o the process by which the products shall be produced
o how compliance to standards or guidelines shall be measured. [IEEE 1028]
Audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]
Automated testware: Testware used in automated testing, such as tool scripts.
Availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
B
Back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]
Baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]
Basic block: A sequence of one or more consecutive executable statements containing no branches.
Basis test set: A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.
bebugging: See error seeding. [Abbott]
Behavior: The response of a component or system to a set of input values and preconditions.
Benchmark test: (1) A standard against which measurements or comparisons can be made.
(2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]
Bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.
Best practice: A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as ‘best’ by other peer organizations.
Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing.
Black-box technique: See black box test design technique.
Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
Blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.
Bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.
Boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.
Boundary value coverage: The percentage of boundary values that have been exercised by a test suite.
Boundary value testing: See boundary value analysis.
Branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.
Branch condition: See condition.
Branch condition combination coverage: See multiple condition coverage.
Branch condition combination testing: See multiple condition testing.
Branch condition coverage: See condition coverage.
Branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
Branch testing: A white box test design technique in which test cases are designed to execute branches.
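A minimal sketch of branch testing (the function `classify` is invented for illustration): the single decision below has two branches, so two test cases are enough for 100% branch coverage.

```python
def classify(x: int) -> str:
    """Hypothetical unit under test with a single two-way decision."""
    if x >= 0:        # decision: two branches (True / False)
        return "non-negative"
    return "negative"

# Two test cases exercise both branches, giving 100% branch coverage
# (and therefore 100% decision and 100% statement coverage as well):
assert classify(5) == "non-negative"   # exercises the True branch
assert classify(-3) == "negative"      # exercises the False branch
```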
Bug: See defect.
Bug report: See defect report.
Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.
C
Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance. [CMM]
Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best-practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]
Capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
Capture/replay tool: See capture/playback tool.
CASE: Acronym for Computer Aided Software Engineering.
CAST: Acronym for Computer Aided Software Testing. See also test automation.
Cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
Cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]
Cause-effect analysis: See cause-effect graphing.
Cause-effect decision table: See decision table.
Certification: The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.
Changeability: The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability.
Change control: See configuration control.
Change control board: See configuration control board.
Checker: See reviewer.
Chow's coverage metrics: See N-switch coverage. [Chow]
Classification tree method: A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]
Code: Computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler or other translator. [IEEE 610]
Code analyzer: See static code analyzer.
Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
Code-based testing: See white box testing.
Co-existence: The capability of the software product to co-exist with other independent software in a common environment sharing common resources. [ISO 9126] See also portability.
Commercial off-the-shelf software: See off-the-shelf software.
Comparator: See test comparator.
Compatibility testing: See interoperability testing.
Compiler: A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]
Complete testing: See exhaustive testing.
Completion criteria: See exit criteria.
Complexity: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.
Compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]
Compliance testing: The process of testing to determine the compliance of the component or system.
Component: A minimal software item that can be tested in isolation.
Component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.
Component specification: A description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).
Component testing: The testing of individual software components. [After IEEE 610]
Compound condition: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.
Concrete test case: See low level test case.
Concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]
Condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.
Condition combination coverage: See multiple condition coverage.
Condition combination testing: See multiple condition testing.
Condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
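The following sketch (with an invented `grant_discount` function) shows why 100% condition coverage does not by itself imply 100% decision coverage: each single condition takes both outcomes, yet the decision itself may never evaluate to False.

```python
def grant_discount(age: int, member: bool) -> bool:
    """Hypothetical decision built from two single conditions."""
    c1 = age >= 65        # condition 1
    c2 = member           # condition 2
    return c1 or c2       # decision outcome

# Condition coverage: each single condition must evaluate both True and False.
# These two cases already achieve 100% condition coverage...
assert grant_discount(70, False) is True    # c1=True,  c2=False
assert grant_discount(30, True) is True     # c1=False, c2=True
# ...yet the decision outcome was True both times, so decision coverage
# is incomplete until a case with a False decision outcome is added:
assert grant_discount(30, False) is False   # c1=False, c2=False
```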
Condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test suite. 100% condition determination coverage implies 100% decision condition coverage.
Condition determination testing: A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.
Condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.
Condition outcome: The evaluation of a condition to True or False.
Confidence test: See smoke test.
Configuration: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
Configuration auditing: The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]
Configuration control: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]
Configuration control board (CCB): A group of people responsible for evaluating and approving or disapproving proposed changes to configuration items, and for ensuring implementation of approved changes. [IEEE 610]
Configuration identification: An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]
Configuration item: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]
Configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]
Configuration management tool: A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.
Configuration testing: See portability testing.
Confirmation testing: See re-testing.
Conformance testing: See compliance testing.
Consistency: The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]
Control flow: A sequence of events (paths) in the execution through a component or system.
Control flow graph: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.
Control flow path: See path.
Conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.
COTS: Acronym for Commercial Off-The-Shelf software. See off-the-shelf software.
Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
Coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.
Coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.
Coverage tool: A tool that provides objective measures of what structural elements (e.g. statements, branches) have been exercised by a test suite.
Custom software: See bespoke software.
Cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine) [After McCabe]
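Assuming a hypothetical control flow graph with 7 edges, 6 nodes and one connected part, the formula can be applied directly:

```python
def cyclomatic_complexity(edges: int, nodes: int, parts: int) -> int:
    """McCabe's cyclomatic complexity: L - N + 2P."""
    return edges - nodes + 2 * parts

# Hypothetical control flow graph of a single connected routine (P = 1)
# with 7 edges and 6 nodes: complexity 7 - 6 + 2 = 3, i.e. three
# independent paths through the program.
assert cyclomatic_complexity(7, 6, 1) == 3
```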
Cyclomatic number: See cyclomatic complexity.
D
Daily build: A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.
Data definition: An executable statement where a variable is assigned a value.
Data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
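A minimal data-driven sketch (the function under test and the table rows are invented): one control loop executes every row of an input/expected-result table. In practice the table would live in a spreadsheet or CSV file rather than inline.

```python
def to_upper(s: str) -> str:
    """Hypothetical function under test."""
    return s.upper()

# The data table: each row pairs a test input with its expected result.
test_table = [
    ("abc", "ABC"),
    ("MiXeD", "MIXED"),
    ("", ""),
]

# A single control script drives all rows of the table.
for test_input, expected in test_table:
    actual = to_upper(test_input)
    assert actual == expected, f"{test_input!r}: got {actual!r}, want {expected!r}"
```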
Data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]
Data flow analysis: A form of static analysis based on the definition and usage of variables.
Data flow coverage: The percentage of definition-use pairs that have been exercised by a test suite.
Data flow testing: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.
Data integrity testing: See database integrity testing.
Database integrity testing: Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.
Dead code: See unreachable code.
Debugger: See debugging tool.
Debugging: The process of finding, analyzing and removing the causes of failures in software.
Debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
Decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.
Decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
Decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.
Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
Decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
Decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]
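A sketch of decision table testing with an invented withdrawal rule: each row of the table combines condition outcomes (causes) with the expected action (effect), and one test case executes each row.

```python
# Hypothetical decision table: condition combinations -> expected action.
decision_table = [
    # (valid_account, sufficient_funds) -> withdrawal_allowed
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def allow_withdrawal(valid_account: bool, sufficient_funds: bool) -> bool:
    """Hypothetical system behavior under test."""
    return valid_account and sufficient_funds

# One test case per row of the decision table.
for (valid, funds), expected in decision_table:
    assert allow_withdrawal(valid, funds) == expected
```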
Decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.
Decision outcome: The result of a decision (which therefore determines the branches to be taken).
Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards.
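With invented figures, both metrics reduce to simple arithmetic:

```python
# Defect density: defects found per unit size (here per KLOC).
defects_found = 45
size_kloc = 15.0
defect_density = defects_found / size_kloc
assert defect_density == 3.0  # 3 defects per thousand lines of code

# Defect Detection Percentage for a test phase: defects it found divided
# by those defects plus everything found afterwards by any other means.
found_in_system_test = 80
found_afterwards = 20
ddp = 100 * found_in_system_test / (found_in_system_test + found_afterwards)
assert ddp == 80.0  # the phase detected 80% of the defects present
```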
Defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]
Defect management tool: A tool that facilitates the recording and status tracking of defects. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.
Defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]
Defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]
Defect tracking tool: See defect management tool.
Definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational use (e.g. multiplication) and predicate use (to direct the execution of a path).
Deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.
Design-based testing: An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).
Desk checking: Testing of software or specification by manual simulation of its execution. See also static analysis.
Development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]
Deviation: See incident.
Deviation report: See incident report.
Dirty testing: See negative testing.
Documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.
Domain: The set from which valid input and/or output values can be selected.
Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
Dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]
Dynamic analysis tool: A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.
Dynamic comparison: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.
Dynamic testing: Testing that involves the execution of the software of a component or system.
E
Efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]
Efficiency testing: The process of testing to determine the efficiency of a software product.
Elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.
Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]
Entry point: The first executable statement within a component.
Equivalence class: See equivalence partition.
Equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
Equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.
Equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
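A sketch with a hypothetical shipping-fee function: the input domain splits into two valid partitions and one invalid partition, and one representative per partition covers each at least once.

```python
def shipping_fee(weight_kg: float) -> int:
    """Hypothetical function: the fee depends only on the weight partition."""
    if weight_kg <= 0:
        raise ValueError("invalid weight")
    if weight_kg <= 5:
        return 10
    return 25

# One representative per equivalence partition is enough in principle:
assert shipping_fee(2.0) == 10      # valid partition (0, 5]
assert shipping_fee(12.0) == 25     # valid partition (5, infinity)
try:
    shipping_fee(-1.0)              # invalid partition (-infinity, 0]
    assert False, "expected ValueError"
except ValueError:
    pass
```

Any other value in the same partition (e.g. 3.0 instead of 2.0) is assumed to behave the same, which is exactly the assumption the technique rests on.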
Error: A human action that produces an incorrect result. [After IEEE 610]
Error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
Error seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
Error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].
Evaluation: See testing.
Exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
Executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
Exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
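Exhaustive testing is rarely practical because combinations multiply; a sketch with three invented inputs already yields hundreds of cases:

```python
import itertools

# Even a tiny interface explodes combinatorially: three independent inputs
# with 10, 26 and 2 possible values each give 520 combinations to test.
digits = range(10)
letters = "abcdefghijklmnopqrstuvwxyz"
flags = (True, False)

all_combinations = list(itertools.product(digits, letters, flags))
assert len(all_combinations) == 10 * 26 * 2 == 520
```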
Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]
Exit point: The last executable statement within a component.
Expected outcome: See expected result.
Expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.
Experience-based test design technique: Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
Exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
F
Fail: A test is deemed to fail if its actual result does not match its expected result.
Failure: Deviation of the component or system from its expected delivery, service or result. [After Fenton]
Failure mode: The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]
Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.
Failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
Fault: See defect.
Fault density: See defect density.
Fault Detection Percentage (FDP): See Defect Detection Percentage (DDP).
Fault masking: See defect masking.
Fault tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability.
Fault tree analysis: A method used to analyze the causes of faults (defects).
Feasible path: A path for which a set of input values and preconditions exists which causes it to be executed.
Feature: An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]
Field testing: See beta testing.
Finite state machine: A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]
Finite state testing: See state transition testing.
Formal review: A review characterized by documented procedures and requirements, e.g. inspection.
Frozen test basis: A test basis document that can only be amended by a formal change control process. See also baseline.
Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.
Functional integration: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing.
Functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]
Functional test design technique: Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique.
Functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.
Functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]
Functionality testing: The process of testing to determine the functionality of a software product.
G
Glass box testing: See white box testing.
H
Heuristic evaluation: A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called “heuristics”).
High level test case: A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. See also low level test case.
Horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).
I
Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
Incident: Any event occurring that requires investigation. [After IEEE 1008]
Incident logging: Recording the details of any incident that occurred, e.g. during testing.
Incident management: The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]
Incident management tool: A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities. See also defect management tool.
Incident report: A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [After IEEE 829]
Incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.
Incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.
Independence: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]
Infeasible path: A path that cannot be exercised by any set of possible input values.
Informal review: A review not based on a formal (documented) procedure.
Input: A variable (whether stored within a component or outside) that is read by a component.
Input domain: The set from which valid input values can be selected. See also domain.
Input value: An instance of an input. See also input.
Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028] See also peer review.
Inspection leader: See moderator.
Inspector: See reviewer.
Installability: The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability.
Installability testing: The process of testing the installability of a software product. See also portability testing.
Installation guide: Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.
Installation wizard: Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation results, and prompts for options.
Instrumentation: The insertion of additional code into the program in order to collect information about program behavior during execution, e.g. for measuring code coverage.
Instrumenter: A software tool used to carry out instrumentation.
Intake test: A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase. See also smoke test.
Integration: The process of combining components or systems into larger assemblies.
Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.
Integration testing in the large: See system integration testing.
Integration testing in the small: See component integration testing.
Interface testing: An integration test type that is concerned with testing the interfaces between components or systems.
Interoperability: The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality.
Interoperability testing: The process of testing to determine the interoperability of a software product. See also functionality testing.
Invalid testing: Testing using input values that should be rejected by the component or system. See also error tolerance.
Isolation testing: Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.
Item transmittal report: See release note.
Iterative development model: A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
K
Key performance indicator: See performance indicator.
Keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data driven testing.
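A minimal keyword-driven sketch (the keywords, supporting functions and test steps are all invented): the control loop interprets each keyword by dispatching to its supporting script.

```python
# Supporting scripts, one per keyword (here, plain functions).
def open_app(state):
    state["open"] = True

def enter_text(state, text):
    state["text"] = text

def verify_text(state, expected):
    assert state["text"] == expected

keywords = {"OpenApp": open_app, "EnterText": enter_text, "VerifyText": verify_text}

# The keyword table: data file rows of (keyword, arguments).
test_steps = [
    ("OpenApp",),
    ("EnterText", "hello"),
    ("VerifyText", "hello"),
]

# The control script interprets each keyword in turn.
state = {}
for keyword, *args in test_steps:
    keywords[keyword](state, *args)

assert state == {"open": True, "text": "hello"}
```

The appeal is that the keyword table can be written by testers who know the application but not the scripting language.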
L
LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.
LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.
Learnability: The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability.
Level test plan: A test plan that typically addresses one test level. See also test plan.
Link testing: See component integration testing.
Load testing: A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system. See also stress testing.
Logic-coverage testing: See white box testing. [Myers]
Logic-driven testing: See white box testing.
Logical test case: See high level test case.
Low level test case: A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.
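A sketch of the high level vs. low level distinction, using an invented withdrawal example: the high level case states a logical operator ("amount just above the balance"), and the low level case replaces it with concrete implementation-level values.

```python
high_level_case = {
    "objective": "reject withdrawals above the balance",
    "input": "amount just above balance",  # logical operator, no concrete value
    "expected": "withdrawal refused",
}

def withdraw(balance, amount):
    """Toy implementation under test."""
    return "refused" if amount > balance else "ok"

# Low level test case: concrete values that realize the logical operator.
low_level_case = {"balance": 100, "amount": 101, "expected": "refused"}

result = withdraw(low_level_case["balance"], low_level_case["amount"])
assert result == low_level_case["expected"]
print("low level case passed:", result)
```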
M
Maintenance: Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]
Maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an operational system.
Maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]
Maintainability testing: The process of testing to determine the maintainability of a software product.
Management review: A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans
and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]
Master test plan: A test plan that typically addresses multiple test levels. See also test plan.
Maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.
Measure: The number or category assigned to an attribute of an entity by making a measurement. [ISO 14598]
Measurement: The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]
Measurement scale: A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]
Memory leak: A defect in a program's dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
Metric: A measurement scale and the method used for measurement. [ISO 14598]
Migration testing: See conversion testing.
Milestone: A point in time in a project at which defined (intermediate) deliverables and results should be ready.
Mistake: See error.
Moderator: The leader and main person responsible for an inspection or other review process.
Modified condition decision coverage: See condition determination coverage.
Modified condition decision testing: See condition determination coverage testing.
Modified multiple condition coverage: See condition determination coverage.
Modified multiple condition testing: See condition determination coverage testing.
Module: See component.
Module testing: See component testing.
Monitor: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]
Monitoring tool: See monitor.
Multiple condition: See compound condition.
Multiple condition coverage: The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.
Multiple condition testing: A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
Mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
Mutation testing: See back-to-back testing.
N
N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow] See also state transition testing.
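A sketch of the 1-switch (N=1) case with an invented door model: the coverage items are all valid sequences of N+1 = 2 consecutive transitions, which can be enumerated directly from the transition table.

```python
transitions = {  # state -> {event: next_state}
    "closed": {"open": "opened", "lock": "locked"},
    "opened": {"close": "closed"},
    "locked": {"unlock": "closed"},
}

def one_switch_sequences():
    """All valid pairs of consecutive transitions (1-switch coverage items)."""
    pairs = []
    for s1, events in transitions.items():
        for e1, s2 in events.items():
            for e2, s3 in transitions.get(s2, {}).items():
                pairs.append(((s1, e1, s2), (s2, e2, s3)))
    return pairs

for pair in one_switch_sequences():
    print(pair)
```

Test cases then only need to drive the machine through each listed pair; the percentage of pairs actually exercised is the 1-switch coverage.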
Negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer].
Non-conformity: Non fulfillment of a specified requirement. [ISO 9000]
Non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
Non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.
Non-functional test design techniques: Procedure to derive and/or select test cases for non-functional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique.
O
Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Operability: The capability of the software product to enable the user to operate and control it. [ISO 9126] See also usability.
Operational environment: Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
Operational profile testing: Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]
Operational testing: Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]
Oracle: See test oracle.
Outcome: See result.
Output: A variable (whether stored within a component or outside) that is written by a component.
Output domain: The set from which valid output values can be selected. See also domain.
Output value: An instance of an output. See also output.
P
Pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
Pair testing: Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
Partition testing: See equivalence partitioning. [Beizer]
Pass: A test is deemed to pass if its actual result matches its expected result.
Pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]
Path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
Path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
Path sensitizing: Choosing a set of input values to force the execution of a given path.
Path testing: A white box test design technique in which test cases are designed to execute paths.
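A sketch tying the last three entries together with an invented function: two sequential decisions give four paths, and path sensitizing means choosing one input per path that forces execution down it.

```python
def classify(x, y):
    label = ""
    if x > 0:        # decision 1
        label += "pos-x"
    else:
        label += "nonpos-x"
    if y > 0:        # decision 2
        label += "/pos-y"
    else:
        label += "/nonpos-y"
    return label

# One sensitized input per path (2 decisions x 2 outcomes = 4 paths):
path_inputs = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
for x, y in path_inputs:
    print((x, y), "->", classify(x, y))
```

Note how quickly this grows: with loops or more decisions the number of paths explodes, which is why 100% path coverage is rarely practical on real components.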
Peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency.
Performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development.[CMMI]
Performance testing: The process of testing to determine the performance of a software product. See also efficiency testing.
Performance testing tool: A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
Phase test plan: A test plan that typically addresses one test phase. See also test plan.
Portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
Portability testing: The process of testing to determine the portability of a software product.
Postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
Post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.
Precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
Predicted outcome: See expected result.
Pretest: See intake test.
Priority: The level of (business) importance assigned to an item, e.g. defect.
Probe effect: The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example performance may be slightly worse when performance testing tools are being used.
Problem: See defect.
Problem management: See defect management.
Problem report: See defect report.
Process: A set of interrelated activities, which transform inputs into outputs. [ISO 12207]
Process cycle test: A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap]
Product risk: A risk directly related to the test object. See also risk.
Project: A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]
Project risk: A risk related to management and control of the (test) project. See also risk.
Program instrumenter: See instrumenter.
Program testing: See component testing.
Project test plan: See master test plan.
Pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
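A short illustration using Python's standard library: a seeded generator looks random, but the same seed always reproduces the same sequence, which is exactly what makes pseudo-random test data repeatable.

```python
import random

def sequence(seed, n=5):
    """Pseudo-random integers: fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

print(sequence(42))
print(sequence(42) == sequence(42))  # True: prearranged, hence reproducible
```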
Q
Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]
Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]
Quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]
Quality characteristic: See quality attribute.
Quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]
R
Random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
Recorder: See scribe.
Record/playback tool: See capture/playback tool.
Recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability.
Recoverability testing: The process of testing to determine the recoverability of a software product. See also reliability testing.
Recovery testing: See recoverability testing.
Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
Regulation testing: See compliance testing.
Release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]
Reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]
Reliability testing: The process of testing to determine the reliability of a software product.
Replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability.
Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]
Requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.
Requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.
Requirements phase: The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]
Resource utilization: The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [After ISO 9126] See also efficiency.
Resource utilization testing: The process of testing to determine the resource-utilization of a software product. See also efficiency testing.
Result: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result.
Resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]
Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]
Reviewer: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
Review tool: A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.
Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).
Risk-based testing: Testing oriented towards exploring and providing information about product risks. [After Gerrard]
Risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.
Risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.
Risk mitigation: See risk control.
Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See also error-tolerance, fault-tolerance.
Robustness testing: Testing to determine the robustness of the software product.
Root cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.
S
Safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]
Safety testing: Testing to determine the safety of a software product.
Sanity test: See smoke test.
Scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]
Scalability testing: Testing to determine the scalability of the software product.
Scenario testing: See use case testing.
Scribe: The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Scripting language: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).
Security: Attributes of software products that bear on their ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126] See also functionality.
Security testing: Testing to determine the security of the software product. See also functionality testing.
Security testing tool: A tool that provides support for testing security characteristics and vulnerabilities.
Security tool: A tool that supports operational security.
Serviceability testing: See maintainability testing.
Severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]
Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]
Simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator.
Site acceptance testing: Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.
Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.
Software: Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system. [IEEE 610]
Software feature: See feature.
Software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
Software quality characteristic: See quality attribute.
Software test incident: See incident.
Software test incident report: See incident report.
Software Usability Measurement Inventory (SUMI): A questionnaire based usability test technique to evaluate the usability, e.g. user-satisfaction, of a component or system. [Veenendaal]
Source statement: See statement.
Specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]
Specification-based testing: See black box testing.
Specification-based test design technique: See black box test design technique.
Specified input: An input for which the specification predicts a result.
Stability: The capability of the software product to avoid unexpected effects from modifications in the software. [ISO 9126] See also maintainability.
Standard software: See off-the-shelf software.
Standards testing: See compliance testing.
State diagram: A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]
State table: A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
State transition: A transition between two states of a component or system.
State transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.
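A standalone sketch of state transition testing with an invented card-session model: test cases drive the machine through valid transitions and also confirm that invalid transitions (events not allowed in the current state) are rejected.

```python
class CardSession:
    def __init__(self):
        self.state = "idle"
        self._valid = {
            ("idle", "insert"): "pin_entry",
            ("pin_entry", "correct_pin"): "menu",
            ("pin_entry", "eject"): "idle",
            ("menu", "eject"): "idle",
        }

    def fire(self, event):
        key = (self.state, event)
        if key not in self._valid:
            raise ValueError(f"invalid transition: {key}")
        self.state = self._valid[key]

s = CardSession()
s.fire("insert"); s.fire("correct_pin")  # valid sequence of transitions
print("state:", s.state)                 # prints: state: menu
try:
    s.fire("insert")                     # invalid: 'insert' not allowed in 'menu'
except ValueError as e:
    print("rejected:", e)
```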
Statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage: The percentage of executable statements that have been exercised by a test suite.
Statement testing: A white box test design technique in which test cases are designed to execute statements.
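A hand-instrumented sketch of statement coverage (a real tool such as coverage.py would record this automatically; the function and counters are invented): each executable statement of interest records that it ran, and coverage is hit statements divided by total statements.

```python
hits = set()

def grade(score):
    hits.add(1); result = "fail"    # statement 1: always executed
    if score >= 50:
        hits.add(2); result = "pass"  # statement 2: only for passing scores
    return result

TOTAL_STATEMENTS = 2
grade(30)                           # executes statement 1 only
print("coverage:", len(hits) / TOTAL_STATEMENTS)  # prints: coverage: 0.5
grade(80)                           # now statement 2 is also executed
print("coverage:", len(hits) / TOTAL_STATEMENTS)  # prints: coverage: 1.0
```

Statement testing is then the design technique of choosing inputs (here 30 and 80) specifically so that every statement is exercised.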
Static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
Static analysis tool: See static analyzer.
Static analyzer: A tool that carries out static analysis.
Static code analysis: Analysis of source code carried out without execution of that software.
Static code analyzer: A tool that carries out static code analysis. The tool checks source code, for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.
Static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
Statistical testing: A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing.
Status accounting: An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes. [IEEE 610]
Storage: See resource utilization.
Storage testing: See resource utilization testing.
Stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610] See also load testing.
Structure-based techniques: See white box test design technique.
Structural coverage: Coverage measures based on the internal structure of a component or system.
Structural test design technique: See white box test design technique.
Structural testing: See white box testing.
Structured walkthrough: See walkthrough.
Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
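A sketch of a stub in use, with invented names: the component under test depends on a payment gateway it calls, and a skeletal stub with a canned response replaces the real gateway so the component can be tested in isolation.

```python
class PaymentGatewayStub:
    """Replaces the real called component with fixed, predictable behavior."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}  # canned response

def checkout(cart_total, gateway):
    """Component under test: depends on a gateway that it calls."""
    receipt = gateway.charge(cart_total)
    return receipt["status"] == "approved"

print(checkout(25.0, PaymentGatewayStub()))  # prints: True, no real gateway needed
```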
Subpath: A sequence of executable statements within a component.
Suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality.
Suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]
Syntax testing: A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.
System: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]
System integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).
System testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]
T
Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review.
Test: A set of one or more test cases [IEEE 829]
Test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test)
project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
Test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.
Test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]
Test bed: See test environment.
Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]
Test case design technique: See test design technique.
Test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]
Test case suite: See test suite.
Test charter: A statement of test objectives, and possibly test ideas on how to test. Test charters are for example often used in exploratory testing. See also exploratory testing.
Test closure: During the test closure phase of a test process data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report. See also test process.
Test comparator: A test tool to perform automated test comparison.
Test comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
Test completion criteria: See exit criteria.
Test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
Test control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management.
Test coverage: See coverage.
Test cycle: Execution of the test process against a single identifiable release of the test object.
Test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
Test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
Test design: See test design specification.
Test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]
Test design technique: Procedure used to derive and/or select test cases.
Test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, from specified test conditions held in the tool itself, or from code.
Test driver: See driver.
Test driven development: A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
Test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]
Test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.
Test execution: The process of running a test on the component or system under test, producing actual result(s).
Test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
Test execution phase: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]
Test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
Test execution technique: The method used to perform the actual test execution, either manually or automated.
Test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]
Test fail: See fail.
Test generator: See test data preparation tool.
Test harness: A test environment comprising the stubs and drivers needed to execute a test.
Test incident: See incident.
Test incident report: See incident report.
Test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
Test input: The data received from an external source by the test object during test execution. The external source can be hardware, software or human.
Test item: The individual element to be tested. There usually is one test object and many test items. See also test object.
Test item transmittal report: See release note.
Test leader: See test manager.
Test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]
Test log: A chronological record of relevant details about the execution of tests. [IEEE 829]
Test logging: The process of recording information about tests executed into a test log.
Test manager: The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
Test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.
Test management tool: A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.
Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that describes the key elements of an effective test process.
Test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management.
Test object: The component or system to be tested. See also test item.
Test objective: A reason or purpose for designing and executing a test.
Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code. [After Adrion]
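A minimal Python sketch of the oracle idea, assuming a hypothetical tax calculation where the existing (legacy) implementation serves as the benchmark; both functions and the 15% rate are invented for illustration:

```python
# Oracle sketch: a trusted reference (here, a model of the existing system)
# supplies expected results to compare against the new implementation.

def legacy_tax(amount):
    """Oracle: the existing system's behaviour, used as the benchmark."""
    return round(amount * 0.15, 2)

def new_tax(amount):
    """Software under test: a hypothetical reimplementation."""
    return round(amount * 15 / 100, 2)

def check_against_oracle(inputs):
    """Compare actual results with the oracle's expected results."""
    failures = [(x, new_tax(x), legacy_tax(x))
                for x in inputs if new_tax(x) != legacy_tax(x)]
    return failures  # an empty list means all comparisons passed
```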
Test outcome: See result.
Test pass: See pass.

Test performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).
Test phase: A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level. [After Gerrard]
Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]
Test planning: The activity of establishing or updating a test plan.
Test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.
Test Point Analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]
Test procedure: See test procedure specification.
Test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]
Test process: The fundamental test process comprises planning, specification, execution, recording, checking for completion and test closure activities. [After BS 7925/2]
Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.
Test record: See test log.
Test recording: See test logging.
Test reproducibility: An attribute of a test indicating whether the same results are produced each time the test is executed.
Test report: See test summary report.
Test requirement: See test condition.
Test result: See result.

Test run: Execution of a test on a specific version of the test object.

Test run log: See test log.
Test scenario: See test procedure specification.
Test script: Commonly used to refer to a test procedure specification, especially an automated one.
Test set: See test suite.
Test situation: See test condition.
Test specification: A document that consists of a test design specification, test case specification and/or test procedure specification.
Test specification technique: See test design technique.
Test stage: See test level.
Test strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).
Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
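The chaining of one case's postcondition into the next case's precondition can be sketched as follows; the AccountSuite class and its balance rules are hypothetical:

```python
# Test-suite sketch: the postcondition of one test case is the
# precondition of the next, so the cases must run in order
# against shared state.

class AccountSuite:
    """Hypothetical suite for a simple account object."""
    def __init__(self):
        self.balance = 0  # shared state carried between test cases

    def test_open_account(self):
        self.balance = 100            # postcondition: balance is 100
        assert self.balance == 100

    def test_withdraw(self):
        assert self.balance == 100    # precondition from previous case
        self.balance -= 40
        assert self.balance == 60

def run_suite():
    suite = AccountSuite()
    for case in (suite.test_open_account, suite.test_withdraw):
        case()
    return suite.balance
```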
Test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]
Test target: A set of exit criteria.
Test technique: See test design technique.
Test tool: A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST.
Test type: A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test etc. A test type may take place on one or more test levels or test phases. [After TMap]
Testability: The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability.
Testability review: A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]
Testable requirements: The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]
Tester: A skilled professional who is involved in the testing of a component or system.
Testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]
Thread testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.
Time behavior: See performance.
Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. See also integration testing.
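A minimal Python sketch of the top-down idea, where a hypothetical top-level component is tested first with a stub standing in for a lower-level formatter that is not yet integrated:

```python
# Top-down integration sketch: the top-level component is tested
# with its lower-level dependency replaced by a stub; once the real
# module is ready, the stub is swapped out for it.

def format_report_stub(data):
    """Stub for the not-yet-integrated lower-level formatter."""
    return f"STUB:{len(data)} rows"

def format_report(data):
    """Real lower-level component, integrated later."""
    return "\n".join(str(row) for row in data)

def generate_summary(data, formatter):
    """Top-level component under test; the formatter is injected."""
    return f"Report ({len(data)} rows)\n{formatter(data)}"
```

The same top-level test can be re-run with format_report in place of the stub once that component has passed its own tests.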
Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.
U
Understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability.
Unit: See component.
Unit testing: See component testing.
Unreachable code: Code that cannot be reached and therefore is impossible to execute.
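A small hypothetical Python example of unreachable code:

```python
# Unreachable-code sketch: the final return can never execute,
# because every path through the function returns earlier.

def sign(n):
    if n > 0:
        return 1
    elif n < 0:
        return -1
    else:
        return 0
    return 99  # unreachable: all cases are already handled above
```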
Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]
Usability testing: Testing to determine the extent to which the software product is
understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]
Use case: A sequence of transactions in a dialogue between a user and the system with a tangible result.
Use case testing: A black box test design technique in which test cases are designed to execute user scenarios.
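A sketch of a use case test in Python; the ShopSession class and its scenario (log in, add an item, check out) are invented purely for illustration:

```python
# Use-case test sketch: one test case walks through a complete
# user scenario end to end, rather than testing features in isolation.

class ShopSession:
    """Hypothetical system under test."""
    def __init__(self):
        self.logged_in = False
        self.cart = []

    def login(self, user):
        self.logged_in = True
        self.user = user

    def add_item(self, item):
        assert self.logged_in, "precondition: user must be logged in"
        self.cart.append(item)

    def checkout(self):
        total = len(self.cart)
        self.cart = []
        return f"order placed: {total} item(s)"

def test_purchase_scenario():
    """One user scenario exercised as a single test case."""
    s = ShopSession()
    s.login("alice")
    s.add_item("book")
    return s.checkout()
```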
User acceptance testing: See acceptance testing.
User scenario testing: See use case testing.
User test: A test whereby real-life users are involved to evaluate the usability of a component or system.
V
V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.
Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]
Variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.
Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]
Vertical traceability: The tracing of requirements through the layers of development documentation to components.
Version control: See configuration control.
Volume testing: Testing where the system is subjected to large volumes of data. See also resource-utilization testing.
W
Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review.
White-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
White-box testing: Testing based on an analysis of the internal structure of the component or system.
Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.
11. SUMMARY
11.1 TESTING DOCUMENTATION AND PARTIES INVOLVED
11.2 TESTING ACTIVITIES IN SYSTEM DEVELOPMENT LIFE CYCLE
** For turnkey and contracted-out development projects, the Acceptance test plan and test specification would be prepared jointly by user department representatives and officers.
APPENDIX A: CHECKLIST ON UNIT TESTING
(This checklist, which suggests areas for the definition of test cases, is for information purposes only and is in no way meant to be an exhaustive list. Please also note that a negative tone, matching the suggestions in section 6.1, has been used.)
Input
1. Validation rules of data fields do not match with the program/data specification.
2. Valid data fields are rejected.
3. Data fields of invalid class, range and format are accepted.
4. Invalid fields cause abnormal program end.
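The input checks above can be sketched as unit tests against a hypothetical field validator; the age field and its 0-130 range are assumptions made only for illustration:

```python
# Sketch of unit tests for the input checklist: valid fields must be
# accepted, and fields of invalid class, range or format must be
# rejected without causing an abnormal program end.
import re

def validate_age(value):
    """Hypothetical validator: accept integer strings in 0-130 only."""
    if not re.fullmatch(r"\d+", value):   # class/format check
        return False
    return 0 <= int(value) <= 130         # range check

def run_input_checks():
    results = {
        "valid accepted": validate_age("42") is True,
        "bad format rejected": validate_age("4x2") is False,
        "out of range rejected": validate_age("999") is False,
        "empty rejected": validate_age("") is False,
    }
    return all(results.values())
```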
Output

1. Output messages are shown with misspellings, incorrect meanings, or inconsistent wording.

2. Output messages are shown when they are not supposed to be, or are not shown when they are supposed to be.

3. Reports/Screens do not conform to the specified layout, with misspelled data labels/titles, mismatched data labels and information content, and/or incorrect data sizes.

4. Reports/Screens page numbering is out of sequence.

5. Reports/Screens breaks do not happen, or happen at the wrong places.

6. Reports/Screens control totals do not tally with individual items.

7. Screen video attributes are not set/reset as they should be.
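Item 6 above (control totals) can be sketched as a simple check; the report structure shown is hypothetical:

```python
# Sketch of an output check: a report's control total must tally
# with the sum of its individual line items.

def build_report(items):
    """Hypothetical report: line items plus a control total."""
    return {"lines": list(items), "control_total": sum(items)}

def control_total_tallies(report):
    """True when the control total matches the sum of the lines."""
    return report["control_total"] == sum(report["lines"])
```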
File Access
1. Data fields are not updated as input.
2. “No-file” cases cause program abnormal end.
3. “Empty-file” cases cause program abnormal end.
4. Program data storage areas do not match with the file layout.
5. The last input record (in a batch of transactions) is not updated.
6. The last record in a file is not read while it should be.
7. Deadlock occurs when the same record/file is accessed by more than one user.
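Items 2 and 3 above ("no-file" and "empty-file" cases) can be sketched as a test; the count_records function is a hypothetical example of a program that handles both cases gracefully:

```python
# Sketch of file-access edge-case tests: a missing file and an empty
# file must both be handled instead of ending the program abnormally.
import os
import tempfile

def count_records(path):
    """Return the number of records (lines); 0 for missing or empty files."""
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return sum(1 for _ in f)

def test_file_edge_cases():
    missing = count_records("/nonexistent/hypothetical/file")
    with tempfile.NamedTemporaryFile("w", delete=False) as f:
        empty_path = f.name
    try:
        empty = count_records(empty_path)
    finally:
        os.unlink(empty_path)
    return (missing, empty)
```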
Internal Logic
1. Counters are not initialized as they should be.
2. Mathematical accuracy and rounding do not conform to the prescribed rules.
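Item 2 above can be sketched as a rounding check; round-half-up to 2 decimal places is an assumed example of a "prescribed rule", chosen because plain binary floating point often violates it:

```python
# Sketch of a rounding-conformance check, assuming the prescribed rule
# is round-half-up to 2 decimal places (a common commercial rule).
from decimal import Decimal, ROUND_HALF_UP

def money_round(value):
    """Round-half-up to 2 decimal places using exact decimal arithmetic."""
    return Decimal(str(value)).quantize(Decimal("0.01"),
                                        rounding=ROUND_HALF_UP)

def check_rounding():
    """Verify the half-up rule on tie cases, including a negative one."""
    cases = {"2.005": "2.01", "2.004": "2.00", "-1.115": "-1.12"}
    return all(str(money_round(k)) == v for k, v in cases.items())
```

Converting through Decimal(str(...)) avoids the binary-float representation errors that make round(2.675, 2) unreliable.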
Job Control Procedures
1. A wrong program is invoked and/or the wrong library/files are referenced.
2. Program execution sequence does not follow the JCL condition codes setting.
3. Run time parameters are not validated before use.
Program Documentation
1. Documentation is not consistent with the program behavior.
Program Structure (through program walkthrough)
1. Coding structure does not follow installation standards.
Performance
1. The program runs longer than the specified response time.
Sample Test Cases
1. Screen labels checks.
2. Screen videos checks with test data set 1.
3. Creation of record with valid data set 2.
4. Rejection of record with invalid data set 3.
5. Error handling upon empty file 1.
6. Batch program run with test data set 4.
APPENDIX B: CHECKLIST ON LINK TESTING
(This checklist, which suggests areas for the definition of test cases, is for information purposes only and is in no way meant to be an exhaustive list. Please also note that a negative tone, matching the suggestions in section 6.1, has been used.)
Global Data (e.g. Linkage Section)
1. Global variables have different definitions and/or attributes in the programs that reference them.
Program Interfaces
1. The called programs are not invoked while they are supposed to be.
2. Any two interfaced programs have a different number of parameters, and/or the attributes of these parameters are defined differently in the two programs.
3. Passing parameters are modified by the called program while they are not supposed to be.
4. Called programs behave differently when the calling program calls them twice with the same set of input data.
5. File pointers held in the calling program are destroyed after another program is
called.
Consistency among programs
1. The same error is treated differently (e.g. with different messages, with different
termination status etc.) in different programs.
Sample Test Cases
1. Interface test between programs xyz, abc & jkl.
2. Global (memory) data file 1 test with data set 1.
APPENDIX C: CHECKLIST ON FUNCTION TESTING
(This checklist, which suggests areas for the definition of test cases, is for information purposes only and is in no way meant to be an exhaustive list. Please also note that a negative tone, matching the suggestions in section 6.1, has been used.)
Comprehensiveness
1. Agreed business function is not implemented by any transaction/report.
Correctness
1. The developed transaction/report does not achieve the said business function.
Sample Test cases
1. Creation of records under user normal environment.
2. Enquiry of the same record from 2 terminals.
3. Printing of records when the printer is in normal condition.
4. Printing of records when the printer is off-line or out of paper.
5. Unsolicited message sent to console/supervisory terminal when a certain
time limit is reached.
APPENDIX D: CHECKLIST ON SYSTEMS TESTING
(This checklist, which suggests areas for the definition of test cases, is for information purposes only and is in no way meant to be an exhaustive list. Please also note that a negative tone, matching the suggestions in section 6.1, has been used.)
Volume Testing
1. The system cannot handle a pre-defined number of transactions.
Stress Testing
1. The system cannot handle a pre-defined number of transactions over a short span of time.
Performance Testing
1. The response time exceeds a pre-defined limit under certain workloads.
Recovery Testing
1. The database cannot be recovered in the event of a system failure.
2. The system cannot be restarted after a system crash.
Security Testing
1. The system can be accessed by an unauthorized person.

2. The system does not log out automatically in the event of a terminal failure.
Procedure Testing
1. The system is inconsistent with manual operation procedures.
Regression Testing
1. The sub-system / system being installed affects the normal operation of other systems / sub-systems already installed.
Operation Testing
1. The information in the operation manual is not clear or is inconsistent with the application system.

2. The operation manual does not cover all the operation procedures of the system.
Sample Test Cases
1. System performance test with workload mix 1.
2. Terminal power off while an update transaction is being processed.
3. Security break-in attempt: pressing different key combinations on the logon screen.
4. Reload from backup tape.
APPENDIX E: CHECKLIST ON ACCEPTANCE TESTING
(This checklist, which suggests areas for the definition of test cases, is for information purposes only and is in no way meant to be an exhaustive list. Please also note that a negative tone, matching the suggestions in section 6.1, has been used.)
Comprehensiveness
1. Agreed business function is not implemented by any transaction/report.
Correctness
1. The developed transaction/report does not achieve the said business function.
Sample Test Cases
(Similar to Function Testing)
APPENDIX F: CHECKLIST FOR CONTRACTED-OUT SOFTWARE DEVELOPMENT
1. Tailor and reference these guidelines in the tender specification.
2. Check for the inclusion of an overall test plan in the tender proposal or accept it
as the first deliverable from the contractor.
3. Set up a Test Control Sub-committee to monitor the testing progress.
4. Review and accept the different types of test plan, test specifications and test
results.
5. Wherever possible, ask if there are any tools (ref. Section 10) to demonstrate the structuredness and test coverage of the software developed.
6. Perform sample program walkthrough.
7. Ask for a periodic test progress report. (ref. Section 8.5)
8. Ask for the contractor's contribution in preparing for the Acceptance Testing process.
9. Ask for a Test Summary Report (ref. section 8.6) at the end of the project.