Software Testing Fundamentals -[1]


Transcript of Software Testing Fundamentals -[1]

Page 1: Software Testing Fundamentals -[1]

Software Testing Fundamentals

Page 2: Software Testing Fundamentals -[1]

V Model

[V-model diagram:]

Verification (left arm): URS → SRS → HLD → LLD → Coding

Test planning alongside each phase: URS → UAT planning; SRS → System test planning; HLD → Integration test planning; LLD → Unit test planning

Validation (right arm): Unit testing → Integration testing → System testing → User Acceptance Testing

Then: Delivery/production deployment, followed by Maintenance and enhancement

Page 3: Software Testing Fundamentals -[1]

Software Testing Definitions

The process of executing a program or part of a program with the intent of finding errors (Myers)

Testing is the process of trying to discover every conceivable fault or weakness in a work product (Myers)

The process of searching for errors (Kaner)

Testing is the process of evaluating or exercising a system or system component by manual or automated means to verify that the software meets specified requirements (IEEE)

Page 4: Software Testing Fundamentals -[1]

Role of a Tester

Assuring that the software meets user’s needs

Software can be used with negligible risks

This is achieved through Verification and Validation

Page 5: Software Testing Fundamentals -[1]

Verification

Verification is the process of determining whether or not the product of a given phase fulfills the specifications from the previous phase

Uses reviews, inspections, and demonstrations throughout development to ensure the quality of the product of that phase, including that it meets the requirements from the previous phase

“Are we building the product right?”

Page 6: Software Testing Fundamentals -[1]

Validation

The process of evaluating the software at the end of development to ensure compliance with the specified requirements

Includes what is commonly thought of as testing and comparing test results to expected results. Validation occurs at the end of the development process.

“Are we building the right product?”

Page 7: Software Testing Fundamentals -[1]

Static & Dynamic Testing

Most of the Verification and Validation activities can be classified as Static or Dynamic

Static testing (performed without executing any program):

Requirement reviews

Design reviews

Code reviews

Dynamic testing: testing the software by executing the program

Page 8: Software Testing Fundamentals -[1]

Characteristics of Static Testing

Static testing:

Does not observe system behavior

Is not looking for system failures

Faults are directly detected

Focus is on evaluating adherence to standards, guidelines and processes

Page 9: Software Testing Fundamentals -[1]

Characteristics of Dynamic Testing

Dynamic testing:

The program is executed

System behavior is observed

Determines the existence of failures

Reveals the presence of faults

Page 10: Software Testing Fundamentals -[1]

White Box Testing (Code based testing)

A software testing technique that uses explicit knowledge of the internal workings of the item being tested

White box testing uses specific knowledge of the programming code to examine outputs

Also known as glass box, structural, clear box and open box testing

Page 11: Software Testing Fundamentals -[1]

Advantages of white box testing

Helps to identify the following:

Adherence to coding standards

Adherence to coding guidelines

Indentation

Memory leaks

Logical complexity of the program

Limitations of the program

Page 12: Software Testing Fundamentals -[1]

Black Box Testing (Requirement based testing)

A software testing technique whereby the expected outcome of the software is verified by providing inputs, without considering how the software program arrives at those outputs.

The internal workings of the item being tested are not known by the tester in black box testing.

The tester does not ever examine the programming code and does not need any further knowledge of the program other than its specifications.

Page 13: Software Testing Fundamentals -[1]

Advantages of Black Box testing

The test is unbiased because the designer and the tester are independent of each other.

The tester does not need knowledge of any specific programming language(s).

The test is done from the point of view of the end user, not the designer or programmer.

Test cases can be designed as soon as the specifications are complete.

Page 14: Software Testing Fundamentals -[1]

Conclusions

White box testing does not guarantee 100% conformance to requirements.

Black box testing does not concentrate on the logic of the program, but ensures conformance to requirements.

Hence, both white box and black box testing are required to ensure product quality.

All types of testing, whether static or dynamic, white box or black box, are part of verification and validation activities.

Let us see verification and validation activities.

Page 15: Software Testing Fundamentals -[1]

Verification & Validation activities

Verification:

Requirement reviews

Design reviews

Code reviews

Validation:

Unit testing

Module testing

Integration testing

System testing

Regression testing

User acceptance testing

Field testing

Page 16: Software Testing Fundamentals -[1]

Software Testing Life Cycle [STLC]

Page 17: Software Testing Fundamentals -[1]

STLC Activities

Test Requirements document

Test Planning

Test Design

Test Execution

Defect Tracking

Page 18: Software Testing Fundamentals -[1]

Test Requirements Document

From the software requirement specification (SRS) document, a list of testable requirements is extracted; this list is referred to as the Test Requirements document.

All non-technical and un-testable requirements are excluded from this document.

The Test Requirements document is the base for all further testing activities.

Page 19: Software Testing Fundamentals -[1]

Test Planning

Mainly, the Test Plan addresses:

Scope and objectives of testing

Schedule, resources and reporting

Types of testing and methodology

Phases of testing applicable and the scope of testing in each phase

Software and hardware requirements

Identified risks and the strategy for mitigating those risks

Information regarding tools used through the entire testing life cycle

Page 20: Software Testing Fundamentals -[1]

Test Design

Test Design is applicable to both white box and black box testing

Test design activity involves designing test cases for a given requirement (Black box testing) or for a given program (white box testing).

A test case is defined as "a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement" [IEEE]
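To make the IEEE definition concrete, the following is a minimal Python sketch of a test case record; the field names and the sample requirement ID are illustrative, not drawn from the standard.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    objective: str             # the requirement or program path to exercise
    preconditions: list[str]   # execution conditions that must hold first
    inputs: dict[str, object]  # test inputs
    expected_result: str       # expected result to compare against

# Hypothetical example: REQ-AUTH-03 is an illustrative requirement ID.
tc = TestCase(
    objective="Verify login rejects an empty password (REQ-AUTH-03)",
    preconditions=["user 'alice' exists", "login page is displayed"],
    inputs={"username": "alice", "password": ""},
    expected_result="error message 'Password required' is shown",
)
```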

Page 21: Software Testing Fundamentals -[1]

Test Execution

Test execution involves executing developed test cases on a piece of the program developed (code-based test cases) or on the entire software application (requirements-based test cases).

The status of each test case is updated during execution. Possible states include: Pass, Fail, Unable to test, Deferred.

Test execution statistics are collected and analyzed for test progress monitoring.
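A small sketch of how the execution states above might be recorded and summarized for progress monitoring; the state names follow the slide, everything else is illustrative.

```python
from collections import Counter
from enum import Enum

class Status(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    UNABLE_TO_TEST = "Unable to test"
    DEFERRED = "Deferred"

# Statuses recorded while executing a test suite (sample data).
results = [Status.PASS, Status.PASS, Status.FAIL, Status.DEFERRED]

stats = Counter(r.value for r in results)
executed = stats["Pass"] + stats["Fail"]
print(stats)  # per-state counts
print(f"pass rate of executed cases: {stats['Pass'] / executed:.0%}")
```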

Page 22: Software Testing Fundamentals -[1]

Defect Tracking

When the actual result obtained from the software application during testing deviates from the expected result written in the test case, it is termed a "defect".

The test case is marked as failed and a defect is posted against the software.

The defect is fixed by the development team and the fix is provided in a subsequent release.

The fix provided for the defect is validated and, if found to be working, the test case passes and the defect is closed.

Posting, tracking, and closing defects are done in a defect tracking tool.

Page 23: Software Testing Fundamentals -[1]

SDLC Vs STLC

[Diagram mapping SDLC phases to STLC activities:]

Requirements Phase → Test Requirements document

Design Phase → Test Planning

Coding Phase → Test Case Design, Unit Test Execution

Deployment Phase → System Test Execution

Defect Tracking runs alongside the test execution activities

Page 24: Software Testing Fundamentals -[1]

Requirement Reviews

Page 25: Software Testing Fundamentals -[1]

Requirement reviews

Requirement quality affects work performed in subsequent phases of the system life cycle. Requirements of poor quality:

Increase cost and schedule: effort is spent during design and implementation trying to figure out what the requirements are

Decrease product quality: poor requirements cause the wrong product to be delivered, or de-scoping to meet schedule or cost constraints

Page 26: Software Testing Fundamentals -[1]

Requirement reviews contd.

Increase maintenance effort: lack of traceability increases the effort to identify where changes are required, especially as knowledgeable personnel leave

Create disputes with the customer/client: ambiguity causes differences in expectations and contractual issues

Are a major cause of project failure: all of the above

Page 27: Software Testing Fundamentals -[1]

Requirement Quality factors

Cohesive

Complete

Consistent

Feasible

Independent

Necessary

Unambiguous

Mandatory

Usable

Terse

Testable

Traceable

Non redundant

External observability

Metadata

Verifiable and validatable

Page 28: Software Testing Fundamentals -[1]

Requirement quality factors

[Diagram arranging the same factors around a central "Requirements" node: Cohesive, Complete, Consistent, Feasible, Independent, Necessary, Unambiguous, Mandatory, Usable, Testable, Traceable, Non redundant, External observability, Metadata, Terse]

Page 29: Software Testing Fundamentals -[1]

Requirement characteristic: Cohesive

Does each requirement specify only one thing?

Do all parts of the requirement belong together:

Do all parts of a data requirement involve the same data abstraction?

Do all parts of a functional requirement involve the same functional abstraction?

Do all parts of an interface requirement involve the same interface?

Do all parts of a quality requirement involve the same quality factor or sub-factor?

Page 30: Software Testing Fundamentals -[1]

Requirement characteristic: Complete

Is each requirement self-contained, with no missing information?

Does each requirement contain all relevant information? For example, does the requirement include all relevant preconditions such as the relevant state of the application or component?

Does each requirement need no further amplification or clarification?

Does each requirement provide sufficient information to avoid ambiguity?

Page 31: Software Testing Fundamentals -[1]

Requirement characteristic: Complete

If the requirement is not a part of the current release, then is it specified as completely and as thoroughly as is currently known?

Is each identified “requirement” actually a single requirement and not actually multiple requirements?

Is the use of conjunctions (“and” and “or”) restricted to preconditions and invariants?

Page 32: Software Testing Fundamentals -[1]

Requirement characteristic: Consistent

Is each requirement externally consistent with its documented sources, such as higher-level goals and requirements?

Is each requirement externally consistent with all other related requirements of the same type or in the same requirements specification? For example, two requirements should neither be contradictory nor describe the same concepts using different words.

Are the constituent parts of each requirement internally consistent? For example, are all parts of a compound precondition or post-condition consistent?

Page 33: Software Testing Fundamentals -[1]

Requirement characteristic: Feasible

Can each requirement be implemented given the existing hardware or software technology?

Can each requirement be implemented given the endeavor's budget?

Can each requirement be implemented given the endeavor's schedule?

Can each requirement be implemented given the endeavor's constraints on staffing (e.g., staff size, expertise, and experience)?

Can each requirement be implemented given the limitations of physics, chemistry, etc.?

Page 34: Software Testing Fundamentals -[1]

Requirement characteristic: Independent

The requirement does not rely on another requirement to be fully understood.

Requirements that need proxies are not independent.

Parent requirements rely on their children to be fully defined.

In testing, a parent is not satisfied until all its children are met.

Why retain them? These may be source requirements that must be retained.

Page 35: Software Testing Fundamentals -[1]

Requirement characteristic: Independent

Also, using them to structure the proxies or children improves understandability.

Example: "user friendly" can be used to assign, talk about, or locate the group of proxies defining "user friendly" for that particular project.

Page 36: Software Testing Fundamentals -[1]

Requirement characteristic: Mandatory

Is each requirement essential to the success of the application or component?

Is each requirement truly mandatory (i.e., a true requirement that must be met and implemented)?

Is each requirement truly required by some stakeholder, typically the customer or user organization?

Is each requirement free from unnecessary constraints (e.g., architecture, design, implementation, testing, and other technology decisions)?

Page 37: Software Testing Fundamentals -[1]

Requirement characteristic: Mandatory

Does each requirement specify a “what” rather than a “how”?

Is each requirement clearly differentiated from:

A “nice to have” item on someone’s wish list (i.e., gold-plating)?

Constraints?

Page 38: Software Testing Fundamentals -[1]

Requirement characteristic: Metadata

Individual requirements should have metadata (i.e., attributes or annotations) that characterizes them.

This metadata can include (but is not limited to): acceptance criteria, allocation, assumptions, identification, prioritization, rationale, schedule, status, and tracing information

Page 39: Software Testing Fundamentals -[1]

Requirement characteristic: Verifiability

Can each requirement be verified against its source?

Can each requirement be verified against its associated standards (e.g., content and format), guidelines, and/or templates?

Page 40: Software Testing Fundamentals -[1]

Requirement characteristic: Validatability

Is it possible to ensure that each requirement is actually what the customer representatives really want and need?

Is it possible to ensure that each requirement is actually what the user representatives really want and need?

Is it possible to ensure that each requirement is actually what the marketing representatives really want and need?

Page 41: Software Testing Fundamentals -[1]

Requirement characteristic: External Observability

Does each requirement only specify behavior and/or characteristics that are externally observable when treating the application or component as a black box?

Does each requirement avoid specifying any internal architecture, design, implementation, or testing decisions?

If a requirement does specify one or more internal architecture, design, implementation, or testing decisions, is the requirement clearly identified as a constraint rather than as a pure requirement?

Page 42: Software Testing Fundamentals -[1]

Requirement characteristic: Testable

It must be possible to prove that the object of the requirement satisfies the requirement.

Un-testable requirements can lead to disputes with the client.

Examples of un-testable requirements:

"The system shall produce the ABC report in a timely manner"

"The system shall be written in the approved language"

Page 43: Software Testing Fundamentals -[1]

Requirement characteristic: Traceable

Examine the statement “The system shall calculate retirement annuities and survivor benefits”

Observations: two different requirements are clubbed together, and distinctness cannot be maintained while reporting. The statement can be decomposed as:

A. The system shall calculate retirement annuities

B. The system shall calculate survivor benefits

Page 44: Software Testing Fundamentals -[1]

Requirement attributes

Unique identifier

Organizational information--for example, what are the parents/children of the requirement, its category or type

Method of validation

Item(s) that satisfy the requirement

Source of requirement (legal citation, business policy, etc.)

Association with the test plan/tests(s)

Requirement owners (subject matter expert, analyst)

Requirement status

Page 45: Software Testing Fundamentals -[1]

Requirement attributes contd.

Requirement change history

WBS code

Risk

Priority

Cost (estimate and actual)

Degree of difficulty

Metrics

Justification for the requirement

Cross references to other requirements or documents

Comments

Page 46: Software Testing Fundamentals -[1]

Case Study I: Requirements review

Review the software requirement specification (SRS) document for the marketing division of ABC Pharmaceuticals and provide review comments in the enclosed template.

Categorize each review comment by appropriate severity and category.

At the end, provide statistics of review comments in terms of severity and category.

Page 47: Software Testing Fundamentals -[1]

Design Review

Page 48: Software Testing Fundamentals -[1]

Design reviews

Reviews for software design focus on data design, architectural design and procedural design.

In general, there are two types of design reviews:

Preliminary design review

Design walkthrough

Page 49: Software Testing Fundamentals -[1]

Preliminary design review and design walkthrough…

Preliminary design review: assesses the translation of requirements to the design of data and architecture.

Design walkthrough: concentrates on the procedural correctness of algorithms as they are implemented within program modules.

Page 50: Software Testing Fundamentals -[1]

Design review verifications…

Do designs satisfy all specified requirements for the product?

Have all relevant standards and guidelines been applied or met?

Are product design and processing capabilities compatible?

Are safety requirements met?

Page 51: Software Testing Fundamentals -[1]

Design review verifications…

Do designs meet functional and operational requirements, for example, performance and reliability requirements?

Is the design satisfactory for all the anticipated environmental and load conditions?

Are components or service elements standardized and do they provide reliability, availability and maintainability?

Page 52: Software Testing Fundamentals -[1]

Design review verifications…

Are plans for implementing the design technically feasible (in terms of purchasing, production, installation, inspection and testing)?

Are the assumptions made during the design process valid?

Page 53: Software Testing Fundamentals -[1]

Case Study II: Design review

Review the design specification document against the requirements provided in the SRS for the marketing division of ABC Pharmaceuticals, and provide review comments in the enclosed template.

Categorize each review comment by appropriate severity and category.

At the end, provide statistics of review comments in terms of severity and category

Page 54: Software Testing Fundamentals -[1]

Code Reviews

Page 55: Software Testing Fundamentals -[1]

Introduction: Code review

Code review is a phase in the computer program development process.

It is an activity in which authors of code, peer reviewers, and perhaps quality assurance reviewers get together to review code.

The code is read line by line for real or potential flaws, consistency with the overall program design, comment quality, and adherence to coding standards.

Page 56: Software Testing Fundamentals -[1]

Advantages: Code review

Finding and correcting errors at this stage is relatively inexpensive

Code reviews tend to reduce the more expensive process of handling, locating, and fixing bugs during later stages of development or after code delivery to users

Page 57: Software Testing Fundamentals -[1]

Code review smoke test

The code review smoke test includes:

Does the code build correctly?

Does the code execute as expected?

Has the developer tested the code for positive workflows?

As a reviewer, do you understand the code?

Page 58: Software Testing Fundamentals -[1]

Comments and coding conventions

Does the code respect project specific coding conventions?

Does the source file start with an appropriate header and copyright information?

Are variable declarations properly commented?

Are units of numeric data properly commented?

Are units of numeric data clearly stated?

Are all functions, methods and classes documented?

Are complex algorithms, code optimizations adequately commented?

Does code that has been commented out have an explanation?

Are comments used to identify missing functionality or unresolved issues in the code?

Page 59: Software Testing Fundamentals -[1]

Error handling

Are assertions used everywhere data is expected to have a valid value or range?

Are errors properly handled each time a function returns?

Are resources and memory released in all error paths?

Are all thrown exceptions handled properly?

Is the function caller notified when an error is detected?

Has error handling code been tested?
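A brief sketch, in Python, of two of the items above: the caller is notified when an error is detected, and the resource is released on every path, error or not. The database file and table name are hypothetical.

```python
import sqlite3

def fetch_order_count(db_path: str) -> int:
    conn = sqlite3.connect(db_path)        # resource acquired
    try:
        row = conn.execute("SELECT COUNT(*) FROM orders").fetchone()
        if row is None:                    # handle the error case explicitly...
            raise RuntimeError("query returned no row")  # ...and notify the caller
        return row[0]
    finally:
        conn.close()                       # released on success and on all error paths
```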

Page 60: Software Testing Fundamentals -[1]

Resource Leaks

Is allocated memory (non-garbage collected) freed?

Are all objects (Database connections, Sockets, Files, etc.) freed even when an error occurs?

Is the same object released more than once?

Does the code accurately keep track of reference counting?

Page 61: Software Testing Fundamentals -[1]

Thread safeness

Are all global variables thread-safe?

Are objects accessed by multiple threads thread-safe?

Are locks released in the same order they are obtained?

Is there any possible deadlock or lock contention?
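A minimal sketch of the lock-ordering question above: if every thread acquires the locks in the same order, the circular wait required for a deadlock cannot form. The names are illustrative.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer() -> None:
    with lock_a:        # always acquired first
        with lock_b:    # always acquired second
            pass        # critical section touching both shared resources

def audit() -> None:
    with lock_a:        # same order as transfer(); acquiring lock_b first
        with lock_b:    # here could deadlock against a concurrent transfer()
            pass
```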

Page 62: Software Testing Fundamentals -[1]

Control Structures

Are loop ending conditions accurate?

Is the code free of unintended infinite loops?

Page 63: Software Testing Fundamentals -[1]

Performance

Do recursive functions run within a reasonable amount of stack space?

Are whole objects duplicated when only references are needed?

Does the code have an impact on size, speed, or memory use?

Are you using blocking system calls when performance is involved?

Is the code doing busy waits instead of using synchronization mechanisms or timer events?

Page 64: Software Testing Fundamentals -[1]

Functions

Are function parameters explicitly verified in the code?

Are arrays explicitly checked for out-of-bound indexes?

Are functions returning references to objects declared on the stack?

Are variables initialized before they are used?

Does the code re-write functionality that could be achieved by using an existing API?

Page 65: Software Testing Fundamentals -[1]

Bug fixes

Does a fix made to a function change the behavior of caller functions?

Does the bug fix correct all the occurrences of the bug?

Page 66: Software Testing Fundamentals -[1]

Case Study III

Review the C++ code written for the marketing division of ABC Pharmaceuticals and provide review comments in the enclosed template.

Categorize each review comment by appropriate severity and category.

At the end, provide statistics of review comments in terms of severity and category. The categories can include: comments and coding conventions, error handling, resource leaks, control structures, bug fixes, functions, deviation from requirements, and deviation from design.

Page 67: Software Testing Fundamentals -[1]

White Box Testing

Page 68: Software Testing Fundamentals -[1]

White Box Testing (Code based testing)

A software testing technique that uses explicit knowledge of the internal workings of the item being tested

White box testing uses specific knowledge of the programming code to examine outputs

Examines the internal design of the program

Requires detailed knowledge about the structure of the program

Allows exhaustive testing of all the logical paths (i.e. each line of code for each condition)

Also known as glass box, structural, clear box and open box testing

Page 69: Software Testing Fundamentals -[1]

Advantages of white box testing

Helps to identify the following:

Adherence to coding standards

Adherence to coding guidelines

Indentation

Memory leaks

Buffer overflows, stacks

Logical complexity of the program

Limitations of the program

Page 70: Software Testing Fundamentals -[1]

Statement coverage

Statement coverage: each statement in the program is executed at least once, i.e. 100% of the statements in the program should be executed at least once.

Weakness: statement coverage is necessary but not sufficient. When there is a decision, you have to ensure that it takes the correct path, and that is not done by statement coverage.

Page 71: Software Testing Fundamentals -[1]

Branch/Decision Coverage

Statement coverage does not address all outcomes of decisions. Branches such as If..Else and Do..While are to be evaluated for both true and false.

Test each decision for a true and a false value; that is, each branch direction must be traversed at least once.

Example: for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are:

A=6 and B=4 → True (here, A is true and B is false)

A=2 and B=3 → False (here, A is false and B is false)

That is, check how many decisions there are; for each decision, write one test case for true and one test case for false.

Page 72: Software Testing Fundamentals -[1]

Condition Coverage

Each condition should be executed at least once for both its true and false outcomes.

The true and false outcome of each condition in a decision must be tested.

Do not look for combinations.

Example: for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are:

A=6 and B=3 → True (here, A is true and B is false)

A=2 and B=1 → True (here, A is false and B is true)

Page 73: Software Testing Fundamentals -[1]

Condition/Decision coverage

Condition/decision coverage: condition coverage alone may not always result in decision coverage. In such cases, go in for decision + condition coverage.

Multiple condition coverage: go for all combinations of conditions. For example, for the condition IF (A>=5) OR (B<2) THEN X=1, the test cases are: A=6, B=1; A=6, B=3; A=2, B=1; A=2, B=3
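The sketch below collects the examples from the last few slides into runnable form: one predicate exercised under the different coverage criteria. The function name is illustrative.

```python
def f(a: int, b: int) -> int:
    x = 0
    if (a >= 5) or (b < 2):
        x = 1
    return x

# Statement coverage: f(6, 4) alone executes every line, yet never
# exercises the decision's false outcome -- the weakness noted earlier.

# Branch/decision coverage: the decision as a whole is true once, false once.
assert f(6, 4) == 1   # decision True  (A true,  B false)
assert f(2, 3) == 0   # decision False (A false, B false)

# Condition coverage: each atomic condition is true once and false once,
# without requiring all combinations.
assert f(6, 3) == 1   # A true,  B false
assert f(2, 1) == 1   # A false, B true

# Multiple condition coverage: all four true/false combinations of (A, B).
for a, b in [(6, 1), (6, 3), (2, 1), (2, 3)]:   # TT, TF, FT, FF
    f(a, b)
```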

Page 74: Software Testing Fundamentals -[1]

Path Coverage

Errors are sometimes revealed only by a path involving a particular combination of branches.

More general coverage requires executing all possible paths, known as path coverage criteria.

Number of paths may be infinite if there are loops.

100% path coverage is impossible

Page 75: Software Testing Fundamentals -[1]

White box testing steps

Examine the program logic

Design test cases to satisfy logic coverage criteria

Run the test cases

Compare the actual results obtained with expected results in the test case

Report errors in case of deviation from expected results

Compare actual coverage to expected coverage

Page 76: Software Testing Fundamentals -[1]

Cyclomatic Complexity

Cyclomatic complexity provides a quantitative measure of the logical complexity of the program.

Cyclomatic complexity gives the number of linearly independent paths through the program, and hence the minimum number of test cases needed to cover them.

Based on the cyclomatic complexity value obtained, the decision whether or not to accept the program for testing can be made.
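As an illustration, for a structured program the cyclomatic complexity can be computed as the number of binary decision points plus one; the function below is hypothetical.

```python
def classify(a: int, b: int) -> int:
    total = 0
    if a >= 5:              # decision 1
        total += 100
    for i in range(b):      # decision 2 (loop condition)
        if i % 2 == 0:      # decision 3
            total += i
    return total

# V(G) = 3 decision points + 1 = 4 linearly independent paths, so a
# basis-path test set needs at least four cases, for example:
for a, b in [(2, 0), (6, 0), (2, 1), (2, 2)]:
    classify(a, b)
```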

Page 77: Software Testing Fundamentals -[1]

Black Box Testing (Requirement Based Testing)

Page 78: Software Testing Fundamentals -[1]

Software Testing Phases

Page 79: Software Testing Fundamentals -[1]

Software Testing Phases

Unit Testing

Module Testing

Integration Testing

System Testing

User Acceptance Testing

Field Testing

Page 80: Software Testing Fundamentals -[1]

Test Case Design Techniques

Page 81: Software Testing Fundamentals -[1]

Client Server Application Testing

Page 82: Software Testing Fundamentals -[1]

Web Based Application Testing

Page 83: Software Testing Fundamentals -[1]

Introduction to web applications

Web Technology

Web Architecture

HTML/DHTML

Web servers

Cookies

Types of testing applicable to web applications

Page 84: Software Testing Fundamentals -[1]

Applicable types of testing

Unit testing

Page flow testing

Usability testing

Functional testing

Load testing

Performance testing

Data volume testing

Security testing

Regression testing

External testing

Connectivity testing

Stress testing

Page 85: Software Testing Fundamentals -[1]

Unit Testing

Unit testing involves testing the individual modules and pages that make up the application.

In general, unit tests check the behavior of a given page, i.e. does the application behave correctly and consistently given either good or bad input.

Some of the types of checking would include:

Invalid input (missing input, out-of-bound input, entering an integer when a float is expected and vice versa, control characters in strings, etc.)

Alternate input format (e.g., 0 instead of 0.0, 0.00000001 instead of 0, etc.)
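A sketch of the invalid-input and alternate-format checks above, written with pytest; parse_quantity is a hypothetical page-handler helper, not part of any framework.

```python
import pytest

def parse_quantity(raw: str) -> float:
    value = float(raw)               # rejects text and control characters
    if value < 0 or value > 1000:    # out-of-bound input
        raise ValueError("quantity out of range")
    return value

@pytest.mark.parametrize("raw", ["", "abc", "\x07", "-1", "10000"])
def test_invalid_input_rejected(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)

@pytest.mark.parametrize("raw,expected", [("0", 0.0), ("0.00000001", 1e-8)])
def test_alternate_formats_accepted(raw, expected):
    assert parse_quantity(raw) == expected
```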

Page 86: Software Testing Fundamentals -[1]

Unit Testing

Button click testing e.g., multiple clicking with and without pauses between clicks.

Immediate reload after button click prior to response having been received.

Multiple reloads in the same manner as above. 

Random input and random click testing: this involves a user randomly pressing buttons (including multiple clicks on "hrefs") and randomly picking checkboxes and selecting them.

Page 87: Software Testing Fundamentals -[1]

Unit Testing

There are two forms of output screen expected:

An error page indicating the type of error encountered

A normal page showing either the results of the operation or the normal next page where more options may be selected

"In no event should a catastrophic error occur"

Page 88: Software Testing Fundamentals -[1]

Page Flow Testing

Page flow testing deals with ensuring that jumping to random pages does not confuse the application.

Each page should typically check to ensure that it can only be viewed via specific previous pages, and if the referring page was not one of that set, then an error page should be displayed.

A page flow diagram is a very useful aid for the tester to use when checking for correct page flow within the application.
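A minimal sketch, assuming a Flask application, of the referring-page check described above. The allowed-previous-pages map is illustrative; a real application would normally track flow in server-side session state rather than trusting the Referer header, which clients can omit or forge.

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical flow rules: /payment may only be reached from /cart or /review.
ALLOWED_PREVIOUS = {"/payment": {"/cart", "/review"}}

@app.before_request
def check_page_flow():
    allowed = ALLOWED_PREVIOUS.get(request.path)
    if allowed is not None:
        referrer = request.referrer or ""
        if not any(referrer.endswith(page) for page in allowed):
            abort(400)  # jumped in from an unexpected page: show an error page
```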

 

Page 89: Software Testing Fundamentals -[1]

Impact of page flow on security

Some aspects of page flow testing cross into security. Some simple checks to consider:

Force the application to move in an unnatural path; the application must resist, and display an appropriate error message

Page 90: Software Testing Fundamentals -[1]

Page flow testing : Details

Log into the system and then attempt to jump to any page in any order once a session has been established.

Use bookmarks and set up temporary web pages to redirect into the middle of an application using faked session information

Page 91: Software Testing Fundamentals -[1]

Usability testing

Usability testing ensures that all pages present a cohesive look to the user, including spelling, graphics, page size, response time, etc.

Examples of usability testing include:

Spelling checks

Graphical user interface checks (colors, dithering, aliasing, size, etc.)

Adherence to web GUI standards

Meaningful error messages

Accuracy of data displayed

Page 92: Software Testing Fundamentals -[1]

Usability testing contd.

Page navigation

Context sensitivity

Editorial continuity

Accessibility

Accuracy of data in the database as a result of user input

Accuracy of data in the database as a result of external factors (e.g. imported data)

Meaningful help pages, including context sensitive help

Page 93: Software Testing Fundamentals -[1]

Functional Testing

Functional testing ensures:

Conformance to functional requirements of the application

Scenarios/test cases are designed to verify conformance to the requirements

The whole business logic gets tested as part of functional testing

Page 94: Software Testing Fundamentals -[1]

Load Testing

Load testing the application involves generating varying loads (in terms of concurrent users) against the web server, the databases supporting the web server, and the middleware/application server logic connecting those pages to the databases.

Load testing includes verification of data integrity on the web pages and within the back-end database, as well as load ramping and surges in activity against the application.

Page 95: Software Testing Fundamentals -[1]

Load Testing

"Does the site scale", "Is the site's response time deterministic, etc.Examples of load testing would include: Sustained low load test (50 users for around 48 hours). Sustained high load test (300+ users for 12 hours). Surge test (e.g. run 50 users, then surge to 500 users

and then return to 50, no memory leaks, lost users, orphaned processes, etc., should be seen).

The system should continue running with multiple surges at various times during the day.

This test should run for 48 hours.

Page 96: Software Testing Fundamentals -[1]

Load Testing contd.

Load testing is also used to discover at what load the application would fail, and what the saturation points are.

Page 97: Software Testing Fundamentals -[1]

Performance Testing

Performance testing refers to the response time of the software in processing and presenting the requests made by end users.

Performance depends on:

Speed of the network

Hardware configuration of the application server, web server, database server and the client system (processor, RAM, etc.)

Volume of data in the database

Page 98: Software Testing Fundamentals -[1]

Data Volume Testing

Data volume testing involves testing the application under data load, where large quantities of data are passed through the system. (e.g. large number of items in dropdown/combo boxes, large amount of data in text boxes).

Performance of the application should be monitored during this testing, since a slow database could significantly affect response time, and monitoring data must be collected throughout.

 

Page 99: Software Testing Fundamentals -[1]

Data Volume Testing

This data can be used as a control set for contrasting monitoring data from a live system and providing predictive information indicating when major application stress points may be encountered.

No errors should be seen on application pages or in error logs for pages that are data intensive.

Page 100: Software Testing Fundamentals -[1]

Security Testing

Security testing involves verifying whether both the servers and the application are managing security correctly.

Security from the server perspective:

Attempt to penetrate system security both internally and externally, to ensure the system that houses the application is secure from both internal and external attacks.

Attempt to cause things like buffer overflows to result in root access being given accidentally (such code does exist, but explaining it is beyond the scope of this document).

Page 101: Software Testing Fundamentals -[1]

Security Testing contd.

Attempt to cause the application to crash by giving it false or random information

Ensure that the server OS is up to correct patch levels from security viewpoint

Ensure that the server is physically secure

Page 102: Software Testing Fundamentals -[1]

Security Testing contd.

Application level security testing involves testing some or all of the following:

Unauthenticated access to the application

Unauthorized access to the application

Unencrypted data passing

Protection of the data

Log files should be checked to ensure they do not contain sensitive information

Page 103: Software Testing Fundamentals -[1]

Security Testing contd.

Faked sessions: session information must be valid and secure (e.g. a URL containing a session identifier cannot be copied from one system to another and the application then continued from the different system without being detected).

Multiple login testing by a single user from several clients

Page 104: Software Testing Fundamentals -[1]

Security Testing contd.

Attempt to break into the application by running username/password checks using a password-cracking program.

Security audit, e.g. examine log files, etc., no sensitive information should be left in raw text/human readable form in any log file

Automatic logout after N minutes of inactivity with positive feedback to the user

Page 105: Software Testing Fundamentals -[1]

Regression Testing

Regression testing ensures that during the lifetime of the application, any fixes do not break other parts of the application

This type of testing typically involves running all the tests, or a relevant subset of those tests, when defect fixes are made or new functionality is added.

The regression tests must also be kept up to date with planned changes in the application. As the application evolves, so must the tests

Page 106: Software Testing Fundamentals -[1]

External Testing

External testing deals with checking the effect of external factors on the application. Examples of external factors would be the web server, the database server, the browser, network connectivity issues, etc. Examples of external testing are:

Database unavailability test (e.g., is login or further access to the application permitted should the database go into a scheduled maintenance window?)

Database error detection and recovery test (e.g., simulate loss of database connectivity; the application should detect this and report an error accordingly). The application should be able to recover without human intervention when the database returns.

Page 107: Software Testing Fundamentals -[1]

External Testing

Database authentication test (check access privileges to the database).

Connection pooling test (ensure that database connections are used sparingly, and will not run out under load).

Web page authentication test.

Browser compatibility tests: for example, does the application behave the same way on multiple browsers, does the JavaScript work the same way, etc.

Page 108: Software Testing Fundamentals -[1]

Connectivity Testing

Connectivity testing involves determining if the servers and clients behave appropriately under varying circumstances

This testing is difficult to accomplish from a server perspective since it is expected that the servers will be operating with standby power supplies as well as being in a highly available configuration

Thus the server tests need not be run using a power-off scenario; simply removing the network connection to the PC may be sufficient.

Page 109: Software Testing Fundamentals -[1]

Connectivity Testing contd.

Two aspects of connectivity testing:

Voluntary, where a user actively interacts with the system in an unexpected way

Involuntary, where the system acts in an unpredictable manner

Page 110: Software Testing Fundamentals -[1]

Connectivity Testing: Involuntary

Test: Force the browser to prematurely terminate during a page load, by using a task manager to kill the browser, or by hitting the ESC key and reloading or revisiting the same page via a bookmark.

Expectation: The testing should cover both a small delay (< 10 secs) in reinstating the browser and a long delay (> 10 mins). In the latter case, the user should not be able to connect back to the application without being redirected to the login page.

Page 111: Software Testing Fundamentals -[1]

Connectivity Testing: Involuntary

Test: Simulation of hub failure between the PC and the web server. This can be simulated by removing the network cable from the PC, attempting to visit a page, aborting the visit, and then reconnecting the cable. The test should use two time delays: the first under 15 seconds, the second around 15 minutes before reconnecting. After reconnecting, attempt to reload the previous page.

Expectation: The user should be able to continue with the session unless a specified timeout has occurred, in which case the user should be redirected to a login page.

Page 112: Software Testing Fundamentals -[1]

Connectivity Testing: Involuntary

Test: Web server on/off test. Shut down the web server, then restart it.

Expectation: The user should be able to connect back to the application without being redirected to the login page. This will prove the statelessness of individual pages.

Note: The shutdown is only for the web server. Do not attempt this with an application server, as that is a separate test.

Page 113: Software Testing Fundamentals -[1]

Connectivity Testing: Involuntary

Test: Database server on/off test. Shut down the database server and restart it.

Expectation: The user should be able to connect back to the application without being redirected to the login page.

It may be that a single transaction needs to be redone; the application should detect this and react accordingly.

Page 114: Software Testing Fundamentals -[1]

Connectivity Testing: Involuntary

Application server on/off test: shut down the application server and restart it.

There are two possible outcomes, depending on how session management is implemented:

The first outcome is that the application redirects to an error page indicating loss of connectivity, and the user is requested to log in and retry.

The second outcome is that the application continues normally, since no session information was lost because it was held in a persistent state that transcends application server restarts.

Page 115: Software Testing Fundamentals -[1]

Connectivity Testing: Voluntary

Examples of voluntary connectivity testing include:

Quit from a session without the user saving state

Quit from a session with the user saving state

Server-forced quit from a session due to inactivity

Server-forced quit from a session due to a server problem

Client-forced quit from a session due to visiting another site in the middle of the session for a brief period of time

Client-forced quit from a session due to visiting another site/application for an extended period of time

Client-forced quit due to the browser crashing

Page 116: Software Testing Fundamentals -[1]

Extended Session Testing

Remaining in a session for an extended period of time and clicking items to navigate the screen: the session must not be terminated by the server except in the case of a deliberate logout initiated by the user.

Remaining on a single page for an extended length of time: the session should be automatically terminated, and the next click by the user should take the user to a page indicating why the session was terminated, with the option to log back into the system. The page may have a timed redirect associated with it, and if so, a page indicating a timed-out session should be displayed.

Page 117: Software Testing Fundamentals -[1]

Extended Session Testing

The following must be tested:

The user's session should have been saved and may optionally be restored on re-login

The user's state must reflect the last complete action the user performed

Leaving the application pages to visit another site or application and then returning to the original application via a bookmark or the back button should result in a restoration of state, and the application should continue as if the person had not left

Page 118: Software Testing Fundamentals -[1]

Power Hit/Reboot/Other Cycle Testing

Power hit/cycle testing involves determining if the servers and clients act appropriately during the recovery process:

Client power off/on test

Client hub power off/on test

Client network connection removal/reinsertion test

Server power off/on test

Server hub power off/on test

Server network connection removal/reinsertion test

Page 119: Software Testing Fundamentals -[1]

Standards Conformance Testing

Conformance to web application standards:

Web user interface standards and guidelines

Web usability standards

Web security standards

Domain specific standards (e.g. HL7 and CCOW for healthcare, SOX for banking software, etc.)

Page 120: Software Testing Fundamentals -[1]

Bug Life Cycle

Page 121: Software Testing Fundamentals -[1]

Bug Life Cycle

[Diagram of the bug life cycle:]

Submitted → In work → Solved → Validated

Submitted → Terminated, if the bug is invalid

In work → Deferred, if the bug is unable to be fixed in the current release

Solved → Re-work (back to In work), if the bug is not solved

Validated → Reviewed: the bug is reviewed and closed by management

Annotations: the bug is solved only by the developer; "In work" means the developer is solving the bug; at "Validated" the bug is tested by the tester and closed.

Page 122: Software Testing Fundamentals -[1]

Bug life cycle [Notes]

The status "Submitted" (or "Posted") is assigned to the defect when the tester raises the defect.

In case the submitted bug is found to be invalid, the bug is moved to the "Terminated" (or "Rejected") state by the development team.

The status of the bug is moved to "In work" by the developer once the developer starts working on fixing the defect.

Once the developer fixes the bug, the developer moves the status of the defect to the "Solved" state, and the fix is made available to the tester in the next release.

Page 123: Software Testing Fundamentals -[1]

Bug life cycle [Notes] contd.

The tester tests the fix for the bug and if found to be working fine, moves the status of the defect to “Validated” state, otherwise puts it back to the developer and the status of the bug is moved back to “In work”.

In case the development team is not in a position to fix the defect in the current release, the development team moves the status of the defect to “Deferred” state meaning it shall be taken up for fixing in the next release.
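The transitions described in these notes form a small state machine; the sketch below follows the slides' state names, while the code itself is illustrative.

```python
# Allowed transitions, taken from the bug life cycle described above.
ALLOWED = {
    "Submitted":  {"In work", "Terminated"},  # invalid bugs are terminated/rejected
    "In work":    {"Solved", "Deferred"},     # deferred if not fixable this release
    "Solved":     {"Validated", "In work"},   # failed validation goes back to the developer
    "Deferred":   {"In work"},                # taken up again in the next release
    "Validated":  set(),                      # closed by the tester
    "Terminated": set(),                      # closed as invalid
}

def move(current: str, new: str) -> str:
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new
```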

Page 124: Software Testing Fundamentals -[1]

Reporting Defects

Page 125: Software Testing Fundamentals -[1]

Reporting defects: Attributes

Product name/Application name

Version

Module

Summary

Steps to reproduce

Impact

Database information

Severity

Priority

Browser (IE, NN, Mozilla)

Screen shots (if required and available)

Reproducible (Yes, No, Sporadic)

Type of bug (Performance, Functionality, User interface, etc.)

Phase of testing (Unit, Module, Integration, System testing)
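A sketch of a defect record carrying the attributes listed above; the field names and the sample values, including the application name, are illustrative.

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    application: str
    version: str
    module: str
    summary: str
    steps_to_reproduce: list[str]
    impact: str
    severity: str       # Critical / Serious / Minor
    priority: str       # High / Medium / Low
    browser: str        # IE, NN, Mozilla, ...
    reproducible: str   # Yes / No / Sporadic
    bug_type: str       # Functionality, Performance, User interface, ...
    phase: str          # Unit, Module, Integration, System

bug = DefectReport(
    application="PharmaMart",  # hypothetical application name
    version="1.2",
    module="Checkout",
    summary="Checkout: order total is wrong when the cart has 100+ items",
    steps_to_reproduce=["Add 101 items to the cart", "Open the cart", "Press Checkout"],
    impact="Customers are billed the wrong amount",
    severity="Critical",
    priority="High",
    browser="Mozilla",
    reproducible="Yes",
    bug_type="Functionality",
    phase="System",
)
```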

Page 126: Software Testing Fundamentals -[1]

Details of the attributes

Product name/Application: provide the name of the application being tested, or select it from a list.

Version: provide the version of the application being tested, or select it from a list. Ex: version 1.0, 1.2, etc.

Module: provide the module of the application in which the bug occurred, or select it from a list.

Page 127: Software Testing Fundamentals -[1]

Details of the attributes contd.

Summary: provide a summary of the defect such that the summary, when viewed, gives a sufficient picture of which team and category the defect belongs to. Project leads/managers assign defects to individuals based on the details of the summary.

Steps to reproduce (Description): provide a step-by-step explanation of how you arrived at the defect. The development team must be able to reproduce the defect with these details.

Page 128: Software Testing Fundamentals -[1]

Details of the attributes contd.

Impact: provide the impact of the defect being posted, from the application and end user's perspective.

Database information: provide information on the database, i.e. whether it is a new database or a ported database, and if ported, from which previous release.

Page 129: Software Testing Fundamentals -[1]

Details of the attributes contd.

Severity:

Critical (the defect has severe impact on the end user's workflow)

Serious (the defect has blocked workflow(s), but alternatives are available)

Minor (does not block any user's workflows; trivial error)

Priority:

High (needs immediate fixing)

Medium (can be fixed within an agreed time period)

Low (can be fixed at convenience)

Page 130: Software Testing Fundamentals -[1]

Details of the attributes contd.

Phase of testing: provide or select a phase of testing, such as unit testing, module testing, integration testing, or system testing.

This helps to analyze how many bugs were uncovered during a particular phase of testing, and facilitates comparison of defect detection across phases.

Page 131: Software Testing Fundamentals -[1]

Details of the attributes contd.

Reproducible: this attribute generally has three options, i.e. Yes, No, and Sporadic.

Selecting Yes indicates that the defect is reproducible by following the steps specified as part of the defect.

Selecting No indicates that the defect is not reproducible in a particular given sequence.

Selecting Sporadic indicates that the defect is reproducible by following the steps specified, but does not appear consistently.

Page 132: Software Testing Fundamentals -[1]

Details of the attributes contd.

Type of bug: provide or select the type of bug, i.e. whether the defect found falls into the category of functionality, performance, etc.

General categories include functionality, performance, usability, load, volume, stress, security, and user interface.

These statistics help to understand how many functional, performance, etc. defects appeared in the release, and give direction for identifying the bottlenecks.

Page 133: Software Testing Fundamentals -[1]

Details of the attributes contd.

Browser: provide or select the browser on which the software was being used when the defect occurred. Ex: Internet Explorer, Netscape Navigator, Mozilla, etc.

Screenshots: attach screenshots of error messages or system crashes while posting the defect. This helps the development team understand the defect better.

Page 134: Software Testing Fundamentals -[1]

Case Study

Study the following defects observed while testing a software product and re-write them in proper format and assign appropriate severity and priority to the defects.

Page 135: Software Testing Fundamentals -[1]

Thank You

Page 136: Software Testing Fundamentals -[1]