
Integration and Higher Level Testing

Prepared by

Stephen M. Thebaut, Ph.D.

University of Florida

Software Testing and Verification

Lecture 11

Context

• Higher-level testing begins with the integration of (already unit-tested) modules to form higher-level program entities (e.g., components).

• The primary objective of integration testing is to discover interface and blatant higher-level design errors associated with the elements being integrated. (Somewhat in keeping with the spirit of “smoke testing”.)

Context (cont’d)

• Once the elements have been successfully integrated (i.e., once they are able to function together), the functional and non-functional characteristics of the higher-level element can be tested thoroughly (via component, product, or system testing).

Integration Testing

• Integration testing is carried out when integrating (i.e., combining):

– Units or modules to form a component

– Components to form a product

– Products to form a system

• The strategy employed can significantly affect the time and effort required to yield a working, higher-level element.

Integration Testing (cont’d)

• Note that ‘‘integration testing’’ is sometimes defined as the level of testing between unit and system testing. We use a more general model of the testing process.

Levels of Testing

A Popular View:

Unit Test → Integration Test → System Test

A More General View:

Unit Test → Integration Test → Component Test → Integration Test → Product Test → System Test

Integration Testing Strategies

• The first (and usually the easiest...) issue to address is the choice between instantaneous and incremental integration testing.

• The former is sometimes referred to as the big bang approach. (Guess why!) Locating subtle errors can be very difficult after the “bang”.

• “Integration is a process, not an event.”

Integration Testing Strategies (cont’d)

• Incremental integration testing results in some additional overhead, but can significantly reduce error localization and correction time. (What is the overhead?)

• The optimum incremental approach is inherently dependent on the individual project and the pros and cons of the various alternatives.

As an aside…

Here’s a slide from an on-line lecture† that I came across recently…

Principles of Integration

• Proper policy and plans
• Advocacy
• Manpower training
• Realistic tasks
• Coordination with other sectors
• Proper support
• Access to drugs

† “Integration of Substance Abuse Disorders in National Rural Health Mission (NRHM),” Centre for Community Medicine, All India Institute of Medical Sciences, New Delhi

Incremental Strategies

• Suppose we are integrating a group of modules to form a component, the control structure of which will form a ‘‘calling hierarchy’’ as shown.

A
├── B
│   ├── E
│   └── F
│       └── J
├── C
│   └── G
└── D
    ├── H
    │   ├── K
    │   └── L
    └── I

Incremental Strategies (cont’d)

• In what order should the modules be integrated?

– From the top (“root”) module toward the bottom?

– From bottom (“leaf”) modules toward the top?

– By function?

– Critical or high-risk modules first?

– By availability?

Incremental Strategies (cont’d)

• How many should be combined at a time?

• What scaffolding (i.e., drivers and stubs to exercise the modules and oracles to interpret/inspect test results) will be required?

• Is scaffolding ever required outside the context of integration testing?
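As a concrete sketch of such scaffolding (hypothetical modules, not from the slides' project): a stub stands in for a missing callee, a driver invokes the module under test, and an assertion plays the role of a simple oracle. The names B and C echo the calling hierarchy used in these slides.

```python
# Scaffolding sketch: stub, driver, and oracle for integrating module B
# before its callee C exists. All names here are illustrative.

def stub_C(x):
    # Stub: stands in for the missing module C, returning a canned value.
    return 42

def B(x, c=stub_C):
    # Module under integration; its dependency on C is injectable so the
    # stub can later be swapped for the real C without editing B's body.
    return c(x) + 1

def driver():
    # Driver: invokes B with chosen inputs; the assertion is the oracle.
    result = B(10)
    assert result == 43, "oracle: B should add 1 to C's result"
    return result
```

Making the dependency injectable (the `c=stub_C` default) is one common way to keep stub replacement cheap as integration proceeds.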

Top-Down Strategy

1. Start with the ‘‘root’’ and one or more called modules.

2. Test this group using stubs to take the place of missing called modules, and one driver (if necessary) to call the root module.

3. Add one or more other called modules, replacing and providing new stubs as necessary.

(cont’d)

Top-Down Strategy (cont’d)

4. Continue the process until all elements have been integrated and tested.
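The four steps above might be sketched as follows, again with purely hypothetical modules: a stub factory supplies canned results, and each integration step replaces one stub in a lookup table with the real module.

```python
# Top-down integration sketch (illustrative names only): start with the
# root A plus stubs for its callees, then swap stubs for real modules.

def stub(name):
    # Stub factory: each stub just returns a recognizable canned value.
    def _stub(*args):
        return f"<stub {name}>"
    return _stub

# Step 1-2: root A under test, with every callee stubbed.
callees = {"B": stub("B"), "C": stub("C"), "D": stub("D")}

def A():
    # Root module: combines results from its (possibly stubbed) callees.
    return [callees[name]() for name in ("B", "C", "D")]

assert A() == ["<stub B>", "<stub C>", "<stub D>"]

# Step 3: integrate the real B by replacing its stub; A is unchanged.
def real_B():
    return "B-result"

callees["B"] = real_B
assert A() == ["B-result", "<stub C>", "<stub D>"]
```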

Top-Down Strategy (cont’d)

[Diagram sequence: a driver first exercises the root A together with B, with stubs standing in for the missing modules C, D, E, and F. The remaining modules C, D, E, F, G, H, I, J, K, and L are then added one at a time; at each step a stub is replaced by the corresponding real module, and new stubs are supplied for any of its not-yet-integrated callees, until the full hierarchy has been integrated.]

Top-Down Strategy (cont’d)

• Potential Advantages:

– Allows early verification of high-level behavior.

– One driver (at most) is required.

– Modules can be added one at a time with each step if desired.

– Supports both ‘‘breadth first’’ and ‘‘depth first’’ approaches.

Top-Down Strategy (cont’d)

• Potential Disadvantages:

– Delays verification of low-level behavior.

– Stubs are required for missing elements.

– Test case inputs may be difficult to formulate.

– Test case outputs may be difficult to interpret. (Oracles may be needed to interpret/inspect test results.)

Bottom-Up Strategy

1. Start at the bottom of the hierarchy with two or more sibling leaf modules, or an only-child leaf module with its parent.

2. Test this group using a driver. (No stubs are required.)

3. Add one or more additional siblings, replacing drivers with modules only when all modules they call have been integrated.

(cont’d)

Bottom-Up Strategy (cont’d)

4. Continue the process until all elements of the subtree have been integrated and tested.

5. Repeat the steps above for the remaining subtrees in the hierarchy (or handle in parallel).

6. Incrementally combine the sub-trees and then add the root module.
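A minimal sketch of the first two steps, assuming the F–J leaf pairing from the hierarchy in these slides (bodies are invented for illustration): the leaves need no stubs, only a temporary driver.

```python
# Bottom-up integration sketch: leaf module J is integrated with its
# caller F under a driver. Module behaviors here are illustrative.

def J(x):
    # Leaf module: depends on nothing, so no stubs are required.
    return x * 2

def F(x):
    # F calls the already-integrated leaf J directly.
    return J(x) + 1

def driver_for_F():
    # Temporary driver standing in for F's real caller (B); it will be
    # discarded once everything B calls has been integrated.
    for arg, expected in [(0, 1), (3, 7)]:
        assert F(arg) == expected, f"F({arg}) should be {expected}"
    return True
```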

Bottom-Up Strategy (cont’d)

[Diagram sequence: a driver first exercises leaf module J together with F; E and then B are added to complete that subtree. In parallel, a second driver exercises K and L, after which H, I, and D are added, and a third driver exercises G, after which C is added. The completed subtrees are then combined under a single driver, and finally the root A replaces the driver.]

Bottom-Up Strategy (cont’d)

• Potential Advantages:

– Allows early verification of low-level behavior.

– No stubs are required.

– Easier to formulate input data for some subtrees.

– Easier to interpret output data for others.

Bottom-Up Strategy (cont’d)

• Potential Disadvantages:

– Delays verification of high-level behavior.

– Drivers are required for missing elements.

– As subtrees are combined, a large number of elements may be integrated at one time.

Hybrid Incremental Integration Approaches

• Risk Driven

Start by integrating the most critical or complex modules together with modules they call or are called by.

• Schedule Driven

To the extent possible, integrate modules as they become available.

(cont’d)

Hybrid Incremental Integration Approaches (cont’d)

• Function or Thread Driven

Integrate the modules associated with a key function (thread); continue the process by selecting another function, etc.


How about Object-Oriented Systems?

• Just as a calling hierarchy allows design of an integration strategy for imperative software, use/include relations serve this purpose for object-oriented software.

• Since there may be no single “root” class, testing usually proceeds cluster by cluster in a “bottom-up” fashion, starting with “leaf” classes that depend on no others.

• We will come back to this in Lecture 12.
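One way to derive a ‘‘leaf classes first’’ order is to topologically sort the use/include relations. The class names below are purely illustrative, not from any particular system.

```python
# Derive a bottom-up ("leaf classes first") integration order from
# use/include relations via topological sort. Class names are invented.
from graphlib import TopologicalSorter

# depends_on[X] = the classes that X uses/includes (its predecessors,
# which must be integrated and tested before X).
depends_on = {
    "Account": {"Money", "AuditLog"},
    "AuditLog": {"Money"},
    "Money": set(),
}

order = list(TopologicalSorter(depends_on).static_order())
# The leaf class "Money" comes first; its dependents follow in order.
```

A cycle in the use relations (mutually dependent classes) makes `static_order` raise `CycleError`, which is exactly the case where classes must be integrated as a cluster rather than one at a time.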

Higher-Level Testing

• Higher-level tests focus on the core functionality specified for higher level elements, and on certain emergent properties that become more observable as testing progresses toward the system level.

• The black-box testing strategies already considered (e.g., partition and combinatorial approaches) apply to functional testing at any level.

Higher-Level Testing (cont’d)

• Higher-level testing typically includes:

– Usability test
– Installability test
– Serviceability test
– Performance test
– Stress test
– Security test
– Software compatibility test
– Device and configuration test
– Recovery test
– Reliability test

• A brief overview of each follows.

Usability Test

• Focus is on factors which influence the ease with which potential end-users are able to utilize the system to accomplish their goals.

• Specialized and sophisticated: HCI experts conduct experiments in simulated work environments.

• Protocol analysis is utilized to identify “usability bottlenecks.”

• Application-specific metrics related to understandability, learnability, and operability may be employed.

Installability Test

• Focus is functional and non-functional requirements related to the installation of the product/system.

• Coverage includes:

– Media correctness and fidelity

– Relevant documentation (including examples)

– Installation processes and supporting system functions.

(cont’d)

Installability Test (cont’d)

• Functions, procedures, documentation, etc., associated with product/system decommissioning must also be tested.

Serviceability Test

• Focus is requirements related to upgrading or fixing problems after installation.

• Coverage includes:

– Change procedures (adaptive, perfective, and corrective service scenarios)

– Supporting documentation

– System diagnostic tools

Performance Test

• Focus is requirements related to throughput, response time, memory utilization, input/ output rates, etc.

• Also very specialized in some organizations; sophisticated test-beds and instrumentation are the norm.

• Statistical testing based on an operational profile is often employed.

• Requirements must be stated in quantifiable terms.
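For instance, a requirement quantified as ‘‘95% of requests complete within 200 ms’’ (illustrative figures, not from the lecture) can be checked mechanically:

```python
# Checking a quantified performance requirement against observed
# response times. The 200 ms / 95% figures are assumed for illustration.

def check_response_times(samples_ms, limit_ms=200.0, quantile=0.95):
    # Pass iff at least `quantile` of the observed times meet the limit.
    within = sum(1 for t in samples_ms if t <= limit_ms)
    return within / len(samples_ms) >= quantile

# 96 fast responses and 4 slow ones: 96% within the limit -> passes.
samples = [50.0] * 96 + [500.0] * 4
assert check_response_times(samples)
```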

Stress Test

• Focus is system behavior at or near overload conditions (i.e., ‘‘pushing the system to failure’’).

• Often undertaken with performance testing.

• In general, products are required to exhibit ‘‘graceful’’ failures and non-abrupt performance degradation.

Security Test

• Focus is vulnerability of resources to unauthorized access or manipulation.

• Issues include:

– Physical security of computer resources, media, etc.,

– Login and password procedures/policies,

– Levels of authorization for data or procedural access, etc.

Software Compatibility Test

• Focus is compatibility with other products/systems in the environment and/or with interoperability standards.

• May also concern source- or object-code compatibility with different operating environment versions.

• AKA compatibility/conversion testing when conversion procedures or processes are involved.

Device and Configuration Test

• Focus is configurability for, and/or compatibility with, all supported hardware configurations.

• Particularly taxing for client/server-based applications…

• Tests are usually limited to combinations of ‘‘representative’’ devices for each supported protocol.

Recovery Test

• Focus is ability to recover from exceptional conditions associated with hardware, software, or people.

• This can involve:

– detecting exceptional conditions,
– switch-overs to standby systems,
– recovery of data,
– maintaining audit trails, etc.

• May also involve external procedures such as storing backup tapes, etc.

Reliability Test

• Requirements may be expressed as:

– the probability of no failure in a specified time interval, or as

– the expected mean time to failure.

• Appropriate interpretations for failure and time are critical.

• Utilizes statistical testing based on an operational profile.
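Under the common assumption of a constant failure rate λ, the two formulations of a reliability requirement are related by R(t) = e^(−λt) and MTTF = 1/λ. A small illustration (the failure rate is assumed, not from the lecture):

```python
# Relating the two reliability formulations under a constant failure
# rate: R(t) = exp(-lambda * t), MTTF = 1 / lambda. Figures are assumed.
import math

failure_rate = 0.001           # lambda: failures per hour (illustrative)
mttf = 1.0 / failure_rate      # mean time to failure: 1000 hours

def reliability(t_hours):
    # Probability of no failure in the interval [0, t].
    return math.exp(-failure_rate * t_hours)

# At t = MTTF, reliability has dropped to e^-1 (about 0.368).
assert abs(reliability(mttf) - math.exp(-1.0)) < 1e-12
```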
