Testing Fundamentals

Transcript of Testing Fundamentals

Page 1: Testing Fundamentals

Fall semester 2003, CSE565

Testing Fundamentals

Wei-Tek Tsai

Department of Computer Science and Engineering

Arizona State University

Tempe, AZ 85287

Page 2: Testing Fundamentals

Software Testing – Key Terms and Definitions

• Verification:

- Are we building the product right ?

• Validation:

- Are we building the right product ?

• Reliability:

- The probability that a given software program performs as expected, without error, for a period of time.

• Testing:

- Examination of the behavior of a software program over a set of sample data.

Page 3: Testing Fundamentals

Some Good Books on Testing

• Myers, The Art of Software Testing, 1979.

• B. Beizer: most of his books are good, and his recent book on black-box testing is particularly good.

• M. Ould and C. Unwin, Testing in Software Development, Cambridge University Press, 1987.

• R. C. Wilson, Software Rx: Secrets of Engineering Quality Software, Prentice Hall, 1997.

• S. Kirani and W. T. Tsai, “Testing Object-Oriented Software”, TR, University of Minnesota, 1994.

Page 4: Testing Fundamentals

Errors, Bugs and Failures

• Error: A human mistake.

• Fault: A bug that appears in a given program as the result of an error.

• Failure: The result of running an input sequence that triggers a fault and/or produces an output different from the specified output.

• One error can result in multiple bugs.

• Multiple errors can result in one bug.

• One bug can cause one or more failures.

• Multiple bugs can lead to one or multiple failures.
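To make these terms concrete, here is a small hypothetical sketch (Python, invented for this transcript rather than taken from the slides): one error introduces one fault, and only certain inputs turn that fault into a failure.

def is_valid_percentage(p):
    # Specification: p is valid if 0 <= p <= 100.
    # Error: the programmer misread the specification as "less than 100".
    # Fault: the condition below wrongly rejects the boundary value 100.
    return 0 <= p < 100

# Most inputs never exercise the faulty behavior, so no failure is observed.
assert is_valid_percentage(50) is True
assert is_valid_percentage(-1) is False

# The input 100 triggers the fault and produces a failure: the actual output
# (False) differs from the specified output (True).
print(is_valid_percentage(100))  # prints False, but the specification says True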

Page 5: Testing Fundamentals

Why Do We Need Software Testing ?

• No one can write perfect code all the time.

• Errors in commercial products cause loss in revenue.

• Failures in high-availability and safety-critical systems can cause serious, irreversible damage.

• Misunderstanding user requirements can lead to the development of perfectly good wrong products.

Page 6: Testing Fundamentals

Objectives of Testing

• Testing does not mean “Finding Bugs” ONLY.

• The Objectives of Software Testing are:

- Find Errors.

- Verify Requirements.

- Make Prediction about the product(s).

Of the objectives mentioned above, the last one is quite difficult. Why? Because prediction depends on several external factors in addition to the standard factors.

Page 7: Testing Fundamentals

Some Testing Criteria

• Robustness: Does the software component degrade gracefully as it approaches the limits given in its specification?

• Completeness: Does the software solve the problem completely?

• Consistency: Does the software component perform consistently, i.e., does it produce the same output each time for the same input(s)?

• Usability: Is the software easy to use?

• Testability: Is the software easily testable?

• Safety: If the software component is safety critical, is it safe to use?

Page 8: Testing Fundamentals

Why is Testing Difficult ?

• Generate test inputs
– How many inputs should be generated?
– Provide all the setup, environment, and databases similar to what the client has.

• Generate expected outputs
– Testing is generally done on a prototype. Will the actual system behave exactly like the prototype?

• Compare the test output with the expected output

Page 9: Testing Fundamentals

Cost of Testing

• Cost of test input generation (positive)
• Cost of expected output generation (positive)
• Cost of running the test
• Cost of comparing test results and their expected outputs (positive)
• Cost of finding bugs (negative cost)
• Cost of missing bugs (positive and can be large)
• Cost of test management such as bug reporting, bug tracking, scheduling (positive)
• Most research papers do not consider all the factors.

Page 10: Testing Fundamentals

Cost of Software Testing contd..

• Usually high; it can be as high as 70% to 90% of the overall cost, especially for projects that have a poor design and development phase.

• The cost of software testing can be reduced by automation. Almost all testing activities can be automated, e.g., test input generation, expected output generation, test case reuse, and test execution, although in practice many of these techniques are still highly manual.

Page 11: Testing Fundamentals

Levels of Testing

• Unit/module/component test
– Test individual units separately.
– Deals with finding logic errors, syntax errors, etc.
– Verify that the component adheres to its specification.

• Integration test
– Find interface defects.

– Verify component interactions to make sure they are correct.
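As an illustrative sketch (the component and test cases are invented, not from the course), a unit test exercises one component in isolation and verifies it against its specification:

import unittest

def discount_price(price, percent):
    # Hypothetical unit under test: reduce price by percent (0..100).
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountPriceUnitTest(unittest.TestCase):
    # Unit-level tests: the component is tested separately, against its spec.
    def test_typical_discount(self):
        self.assertAlmostEqual(discount_price(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(200.0, 150)

if __name__ == "__main__":
    unittest.main()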

Page 12: Testing Fundamentals

Levels of Testing contd..

• System test

– Verify the overall system functionality.

• Alpha testing

– Testing with select customers within the organization.

• Beta testing

– Testing with select customers external to the organization.

Page 13: Testing Fundamentals

Attitude(s) That Make A Good Tester.

• Independent.

• Customer Perspective

• Testing intended functionalities.

• Testing unintended functionalities.

• Professionalism.

Page 14: Testing Fundamentals

Attitude Of a Good Tester

• Independent

- Independent from the developer. Why? Developers tend to be biased when it comes to their own mistakes.

• Customer Perspective

- Must be able to think from a customer's perspective. Why? Ultimately the customer is the one who will use the product and who brings in the revenue, so a good tester must be able to think from the customer's perspective.

Page 15: Testing Fundamentals

Attitude Of a Good Tester

• Testing Intended Functionality

- This is one of the basic purposes of testing. A good tester is one who tests each and every intended functionality to make sure that the software is exactly what the client wanted.

• Testing Unintended Functionality

- Sometimes called break-it testing (dirty testing). In this process the tester intentionally tries to make the code fail. This helps detect special cases where the code may fail.

Page 16: Testing Fundamentals

Formal Technical Reviews

• Objectives
– To uncover errors in function, logic, or implementation.
– To verify that the software under review meets its requirements.
– To ensure that the software has been represented according to predefined standards.
– To achieve software that is developed in a uniform manner.
– To make projects more manageable.

Page 17: Testing Fundamentals

Formal Technical Reviews (cont’d)

• The FTR is actually a class of reviews:
– Includes walkthroughs, inspections, round-robin reviews, and other small-group technical assessments.
– The goal is to involve all the people involved in design, development, and testing so that they understand the state of the software product.
– To be effective, FTRs must be properly planned, controlled, and attended.

Page 18: Testing Fundamentals

Inspection Process

• Planning

• Preparation

• Meeting activities

• Rework

• Following up

Page 19: Testing Fundamentals

What is Inspection ?

• A formal statistical process control method for evaluating work products.

• What do these terms mean?

• Formal: follow a standard set of procedures and maintain a serious ambience during the inspection process.

• Statistical: collate data and use standard metrics.

• Process control method: decisions are made using the available metrics and statistics.

Page 20: Testing Fundamentals

White-Box Testing

• A test case design method that uses the control structure of the procedural design to derive test cases (a small sketch follows this list).
– Guarantee that all independent paths within a module have been exercised at least once.
– Exercise all logical decisions on both their true and false values.
– Execute all loops at their boundaries and within their operational bounds.
– Exercise internal data structures to assure their validity.
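Here is a minimal sketch of the idea (the function is made up for illustration): test inputs are chosen from the code's control structure so that the decision is exercised on both outcomes and the loop runs zero, one, and many times.

def count_negatives(values):
    # Control structure: one loop and one decision.
    count = 0
    for v in values:    # loop: exercise with 0, 1, and many iterations
        if v < 0:       # decision: exercise both the true and the false outcome
            count += 1
    return count

# White-box test cases derived from the control structure, not only from the spec:
assert count_negatives([]) == 0              # loop body never executes
assert count_negatives([-5]) == 1            # one iteration, decision true
assert count_negatives([3]) == 0             # one iteration, decision false
assert count_negatives([-1, 2, -3, 4]) == 2  # many iterations, both outcomes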

Page 21: Testing Fundamentals

Black-Box Testing

• Focuses on the functional requirements of the software. It is not an alternative to white-box testing; rather, it complements the white-box testing technique. It helps find:
– Runtime errors (missing function definitions, etc.).
– Interface errors.
– Performance errors, and
– Initialization and termination errors.

Page 22: Testing Fundamentals

Basis Path Testing

• Basis Path Testing is a technique that satisfies the goals of path testing by identifying a set of independent paths from which an arbitrary path through a computer program can be constructed.

• What is a basis path?

It is a unique path through the software, traversing no loop more than once; all possible paths are linear combinations of the basis paths.

Page 23: Testing Fundamentals

McCabe’s Basis Path Testing

• Draw a control flow graph.

• Calculate Cyclomatic Complexity.

• Choose a Basis Set of Paths.

• Generate Test Cases to test each of the paths selected above.

Page 24: Testing Fundamentals

McCabe Number

V(G) = e+2*p – n.

e = no. of edges.

n = no. of nodes.

p = no. of connected components.

The higher the McCabe Number, the higher the complexity of the software and the more error-prone it becomes.
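As a small worked example (the numbers are hypothetical): if a module's control flow graph has e = 9 edges, n = 7 nodes, and p = 1 connected component, then V(G) = 9 + 2*1 – 7 = 4, so a basis set for that module contains 4 independent paths, and 4 test cases are enough to execute each basis path at least once.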

Page 25: Testing Fundamentals

Data Flow Testing

• Tests the use of variables along different paths of program execution.

• The most common types of errors occur because a variable is used before it is initialized or used before it is defined.

• Global variables cause more problems than local variables.

• Very expensive to perform; it is used mainly to test high-performance applications and high-risk applications. (A small sketch of a data-flow anomaly follows.)
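As a tiny illustrative sketch (the function is invented), this is the kind of anomaly data flow testing looks for: a path on which a variable is used before it has ever been defined.

def shipping_cost(weight, express):
    # Data-flow fault: "rate" is defined only on the express path.
    if express:
        rate = 2.5
    # On the path where express is False, "rate" is used with no prior definition.
    return weight * rate

print(shipping_cost(10, True))   # define-then-use path: works, no failure observed
print(shipping_cost(10, False))  # use-without-definition path: raises UnboundLocalError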

Page 26: Testing Fundamentals

Equivalence Partitioning

• A functional testing criterion.

• Applicable when the inputs are independent, that is, when there are no input combinations to consider.

• How is EP done ?

• Divide the input space into finite partitions.

• For each partition defined, create a set of test cases. Develop test cases covering as many partitions as possible.

• For each invalid partition, develop additional test cases.

• Use a coverage matrix to keep track of the test cases (a small sketch follows).
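As a brief sketch (the input domain, partitions, and function are invented for illustration), consider a field that accepts an integer exam score from 0 to 100.

def is_valid_score(value):
    # Hypothetical unit under test: accepts integers in [0, 100].
    return isinstance(value, int) and 0 <= value <= 100

# Partitions of the input space (one valid, several invalid):
#   P1 (valid):   0 <= score <= 100
#   P2 (invalid): score < 0
#   P3 (invalid): score > 100
#   P4 (invalid): non-numeric input
# One representative test case per partition; a coverage matrix records which
# test case covers which partition.
test_cases = {"P1": (55, True), "P2": (-3, False),
              "P3": (105, False), "P4": ("abc", False)}

for partition, (value, expected) in test_cases.items():
    assert is_valid_score(value) == expected, partition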

Page 27: Testing Fundamentals

Boundary Value Analysis

• An important technique to detect errors occurring at component interfaces.

• Several errors tend to occur when components interact.

• Programmers tend to focus on how to implement their code correctly, and generally overlook how to handle the exceptional cases that MAY occur.

• As an example consider an API that tests if a point lies in a rectangle or not.

The CRect class has an API, BOOL PtInRect(POINT p), that accepts a POINT input parameter and returns a BOOL depending on the position of the point with respect to the rectangle.

Page 28: Testing Fundamentals

Boundary Value Testing

• From a programmer's point of view, the implementation is straightforward: check whether the point is within the co-ordinates of the rectangle and return the appropriate value.

• Some special cases:

- The point is “ON” the rectangle (i.e., on its boundary).

- The point is one of the vertices itself (a special case of the above).

• What should happen in these cases? Have these cases been taken care of by the developer? Boundary value testing helps uncover problems of this type. (A small sketch follows.)
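A minimal sketch of the idea (pt_in_rect below is a stand-in written for illustration, not the actual MFC CRect::PtInRect): boundary value test cases deliberately place the point on the edges and vertices of the rectangle, where the choice between < and <= in the implementation matters most.

def pt_in_rect(x, y, left, top, right, bottom):
    # Illustrative stand-in for an API like CRect::PtInRect. Whether points on
    # the boundary count as "inside" is exactly the kind of decision that
    # boundary value tests expose.
    return left <= x <= right and top <= y <= bottom

# Boundary value test cases for the rectangle (0, 0)-(10, 10):
assert pt_in_rect(5, 5, 0, 0, 10, 10)        # interior point
assert pt_in_rect(10, 5, 0, 0, 10, 10)       # point ON an edge
assert pt_in_rect(10, 10, 0, 0, 10, 10)      # point ON a vertex (special case)
assert not pt_in_rect(11, 10, 0, 0, 10, 10)  # just outside the boundary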

Page 29: Testing Fundamentals

Random Testing

• Select a random input from a given domain.
– The domain can be either the input or the output domain, but most of the time the input domain is used.

• Duran and Ntafos published a paper on random testing in the IEEE Transactions on Software Engineering in the 1980s; several topics about random testing were discussed there.
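A minimal sketch of random testing (the function under test, the seeded fault, and the oracle are all invented): inputs are drawn at random from the input domain and each actual output is compared with a trusted expected output.

import random

def clamp(x, low, high):
    # Hypothetical unit under test, with a seeded fault for negative in-range x.
    if x < low:
        return low
    if x > high:
        return high
    return abs(x)  # fault: should be "return x"; wrong whenever low <= x < 0

def oracle(x, low, high):
    # Trusted reference used to generate the expected output.
    return min(max(x, low), high)

random.seed(0)
failures = 0
for _ in range(1000):
    x = random.randint(-100, 100)            # select a random input from the domain
    if clamp(x, -50, 50) != oracle(x, -50, 50):
        failures += 1
print(failures, "failures observed in 1000 random tests")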

Page 30: Testing Fundamentals

Assumptions Made in the Paper

• Finding a single failure is equivalent to finding a fault.

• Domains of faults do not interact with each other.

• Each domain contains at most one fault.

• Failure rate is assumed to be uniform.

• Pr & Pp: probabilities of finding one or more faults using random and partition testing respectively.

• Er & Ep: the expected numbers of faults found using random and partition testing, respectively.

Page 31: Testing Fundamentals

Interesting Facts

• Pr/Pp = 90%

• Er/Ep = 90%

• The authors also performed some (close to ten) real experiments on random testing, and found that random testing was almost always as effective as partition testing.

• The authors concluded that the cost of comparing test results is similar for random testing and partition testing. However, the cost of generating random test inputs is low compared to test case generation in partition testing, so random testing should be taken seriously.