Post on 05-Jan-2016
Software Engineering
Testing (Concepts and Principles)
Objectives
To introduce the concepts and principles of testing
To summarize the debugging process
To consider a variety of testing and debugging methods
(Figure: the analysis, design, code and test phases)
Software Testing
Narrow View: Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user. A good test case is one that has a high probability of finding an as-yet-undiscovered error; a successful test is one that uncovers such an error.
Broad View: Testing is the process used to ensure that the software conforms to its specification and meets the user requirements. Validation asks “Are we building the right product?”; verification asks “Are we building the product right?”. Testing takes place at all stages of software engineering.
What Testing Shows
errors
requirements conformance
performance
an indication of quality
Testing Principles
All tests should be traceable to customer requirements
Tests should be planned long before testing begins
80% of errors occur in 20% of classes (the Pareto principle)
Testing should begin “in the small” and progress toward testing “in the large”
Exhaustive testing is not possible
To be most effective, testing should be conducted by an independent third party
Who Tests the Software?
Developer: understands the system but will test “gently” and is driven by delivery
Independent tester: must learn about the system but will attempt to break it and is driven by quality
Software Testability
Software that is easy to test exhibits:
1. Operability—“the better it works, the more efficiently it can be tested”. Bugs are easier to find in software that at least executes
2. Observability—“what you see is what you test”. The results of each test case are readily observed
3. Controllability—“the better we can control the software, the more testing can be automated and optimized”. Easier to set up test cases
4. Decomposability—“by controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting”. Testing can be targeted
5. Simplicity—“the less there is to test, the more quickly we can test it”. Reduce complex architecture and logic to simplify tests
6. Stability—“the fewer the changes, the fewer the disruptions to testing”. Changes disrupt test cases
7. Understandability—“the more information we have, the smarter we will test”
Test Case Design
A test case is a controlled experiment that tests the system
Process:
Objectives—to uncover errors
Criteria—in a complete manner
Constraints—with a minimum of effort and time
In practice, test cases are often designed badly, in an ad hoc fashion
“Bugs lurk in corners and congregate at boundaries.” Good test case design applies this maxim
Exhaustive Testing (infeasible)
Consider two nested loops containing four if..then..else statements, where each loop can execute up to 20 times. There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program
Selective Testing (feasible)
Test a carefully selected execution path. Cannot be comprehensive
Testing Methods
1. Black Box: examines fundamental interface aspects without regard to internal structure
2. White (Glass) Box: closely examines the internal procedural detail of system components
3. Debugging: fixing errors identified during testing
(Figure: testing strategies draw on both white-box and black-box methods)
[1] White-Box Testing
Goal: ensure that all statements and conditions have been executed at least once
Derive test cases that:
1. Exercise all independent execution paths
2. Exercise all logical decisions on both their true and false sides
3. Execute all loops at their boundaries and within operational bounds
4. Exercise internal data structures to ensure validity
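A minimal sketch of points 1–2, assuming a hypothetical `classify` function (not from the slides): choose inputs that drive each logical decision to both its true and false outcomes.

```python
def classify(x):
    """Return a label for x; two decisions give four branch outcomes."""
    if x < 0:
        label = "negative"
    else:
        label = "non-negative"
    if x % 2 == 0:
        label += "-even"
    else:
        label += "-odd"
    return label

# Exercise each logical decision on both its true and false sides.
assert classify(-2) == "negative-even"       # x < 0 true,  even true
assert classify(-1) == "negative-odd"        # x < 0 true,  even false
assert classify(4) == "non-negative-even"    # x < 0 false, even true
assert classify(3) == "non-negative-odd"     # x < 0 false, even false
```

Four tests suffice here to cover every branch, even though exhaustive testing of all integer inputs would be infeasible.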
Why Cover All Paths?
The likelihood of logic errors and incorrect assumptions is inversely proportional to the probability that a program path will be executed
We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis
Typographical errors are random; it is likely that untested paths will contain some
Basis Path Testing
1. Provides a measure of the logical complexity of a method and a guide for defining a basis set of execution paths
2. Represent control flow using flow graph notation: nodes represent processing, arrows represent control flow
(Figure: flow graph notation for sequence, if and while constructs)
Cyclomatic Complexity
2. Compute the cyclomatic complexity V(G) of a flow graph G in any of three ways:
V(G) = number of simple predicates (decisions) + 1, or
V(G) = E - N + 2 (where E is the number of edges and N the number of nodes), or
V(G) = number of enclosed areas + 1
In the example flow graph, V(G) = 4
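Under the assumption that the slide's flow graph is the one implied by the basis paths listed under Basis Path Testing (nodes 1–8), the E - N + 2 formula can be checked in a few lines of Python:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flow graph given as (src, dst) pairs."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Edges reconstructed from the basis paths 1-2-3-8, 1-2-3-8-1-2-3,
# 1-2-4-5-7-8 and 1-2-4-6-7-8 (an assumption, since the figure is missing).
edges = [(1, 2), (2, 3), (2, 4), (3, 8), (8, 1),
         (4, 5), (4, 6), (5, 7), (6, 7), (7, 8)]
print(cyclomatic_complexity(edges))  # 10 edges - 8 nodes + 2 = 4
```

The result agrees with the slide's value of V(G) = 4.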
Cyclomatic Complexity and Errors
A number of industry studies have indicated that the higher V(G), the higher the probability of errors
(Figure: plot of modules against V(G); modules in the high-V(G) range are more error prone)
Basis Path Testing
3. V(G) is the number of linearly independent paths through the program (each has at least one edge not covered by any other path)
4. Derive a basis set of V(G) independent paths:
Path 1: 1-2-3-8
Path 2: 1-2-3-8-1-2-3
Path 3: 1-2-4-5-7-8
Path 4: 1-2-4-6-7-8
5. Prepare test cases that will force the execution of each path in the basis set
(Figure: flow graph with nodes numbered 1-8)
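The slide's eight-node graph cannot be fully reconstructed, but the idea of step 5, preparing inputs that force each basis path, can be sketched with a smaller hypothetical method (one loop plus one if/else, so V(G) = 3) whose recorded trace confirms which path actually ran:

```python
def process(values):
    """Hypothetical method: a loop (one decision) plus an if/else (another)."""
    trace = ["enter"]
    total = 0
    for v in values:                 # loop decision
        trace.append("loop")
        if v >= 0:                   # branch decision
            trace.append("add")
            total += v
        else:
            trace.append("skip")
    trace.append("exit")
    return total, trace

# One test per basis path: skip the loop, take the if-branch, take the else-branch.
assert process([])[1] == ["enter", "exit"]
assert process([5])[1] == ["enter", "loop", "add", "exit"]
assert process([-5])[1] == ["enter", "loop", "skip", "exit"]
```

Recording a trace is one way to verify that a test case really exercised the intended path rather than merely producing the expected output.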
Basis Path Tips
You don’t need a flow graph, but it helps in tracing program paths
Count each simple logical test once; compound tests (e.g. switch statements) count as 2 or more
Basis path testing should be applied to critical modules only
When preparing test cases, use boundary values for the conditions
Other White Box Methods
Condition Testing: exercises the logical (boolean) conditions in a program
Data Flow Testing: selects test paths according to the location of the definition and use of variables in a program
Loop Testing: focuses on the validity of loop constructs
Loop Testing
Loop constructs fall into four classes: simple loops, nested loops, concatenated loops and unstructured loops
Simple Loops
Test cases for simple loops, where n is the maximum number of allowable passes:
Skip the loop entirely
Only one pass through the loop
Two passes through the loop
m passes through the loop (m < n)
(n-1), n and (n+1) passes through the loop
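A sketch of these cases in Python, assuming a hypothetical loop under test that makes at most n = 10 passes; each prescribed pass count becomes one test:

```python
def sum_first(values, n):
    """Hypothetical loop under test: at most n passes through the loop."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

N = 10                                # n: maximum number of allowable passes
data = list(range(1, N + 2))          # enough items to attempt n+1 passes
for passes in (0, 1, 2, 5, N - 1, N, N + 1):
    # skip, one pass, two passes, m < n, then n-1, n and n+1 passes
    expected = sum(data[:min(passes, N)])
    assert sum_first(data[:passes], N) == expected
```

The n+1 case checks that the loop guard really caps execution at n passes, which is exactly the kind of boundary where bugs congregate.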
Nested Loops
Test cases for nested loops:
1. Start at the innermost loop. Set all the outer loops to their minimum iteration parameter values
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue until the outermost loop has been tested
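The three steps can be sketched as test-configuration generation for two hypothetical nested loops, each iterating up to 20 times (as in the exhaustive-testing example); the bounds and typical value are assumptions for illustration:

```python
# Hypothetical bounds: both loops iterate between MIN and MAX times.
MIN, MAX, TYPICAL = 0, 20, 5

cases = []
# Steps 1-2: vary the innermost loop while the outer loop holds its minimum.
for inner in (MIN + 1, TYPICAL, MAX - 1, MAX):
    cases.append({"outer": MIN, "inner": inner})
# Step 3: move out one loop; vary it while the inner loop holds a typical value.
for outer in (MIN + 1, TYPICAL, MAX - 1, MAX):
    cases.append({"outer": outer, "inner": TYPICAL})

print(len(cases))  # 8 configurations instead of 21 x 21 exhaustive combinations
```

Eight targeted configurations replace the 441 combinations exhaustive testing of both loop counters would require.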
Concatenated Loops
Test cases for concatenated loops: if the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops
[2] Black-Box Testing
Complementary to white-box testing. Derive external conditions that fully exercise all functional requirements
(Figure: black-box testing relates requirements and events to input and output)
Black Box Strengths
Attempts to find errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behaviour or performance errors
Initialization or termination errors
Black box testing is performed during the later stages of testing
There are a variety of black box techniques: comparison testing (develop independent versions of the system), orthogonal array testing (sampling of an input domain which has several variables)
Black Box Methods
Equivalence Partitioning: divide the input domain into classes of data. Each test case then uncovers whole classes of errors.
Examples: valid data (user-supplied commands, file names, graphical data such as mouse picks); invalid data (data outside the bounds of the program, physically impossible data, a proper value supplied in the wrong place)
Boundary Value Analysis: more errors tend to occur at the boundaries of the input domain, so select test cases that exercise bounding values
Example: if an input condition specifies a range bounded by values a and b, design test cases with values a and b, and just above and below a and b
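Both methods can be sketched together, assuming a hypothetical `ticket_price` function whose input condition is the range 0 <= age <= 120:

```python
def ticket_price(age):
    """Hypothetical function: valid input condition is 0 <= age <= 120."""
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return 5 if age < 18 else 10

# Equivalence partitioning: one representative per class of input.
assert ticket_price(10) == 5          # valid class: minors
assert ticket_price(40) == 10         # valid class: adults
for bad in (-7, 300):                 # invalid classes: below and above the range
    try:
        ticket_price(bad)
        assert False, "should have raised"
    except ValueError:
        pass

# Boundary value analysis: the bounds a=0 and b=120, and just outside them.
assert ticket_price(0) == 5
assert ticket_price(120) == 10
for edge in (-1, 121):
    try:
        ticket_price(edge)
        assert False, "should have raised"
    except ValueError:
        pass
```

One representative value per partition plus the four boundary probes gives broad coverage from a handful of cases.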
[3] Debugging
Testing is a structured process that identifies an error’s “symptoms”; debugging is a diagnostic process that identifies an error’s “source”
(Figure: the debugging process. Execution of test cases produces results; debugging suggests suspected causes, which are narrowed to identified causes; corrections are followed by regression tests and new test cases.)
Debugging Effort
Debugging effort divides into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests
Definition (Regression Tests): re-execution of a subset of test cases to ensure that changes do not have unintended side effects
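A sketch of the definition above using Python's standard `unittest` (the test class and its cases are invented for illustration): a regression run re-executes a chosen subset of the existing test cases after a change.

```python
import unittest

class MathTests(unittest.TestCase):
    """Invented test cases standing in for an existing suite."""
    def test_add(self):
        self.assertEqual(1 + 1, 2)
    def test_upper(self):
        self.assertEqual("bug".upper(), "BUG")

# Regression testing: re-execute a subset of the test cases (here just
# test_add) to check that a change had no unintended side effects.
suite = unittest.TestSuite([MathTests("test_add")])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In practice the subset would be the tests covering the changed module plus any tests that have failed before.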
Symptoms and Causes
The symptom and the cause may be geographically separated
The symptom may disappear when another problem is fixed
The cause may be due to a combination of non-errors
The cause may be due to a system or compiler error
The cause may be due to assumptions that everyone believes
The symptom may be intermittent
Not all bugs are equal
Damage ranges from mild and annoying, through disturbing and serious, to extreme, catastrophic and infectious
Bug Type
Bug Categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
Debugging Techniques
Brute Force: use when all else fails. Memory dumps and run-time traces produce a mass of information amongst which the error may be found
Backtracking: works in small programs where there are few backward paths. Trace the source code backwards from where the error was observed to its source
Cause Elimination: create a set of “cause hypotheses”, then use error data (or further tests) to prove or disprove these hypotheses
But debugging is an art: some people have innate prowess and others don’t
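As one possible illustration of the brute-force technique (the buggy function is invented), Python's `sys.settrace` hook can produce the kind of run-time trace the slide mentions: a mass of executed-line information to sift through for the error.

```python
import sys

def buggy_average(values):
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)   # bug: off-by-one denominator

trace = []
def tracer(frame, event, arg):
    if event == "line":
        trace.append(frame.f_lineno)   # record every executed line number
    return tracer

sys.settrace(tracer)                   # turn on the run-time trace
result = buggy_average([2, 4, 6])
sys.settrace(None)

print(result)  # 6.0 instead of the expected 4.0; the recorded line numbers
               # show exactly which statements ran before the wrong value appeared
```

The trace narrows the search, but spotting the faulty denominator still takes the diagnostic judgement the slide calls an art.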
Debugging Tips
Don’t immediately dive into the code; think about the symptom you are seeing
Use tools (e.g. dynamic debuggers) to gain further insight
If you are stuck, get help from someone else
Ask these questions before “fixing” the bug:
1. Is the cause of the bug reproduced in another part of the program?
2. What bug might be introduced by the fix?
3. What could have been done to prevent the bug in the first place?
Be absolutely sure to conduct regression tests when you do “fix” the bug