Post on 20-Jul-2016
Testing Foundation Level Training Program
Prepared by Radhesyam Yarramsetty, Testing Development Practices
04/28/23 1
It’s Your Course
Please join in:
- Asking questions
- Making appropriate comments
- Sharing experiences
Second opinions and disagreements are welcome.
What is Testing?
Software testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software.
Software testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.
Software testing furnishes a critique that compares the state and behavior of the product against a specification.
Continued…
Software testing can prove the presence of errors, but never their absence.
Software testing is a process of constructive destruction.
Software testing involves operating a system or application under controlled conditions and evaluating the results.
Software Systems Context
Software systems are an increasing part of life, from business applications (e.g. banking) to consumer products (e.g. cars).
Most people have had an experience with software that did not work as expected.
Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.
Percentage of Defects Found in Different Stages
- Requirements: 56%
- Design: 27%
- Coding: 7%
- Others: 10%
Causes of software defects
Error: a human action that produces an incorrect result.
Fault: a manifestation of an error in software
- also known as a defect or bug
- if executed, a fault may cause a failure
Failure: deviation of the software from its expected delivery or service (a found defect).
A failure is an event; a fault is a state of the software, caused by an error.
Error - Fault - Failure
A person makes an error…
…that creates a fault in the software…
…that can cause a failure in operation.
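The chain above can be illustrated with a small hypothetical sketch: a programmer's error introduces a fault into the code, and the fault only becomes a visible failure when the code is executed with inputs that expose it. The `average` function and its inputs are invented for the example.

```python
# Hypothetical illustration of error -> fault -> failure.
# The programmer's error: they meant to divide by the number of
# values but wrote a hard-coded 2 -- a fault lying dormant in the code.
def average(values):
    return sum(values) / 2   # fault: should be len(values)

# With exactly two values the fault is not exposed: no failure.
print(average([4, 6]))       # 5.0 -- looks correct

# With three values the fault is executed and causes a failure:
# the observed output deviates from the expected result (5.0).
print(average([3, 5, 7]))    # 7.5, expected 5.0
```

Note that the fault exists from the moment the code is written (a state), while the failure only occurs when that code runs (an event) - which is why testing must execute the code with inputs likely to expose faults.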
Reliability versus faults
Reliability: the probability that software will not cause the failure of the system for a specified time under specified conditions.
- Can a system be fault-free? (zero faults, right first time)
- Can a software system be reliable but still have faults?
- Is a “fault-free” software application always reliable?
Why do faults occur in software?
Software is written by human beings
- who know something, but not everything
- who have skills, but aren’t perfect
- who do make mistakes (errors)
under increasing pressure to deliver to strict deadlines
- no time to check, but assumptions may be wrong
- systems may be incomplete
Why do faults occur in software?
- Complexity of the code and infrastructure
- Changing and meshing technologies
- Many system interactions
- Environmental conditions
- Misuse (deliberate and accidental)
What do software faults cost?
huge sums
- Ariane 5 ($7 billion)
- Mariner space probe to Venus ($250m)
- American Airlines ($50m)
very little or nothing at all
- minor inconvenience
- no visible or physical detrimental impact
software is not “linear”: a small input may have a very large effect
Safety-critical systems
software faults can cause death or injury
- radiation treatment kills patients (Therac-25)
- train driver killed
- aircraft crashes (Airbus & Korean Airlines)
- bank system overdraft letters cause suicide
So why is testing necessary?
- because software is likely to have faults
- to learn about the reliability of the software
- to fill the time between delivery of the software and the release date
- to prove that the software has no faults
- because testing is included in the project plan
- because failures can be very expensive
- to avoid being sued by customers
- to stay in business
General Testing Principles
Exhaustive testing is impossible.
Testing everything (all combinations of inputs and preconditions) is not possible.
Instead of exhaustive testing, we can use risks and priorities to focus testing efforts.
Why not just "test everything"?
system has 20 screens
average: 4 menus, 3 options per menu
average: 10 loops/options; 2 types of input per field (date of birth as Jan 3 or 3/1; number as integer or decimal); around 10² possible values
Total for ‘exhaustive’ testing: 20 × 4 × 3 × 10 × 2 × 100 = 480,000 tests
At 1 second per test: 8,000 mins = 133 hrs = 17.7 days (not counting finger trouble, faults or retest)
At 10 secs per test = 34 wks; 1 min = 4 yrs; 10 min = 40 yrs
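The arithmetic on this slide can be checked in a few lines. The slide does not state its working-time assumptions, so the figures below assume roughly 7.5-hour working days, 5-day weeks and about 240 working days per year - with those assumptions the results match the slide's 17.7 days and ~4 years.

```python
# Recomputing the slide's 'exhaustive testing' arithmetic.
# Working-time assumptions (7.5 h/day, 5 days/week, ~240 days/year)
# are inferred, not stated on the slide.
screens, menus, options = 20, 4, 3
fields, input_types, values = 10, 2, 100

total_tests = screens * menus * options * fields * input_types * values
assert total_tests == 480_000

def duration(seconds_per_test):
    hours = total_tests * seconds_per_test / 3600
    days = hours / 7.5                      # working days
    return hours, days, days / 5, days / 240  # hours, days, weeks, years

hours, days, weeks, years = duration(1)
print(f"1 s/test: {hours:.0f} h = {days:.1f} working days")   # ~17.8 days

hours, days, weeks, years = duration(60)
print(f"1 min/test: about {years:.1f} working years")         # ~4.4 years
```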
Exhaustive testing?
What is exhaustive testing?
- when all the testers are exhausted
- when all the planned tests have been executed
- exercising all combinations of inputs and preconditions
How much time will exhaustive testing take?
- infinite time
- not much time
- an impractical amount of time
The testing paradox
Purpose of testing: to find faults.
Finding faults destroys confidence.
Purpose of testing: to destroy confidence.
The best way to build confidence is to try to destroy it.
Purpose of testing: to build confidence.
Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs.
To overcome this ‘Pesticide Paradox’, the test cases need to be regularly reviewed and revised, and new and different test cases need to be written.
Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects.
It reduces the probability of undiscovered defects remaining in the system.
If no defects are found, it is not a proof of correctness.
Early testing
Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing or show the most operational failures.
Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an eCommerce site.
Absence of Error Fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.
Exercise:
Which general testing principles are characterized by the descriptions below?
W. Early Testing
X. Defect Clustering
Y. Pesticide Paradox
Z. Absence-of-Error Fallacy
1. Testing should start at the beginning of the project.
2. Conformance to requirements and fitness for use.
3. A small number of modules contain the most defects.
4. Test cases must be regularly reviewed and revised.
A. W1, X2, Y3 and Z4
B. W1, X3, Y4 and Z2
C. W2, X4, Y2 and Z3
D. W1, X4, Y2 and Z3
How much testing is enough?
- it’s never enough
- when you have done what you planned
- when your customer/user is happy
- when you have proved that the system works correctly
- when you are confident that the system works correctly
- it depends on the risks for your system
How much testing?
It depends on RISK:
- risk of missing important faults
- risk of incurring failure costs
- risk of releasing untested or under-tested software
- risk of losing credibility and market share
- risk of missing a market window
- risk of over-testing, ineffective testing
So little time, so much to test…
Test time will always be limited. Use RISK to determine:
- what to test first
- what to test most
- how thoroughly to test each item
i.e. where to place the emphasis
Terms to Remember
Bug, defect, or fault
Error or mistake (IEEE 610)
Failure
Quality
Risk
Test / test case
Code
Debugging
Software development
Review
Requirements
Test objective
Terms to Remember continues….
Test basis
Debugging
Software development
Review
Requirements
Testware
Test plan
Test strategy
Test log
Exit criteria
Test summary report
Test coverage
Test condition
Alpha and beta testing
Alpha Testing: testing by potential users/customers or an independent test team at the developer’s site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta Testing: operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Testing and quality(K2)
Testing measures software quality.
Testing can find faults; when they are removed, software quality (and possibly reliability) is improved.
What does testing test?
- system function, correctness of operation
- non-functional qualities: reliability, usability, maintainability, reusability, testability, etc.
- Software Product Quality (ISO 9126)
What is Quality ?
Meeting the customer requirements, first time and every time.
Quality is much more than the absence of defects.
Quality can only be seen through the eyes of the customers.
In simple words, exceeding customer expectations assures meeting all the definitions of quality.
What are software quality factors?
Correctness: Extent to which a program satisfies and fulfills the users' mission objectives.
Reliability: Extent to which a program can be expected to perform its intended function.
Efficiency: Amount of computing resources and code required by a program to perform a function.
Integrity: Extent to which access to software or code can be controlled.
Quality factors continue…
Usability: Effort required to learn, operate, prepare input for, and interpret output of the program.
Maintainability: Effort required to locate and fix an error in an operational program.
Testability: Effort required to test a program to ensure that it performs its intended function.
Flexibility: Effort required to modify an operational program.
Quality factors continue….
Portability: Effort required to transfer from one configuration to another.
Reusability: Extent to which a program can be used in other applications, related to the packaging and scope of the functions that the program performs.
Interoperability: Effort required to couple one system with another.
Other factors that influence testing
- contractual requirements
- legal requirements
- industry-specific requirements, e.g. safety-critical or safety-related such as railroad switching or air traffic control
Test Planning and control (K1)
Test planning is the activity of verifying the mission of testing, defining the objectives of testing, and specifying the test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan and reporting the status, including deviations from the plan. It should be monitored throughout the project.
Test Planning - different levels
Company level: Test Policy, Test Strategy
Project level (IEEE 829): High-Level Test Plan (one for each project)
Test stage level (IEEE 829): Detailed Test Plans (one for each stage within a project, e.g. Component, System, etc.)
The test process
planning (detailed level) → specification → execution → recording → check completion
Test specification
Within the test process (planning → specification → execution → recording → check completion), the specification stage consists of:
- identify conditions
- design test cases
- build tests
A good test case
- effective: finds faults
- exemplary: represents others
- evolvable: easy to maintain
- economic: cheap to use
Test specification
Test specification can be broken down into three distinct tasks:
1. identify: determine ‘what’ is to be tested (identify test conditions) and prioritise
2. design: determine ‘how’ the ‘what’ is to be tested (i.e. design test cases)
3. build: implement the tests (data, scripts, etc.)
Task 1: identify conditions
List the conditions that we would like to test:
- use the test design techniques specified in the test plan
- there may be many conditions for each system function or attribute
Prioritise the test conditions:
- must ensure the most important conditions are covered
(determine ‘what’ is to be tested and prioritise)
Task 2: design test cases
Design test input and test data
- each test exercises one or more test conditions
Determine expected results
- predict the outcome of each test case: what is output, what is changed and what is not changed
Design sets of tests
- different test sets for different objectives such as regression, building confidence, and finding faults
(determine ‘how’ the ‘what’ is to be tested)
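A test case designed this way pairs each input with an expected result predicted in advance of execution. A minimal sketch - the function under test (`classify`) and its test conditions are hypothetical, invented for illustration:

```python
# Hypothetical function under test: classify an age value.
def classify(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

# Each test case exercises one test condition and records the
# expected result, predicted before the test is run.
test_cases = [
    {"condition": "below boundary", "input": 17, "expected": "minor"},
    {"condition": "on boundary",    "input": 18, "expected": "adult"},
    {"condition": "above boundary", "input": 19, "expected": "adult"},
]

for tc in test_cases:
    actual = classify(tc["input"])
    result = "PASS" if actual == tc["expected"] else "FAIL"
    print(f'{tc["condition"]}: {result}')
```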
Designing test cases
Design test cases for the most important test conditions first and the least important last, so that the available time is spent where it matters most.
Task 3: build test cases
Prepare test scripts
- the less system knowledge the tester has, the more detailed the scripts will have to be
- scripts for tools have to specify every detail
Prepare test data
- data that must exist in files and databases at the start of the tests
Prepare expected results
- should be defined before the test is executed
(implement the test cases)
Test execution
Execution
Execute prescribed test cases
- execute the test suites and individual test cases
- would not execute all test cases if:
  • testing only fault fixes
  • too many faults found by early test cases
  • time pressure
- can be performed manually or automated
Test recording
Test recording 1
The test record contains the identities and versions of:
- the software under test
- the test specifications
Follow the plan:
- mark off progress on the test script
- document actual outcomes from the test
- capture any other ideas you have for new test cases
- note that these records are used to establish that all test activities have been carried out as specified
Test recording 2
Compare the actual outcome with the expected outcome. Log discrepancies accordingly:
- software fault
- test fault (e.g. expected results wrong)
- environment or version fault
- test run incorrectly
Log coverage levels achieved (for measures specified as test completion criteria).
After the fault has been fixed, repeat the required test activities (execute, design, plan).
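The compare-and-log step can be sketched as a small helper. The discrepancy categories come from the slide; the function name, log structure and test IDs are invented for illustration:

```python
# Sketch of a test-recording step: compare the actual outcome
# against the expected outcome and log the result.
def record(test_id, expected, actual, log):
    if actual == expected:
        log.append((test_id, "pass", None))
    else:
        # In practice the tester then classifies the discrepancy:
        # software fault, test fault, environment/version fault,
        # or test run incorrectly. Here we just record the mismatch.
        log.append((test_id, "fail", f"expected {expected!r}, got {actual!r}"))

log = []
record("TC-001", 5.0, 5.0, log)
record("TC-002", 5.0, 7.5, log)
for entry in log:
    print(entry)
```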
Check test completion
specification execution recording checkcompletion
Planning (detailed level)
04/28/23 53
Check test completion
Test completion criteria are specified in the test plan
If not met, need to repeat test activities, e.g. test specification to design more tests
Test completion criteria
Completion or exit criteria apply to all levels of testing, to determine when to stop:
- coverage, using a measurement technique, e.g.
  • branch coverage for unit testing
  • user requirements
  • most frequently used transactions
- faults found (e.g. versus expected)
- cost or time
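Checking completion criteria reduces to comparing measured values against the thresholds recorded in the test plan. A sketch - the criteria names and numbers here are invented for the example, not taken from any real plan:

```python
# Illustrative exit-criteria check. In practice the thresholds
# come from the test plan; these values are made up.
criteria = {"branch_coverage": 0.85, "requirements_covered": 1.00}
measured = {"branch_coverage": 0.78, "requirements_covered": 1.00}

unmet = {k: (measured[k], criteria[k])
         for k in criteria if measured[k] < criteria[k]}

if unmet:
    # Coverage too low: repeat test activities, e.g. go back to
    # test specification and design more tests.
    print("exit criteria not met:", unmet)
else:
    print("exit criteria met - testing can stop")
```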
Why test?
- build confidence
- prove that the software is correct
- demonstrate conformance to requirements
- find faults
- reduce costs
- show the system meets user needs
- assess the software quality
Faults found vs. confidence over time: each fault found reduces confidence. But does no faults found = confidence?
A traditional testing approach
Show that the system:
- does what it should
- doesn’t do what it shouldn’t
Fastest achievement: easy test cases
Goal: show working
Success: system works
Result: faults left in
A better testing approach
Show that the system:
- does what it shouldn’t
- doesn’t do what it should
Fastest achievement: difficult test cases
Goal: find faults
Success: system fails
Result: fewer faults left in
Who wants to be a tester?
- A destructive process
- Brings bad news (“your baby is ugly”)
- Under the worst time pressure (at the end)
- Needs a different view, a different mindset (“What if it isn’t?”, “What could go wrong?”)
- How should fault information be communicated (to authors and managers)?
Testers have the right to:
- accurate information about progress and changes
- insight from developers about areas of the software
- delivered code tested to an agreed standard
- be regarded as a professional (no abuse!)
- find faults!
- challenge specifications and test plans
- have reported faults taken seriously (even unreproducible ones)
- make predictions about future fault levels
- improve your own testing process
Testers have responsibility to:
- follow the test plans, scripts etc. as documented
- report faults objectively and factually (no abuse!)
- check tests are correct before reporting s/w faults
- remember it is the software, not the programmer, that you are testing
- assess risk objectively
- prioritise what you report
- communicate the truth
Levels of independence
- None: tests designed by the person who wrote the software
- Tests designed by a different person
- Tests designed by someone from a different department or team (e.g. test team)
- Tests designed by someone from a different organisation (e.g. agency)
- Tests generated by a tool (low-quality tests?)
Re-testing after faults are fixed
Run a test, it fails, fault reported.
New version of software with fault “fixed”.
Re-run the same test (i.e. re-test):
- must be exactly repeatable
- same environment and versions (except for the software which has been intentionally changed!)
- same inputs and preconditions
If the test now passes, the fault has been fixed correctly - or has it?
Re-testing (re-running failed tests)
Re-test to check the fault is now fixed. Note that new faults introduced by the fault fix are not found during re-testing alone.
Regression test
Regression testing looks for any unexpected side-effects of a change - though it can’t guarantee to find them all.
Regression testing 1
- a misnomer: really “anti-regression” or “progression” testing
- a standard set of tests - the regression test pack
- at any level (unit, integration, system, acceptance)
- well worth automating
- a developing asset, but one that needs to be maintained
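A regression test pack is simply a standard suite that can be re-run unattended after every change. A minimal sketch using Python's unittest module - the `discount` function and the individual tests are hypothetical:

```python
import unittest

# Hypothetical function under test.
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Standard regression pack: re-run in full after every change,
# including tests added when earlier faults were fixed.
class DiscountRegressionPack(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(discount(80.0, 0), 80.0)

    def test_old_fault_fix(self):
        # Added when a rounding fault was fixed; kept in the pack
        # to catch any regression of that fix.
        self.assertEqual(discount(10.0, 33), 6.7)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionPack)
    unittest.TextTestRunner().run(suite)
```

Because the pack is automated, it can be run as often as desired (e.g. every night) at negligible marginal cost - which is exactly why regression testing is the classic payoff for test automation.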
Regression testing 2
Regression tests are performed:
- after software changes, including fault fixes
- when the environment changes, even if application functionality stays the same
- for emergency fixes (possibly a subset)
Regression test suites:
- evolve over time
- are run often
- may become rather large
Regression testing 3
Maintenance of the regression test pack:
- eliminate repetitive tests (tests which exercise the same test condition)
- combine test cases (e.g. if they are always run together)
- select a different subset of the full regression suite to run each time a regression test is needed
- eliminate tests which have not found a fault for a long time (e.g. old fault-fix tests)
Regression testing and automation
Test execution tools (e.g. capture replay) are regression testing tools - they re-execute tests which have already been executed
Once automated, regression tests can be run as often as desired (e.g. every night)
Automating tests is not trivial (it generally takes 2 to 10 times longer to automate a test than to run it manually).
Don’t automate everything - plan what to automate first, only automate if worthwhile
Expected results
Expected results should be predicted in advance as part of the test design process:
- the ‘Oracle Assumption’ assumes that the correct outcome can be predicted
Why not just look at what the software does and assess it at the time?
- subconscious desire for the test to pass - less work to do, no incident report to write up
- “it looks plausible, so it must be OK” - less rigorous than calculating in advance and comparing
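Predicting the expected result before execution, rather than eyeballing the output afterwards, can be as simple as working the answer out from the specification independently of the implementation. A sketch - the `sales_tax` function, the 17.5% rate and the test values are all invented for illustration:

```python
# Illustrative oracle: the expected result is calculated in
# advance from the specification, not read off the program.
def sales_tax(amount):          # implementation under test
    return round(amount * 0.175, 2)

# Expected result predicted beforehand from the spec
# ("17.5% of the amount, rounded to the nearest cent").
test_input = 40.00
expected = 7.00                 # 40.00 * 0.175, worked out by hand

actual = sales_tax(test_input)
# Rigorous comparison against the pre-computed expectation --
# not "it looks plausible, so it must be OK".
print("PASS" if actual == expected else f"FAIL: got {actual}")
```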
Prioritising tests
We can’t test everything.
There is never enough time to do all the testing you would like.
So what testing should you do?
Most important principle
Prioritise tests so that, whenever you stop testing, you have done the best testing in the time available.
How to prioritise?
Possible ranking criteria (all risk-based):
- test where a failure would be most severe
- test where failures would be most visible
- test where failures are most likely
- ask the customer to prioritise the requirements
- what is most critical to the customer’s business
- areas changed most often
- areas with most problems in the past
- most complex areas, or technically critical areas
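Risk-based ranking is often approximated as likelihood × impact. A sketch with invented areas and scores - in practice both factors would come from the risk analysis, not be made up as here:

```python
# Illustrative risk-based prioritisation: score each area by
# likelihood and impact of failure (scores invented), then
# test in descending risk order.
areas = [
    {"name": "payment processing", "likelihood": 3, "impact": 5},
    {"name": "report formatting",  "likelihood": 4, "impact": 1},
    {"name": "login",              "likelihood": 2, "impact": 4},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Whenever testing stops, the most important areas have been
# covered first -- the principle on the previous slide.
ranked = sorted(areas, key=lambda a: a["risk"], reverse=True)
for area in ranked:
    print(f'{area["name"]}: risk {area["risk"]}')
```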
Evaluating exit criteria and reporting
Checking test logs against the exit criteria specified in test planning.
Assessing if more tests are needed or if the exit criteria specified should be changed.
Writing a test summary report for stakeholders.
Test Closure activities
Test closure activities collect data from completed test activities to consolidate experience, e.g. when a software system is released or a test project is completed.
Test closure activities include the following major tasks:
1) Checking which planned deliverables have been delivered, closing incident reports or raising change records for any that remain open, and documenting the acceptance of the system.
2) Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
3) Handing over testware to the maintenance organization.
4) Analyzing lessons learned for future releases and projects, and improving test maturity.