General Information About Testing


Transcript of General Information About Testing


Testing Course explaining the ISTQB Certified Tester Foundation Level Syllabus

1.1.1 Why is testing necessary – Why do we test?
(...the common answer is:) To find bugs! ...but consider also:
• To reduce the impact of failures at the client’s site (live defects) and ensure that they will not affect costs & profitability
• To decrease the rate of failures (increase the product’s reliability)
• To improve the quality of the product
• To ensure requirements are implemented fully & correctly
• To validate that the product is fit for its intended purpose
• To verify that required standards and legal requirements are met
• To maintain the company’s reputation
Testing provides the product’s measure of quality!
Can we test everything? Is exhaustive testing possible?
- No, sorry ...time & resources make it impractical! ...but, instead:
- We must understand the risk to the client’s business of the software not functioning correctly
• We must manage and reduce risk, carrying out a risk analysis of the application
• Prioritise tests to focus the time & resources on the main areas of risk

1.1.1 Why is testing necessary – Testing goals and objectives
The main goals of testing:
• Find defects
• Assess the level of quality of the software product and provide related information to the stakeholders
• Prevent defects
• Reduce the risk of operational incidents
• Increase the product quality
Different viewpoints and objectives:
• Unit & integration testing – find as many defects as possible
• Acceptance testing – confirm that the system works as specified and that the quality is good enough
• Testing metrics gathering – provide information to the project manager about the product quality and the risks involved
• Design tests early and review requirements – help prevent defects

1.1.2 Why is testing necessary – Testing glossary
A programmer (or analyst) can make an error (mistake), which produces a defect (fault, bug) in the program’s code. If such a defect in the code is executed, the system will fail to do what it should do (or will do something it should not do), causing a failure.
Error (mistake) = a human action that produces an incorrect result
Defect (bug) = a flaw that can cause the component or system to fail to perform its required function
Failure = deviation of the component or system from its expected delivery, service or result
Anomaly = any condition that deviates from expectations based on requirements specifications, design documents, user documentation, standards, or someone’s perceptions or expectations
Defect masking = an occurrence in which one defect prevents the detection of another

1.1.2 Why is testing necessary – Causes of errors
Defects are caused by human errors! Why? Because of:
• Time pressure – the more pressure we are under, the more likely we are to make mistakes
• Code complexity or new technology
• Too many system interactions
• Requirements not clearly defined, changed & not properly documented
• Wrong assumptions we make about missing bits of information!
• Poor communication
• Poor training

1.1.2 Why is testing necessary – Causes of software defects – Defect taxonomies
(Boris Beizer)
• Requirements (incorrect, logic, completeness, verifiability, documentation, changes)
• Features and functionality (correctness, missing case, domain and boundary, messages, exception mishandled)
• Structural (control flow, sequence, data processing)
• Data (definition, structure, access, handling)
• Implementation and coding
• Integration (internal and external interfaces)
• System (architecture, performance, recovery, partitioning, environment)
• Test definition and execution (test design, test execution, documentation, reporting)
(Cem Kaner (b1))
• User interface (functionality, communication, missing, performance, output)
• Error handling (prevention, detection, recovery)
• Boundary (numeric, loops)
• Calculation (wrong constants, wrong operation order, over & underflow)
• Initialization (data item, string, loop control)
• Control flow (stop, crash, loop, if-then-else, ...)
• Data handling (data type, parameter list, values)
• Race & load conditions (event sequence, no resources)
• Source and version control (old bugs reappear)
• Testing (fail to notice, fail to test, fail to report)

1.1.3 Why is testing necessary – The role of testing in the software life cycle
Testers cooperate with:
• Analysts – to review the specifications for completeness and correctness, and to ensure that they are testable
• Designers – to improve interface testability and usability
• Programmers – to review the code and assess structural flaws
• Project manager – to estimate, plan, develop test cases, perform tests and report bugs, and to assess the quality and risks
• Quality assurance staff – to provide defect metrics
Interactions with these project roles are very complex (see the RACI matrix: Responsible, Accountable, Consulted, Informed).

1.1.4 Why is testing necessary – What is quality?
Quality (ISO) = the totality of the characteristics of an entity that bear on its ability to satisfy stated or implied needs (there are many more definitions)
Testing means not only to Verify (the thing is done right) ...but also to Validate (the right thing is done)!
Software quality includes: reliability, usability, efficiency, maintainability and portability.
• RELIABILITY: the ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
• USABILITY: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
• EFFICIENCY: the capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
• MAINTAINABILITY: the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
• PORTABILITY: the ease with which the software product can be transferred from one hardware or software environment to another.

1.1.4 Why is testing necessary – Testing and quality
Testing does not inject quality into the product, but it measures the product’s level of quality.
Quality can be measured for:
• progress
• variance (planned versus actual)
Measuring product quality:
• Functional compliance – functional software requirements testing
• Non-functional requirements
• Test coverage criteria
• Defect count or defect trend criteria

1.1.4 Why is testing necessary – Quality attributes
The QUINT model (extended ISO model)

1.1.5 Why is testing necessary – How much testing is enough?
The five basic criteria often used to decide when to stop testing are:
• Previously defined coverage goals have been met
• The defect discovery rate has dropped below a previously defined threshold
• The cost of finding the "next" defect exceeds the expected loss from that defect
• The project team reaches consensus that it is appropriate to release the product
• The manager decides to deliver the product
All these criteria are risk based. It is important not to depend on only one stopping criterion.
Software Reliability Engineering can also help to determine when to stop testing, by taking into consideration aspects like failure intensity.

1.2 What is testing – Definition of testing
Testing = the process concerned with planning the necessary static and dynamic activities, and with the preparation and evaluation of software products and related deliverables, in order to:
• determine that they satisfy specified requirements
• demonstrate that they are fit for the intended use
• detect defects, and help and motivate the developers to fix them
• measure, assess and improve the quality of the software product
Testing should be performed throughout the whole software life cycle. There are two basic types of testing: execution based and non-execution based.
Other definitions:
• (IEEE) Testing = the process of analyzing a software item to detect the differences between existing and required conditions and to evaluate its features
• (Myers (b3)) Testing = the process of executing a program with the intent of finding errors
• (Craig & Jaskiel (b5)) Testing = a concurrent lifecycle process of engineering, using and maintaining test-ware in order to measure and improve the quality of the software being tested

1.2 What is testing – Testing “schools”
Analytic School – testing is rigorous, academic and technical
• Testing is a branch of CS/mathematics
• Testing techniques must have a logic-mathematical form
• Key question: Which techniques should we use?
• Requires precise and detailed specifications
Factory School – testing is a way to measure progress, with emphasis on cost and repeatable standards
• Testing must be managed & cost effective
• Testing validates the product & measures development progress
• Key questions: How can we measure whether we’re making progress? When will we be done?
• Requires clear boundaries between testing and other activities (start/stop criteria)
• Encourages standards (V-model), “best practices,” and certification
Quality School – emphasizes process & quality, acting as the gatekeeper
• Software quality requires discipline
• Testers may need to police developers to follow the rules
• Key question: Are we following a good process?
• Testing is a stepping stone to “process improvement”
Context-Driven School – emphasizes people, setting out to find the bugs that will be most important to stakeholders
• Software is created by people. People set the context
• Testing finds bugs; it is a skilled, mental activity
• Key question: What testing would be most valuable right now?
• Expect changes. Adapt testing plans based on test results
• Testing research requires empirical and psychological study

1.3 General testing principles
• Testing shows the presence of defects, but cannot prove that there are no more defects; testing can only reduce the probability of undiscovered defects
• Complete, exhaustive testing is impossible; a good strategy and risk management must be used
• Pareto rule (defect clustering): usually 20% of the modules contain 80% of the bugs
• Early testing: testing activities should start as soon as possible (including planning, design and reviews)
• Pesticide paradox: if the same set of tests is repeated over and over again, no new bugs will be found; the test cases should be reviewed and modified, and new test cases developed
• Context dependence: test design and execution is context dependent (desktop, web applications, real-time, ...)
• Verification and validation: discovering defects cannot help a product that is not fit for the users’ needs

1.3 General testing principles – heuristics of software testing
• Operability – the better it works, the more efficiently it can be tested
• Observability – what we see is what we test
• Controllability – the better we control the software, the more the testing process can be automated and optimized
• Decomposability – by controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing
• Simplicity – the less there is to test, the more quickly we can test it
• Stability – the fewer the changes, the fewer the disruptions to testing
• Understandability – the more information we have, the smarter we test
• Suitability – the more we know about the intended use of the software, the better we can organize our testing to find important bugs

1.4.1 Fundamental test process – phases
- Test Planning & Test Control
- Test Analysis & Design
- Test Implementation & Execution
- Evaluating Exit Criteria & Reporting
- Test Closure activities

1.4.1 Fundamental test process – planning & control
Planning:
1. Determine scope
• Study the project documents, the software life-cycle specifications used, and the product’s desired quality attributes
• Clarify test process expectations
2. Determine risks
• Choose a quality risk analysis method (e.g. FMEA)
• Document the list of risks with probability, impact and priority, and identify mitigation actions
3. Estimate testing effort, determine costs, develop the schedule
• Define the necessary roles
• Decompose the test project into phases and tasks (WBS)
• Schedule tasks, assign resources, set up dependencies
4. Refine the plan
• Select the test strategy (how to do it, which test types at which test levels)
• Select the metrics to be used for defect tracking, coverage and monitoring
• Define entry and exit criteria
Control:
• Measure and analyze results
• Monitor testing progress, coverage, exit criteria
• Assign or reallocate resources, update the test plan schedule
• Initiate corrective actions
• Make decisions

1.4.2 Fundamental test process – analysis & design
• Reviewing the test basis (such as requirements, architecture, design, interfaces)
• Identifying test conditions or test requirements and the required test data, based on analysis of the test items and the specification
• Designing the tests:
  o Choose test techniques
  o Identify test scenarios, pre-conditions, expected results, post-conditions
  o Identify possible test oracles
• Evaluating the testability of the requirements and system
• Designing the test environment set-up and identifying any required infrastructure and tools
(see Lee Copeland (b2))

1.4.2 Fundamental test process – what is a test oracle?
• The expected result (test outcome) must be defined at the test analysis stage
• Who will decide that (expected result = actual result) when the test is executed? The test oracle!
Test oracle = a source to determine the expected result; a principle or mechanism to recognize a problem. The test oracle can be:
• an existing system (the old version...)
• a document (specification, user manual)
• a competent client representative
...but never the source code itself!
Oracles in use = simplification of risk: do not assess ‘pass – fail’, but instead ‘problem – no problem’
Problem: oracles and automation – our ability to automate testing is fundamentally constrained by our ability to create and use oracles.
Possible issues:
• false alarms
• missed bugs
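To make the ‘problem – no problem’ idea concrete, here is a minimal Python sketch that uses an older, trusted implementation as the oracle for a new one. The functions old_tax and new_tax are hypothetical, invented for this illustration:

    # Oracle sketch: the legacy implementation supplies expected results.
    def old_tax(income):               # the oracle: the existing system
        return round(income * 0.16, 2)

    def new_tax(income):               # the implementation under test
        return round(income * 0.16, 2)

    def check(income):
        expected = old_tax(income)     # the oracle decides the expected result
        actual = new_tax(income)
        # report 'problem / no problem' instead of a hard pass/fail verdict
        return "no problem" if actual == expected else "problem"

    for value in (0, 100, 99999.99):
        print(value, check(value))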

1.4.3 Fundamental test process – implementation & execution
Test implementation:
• Develop and prioritize test cases; create test data, test harnesses and automation scripts
• Create test suites from the test cases
• Check the test environment
Test execution:
• Execute the test cases (suites), manually or automatically
• Use test oracles to determine whether each test passed or failed
• Log the outcome of test execution
• Report incidents (bugs) and try to discover whether they are caused by the test data, by the test procedure, or by an actual defect
• Expand test activities as necessary, according to the testing mission
(see Rex Black (b4))

1.4.3 Fundamental test process – prioritizing the test cases
Why prioritize the test cases?
• It is not possible to test everything; we must do our best in the time available
• Testing must be risk based, ensuring that the errors that do get through to the client’s production system will have the smallest possible impact and frequency of occurrence
• This means we must prioritise and focus testing on the priorities
What to watch?
• Severity of possible defects
• Probability of possible defects
• Visibility of possible defects
• Client requirement importance
• Business or technical criticality of a feature
• Frequency of changes applied to a module
• Scenario complexity

1.4.4 Fundamental test process – evaluating exit criteria and reporting
Evaluate exit criteria:
• Check test logs against the exit criteria specified in the test mission definition
• Assess whether more tests are needed
• Check whether the testing mission should be changed
Test reporting:
• Write the test summary report for the stakeholders’ use
The test summary report should include:
• Test case execution coverage (% executed)
• Test case pass/fail %
• Active bugs, sorted by severity
(see Rex Black (b4) & RUP – Test discipline (s5))

1.4.5 Fundamental test process – test closure
• Verify that the test deliverables have been delivered
• Check and close the remaining active bug reports
• Archive the test-ware and environment
• Hand over the test environment
• Analyze the identified test process problems (lessons learned)
• Implement improvements through an action plan
(see Rex Black (b4))

1.5 The psychology of testing
Testing is regarded as a ‘destructive’ activity (we run tests to make the software fail...)
A good tester:
• Should always have a critical approach
• Must keep attention to detail
• Must have analytical skills
• Should have good verbal and written communication skills
• Must analyse and work with incomplete facts
• Must learn quickly about the product being tested
• Should be able to quickly prioritise
• Should be a planned, organised kind of person
Also, he must have a good knowledge about:
• The customer’s business workflows
• The product architecture and interfaces
• The software project process
• Testing techniques and practices
(see also Brian Marick’s article)
Rex Black’s Top 10 professional errors:
• Fall in love with a tool
• Write bad bug reports
• Fail to define the mission
• Ignore a key stakeholder
• Deliver bad news badly
• Take sole responsibility for quality
• Be an un-appointed process cop
• Fail to fire someone who needs firing
• Forget you’re providing a service
• Ignore bad expectations

1.5 The psychology of testing
“The best tester isn’t the one who finds the most bugs; the best tester is the one who gets the most bugs fixed.” (Cem Kaner)
“Selling” bugs (see Cem Kaner (c1)):
• Motivate the programmer
• Demonstrate the bug’s effects
• Overcome objections
• Increase the defect description coverage (indicate detailed preconditions, behavior)
• Analyze the failure
• Produce a clear, short, unambiguous bug report
• Advocate error costs
Levels of independence of the testing team:
• Low – developers write and execute their own tests
• Medium – tests are written and executed by another developer
• High – tests are written and executed by an independent testing team (internal or external)
Testers’ Agile Manifesto (Jonathan Kohl):
• bug advocacy over bug counts
• testable software over exhaustive (requirements) docs
• measuring product success over measuring process success
• team collaboration over departmental independence

2.1.1 The V testing model

2.1.1 The V testing model – Verification & Validation
Verification = confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
Validation = confirmation by examination and through the provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification is the dominant activity in the unit, integration and system testing levels; validation is a mandatory activity in the acceptance testing level.

2.1.1 The W testing model – dynamic testing

2.1.1 The W testing model – static testing

2.1.2 Software development models – Waterfall

2.1.2 Software development models – Waterfall
Waterfall weaknesses:
• Linear: any attempt to go back two or more phases to correct a problem or deficiency results in major increases in cost and schedule
• Integration problems usually surface too late; previously undetected errors or design deficiencies will emerge, adding risk with little time to recover
• Users can’t see quality until the end; they can’t appreciate quality if the finished product can’t be seen
• Deliverables are created for each phase and are considered frozen; if the deliverable of a phase changes, which often happens, the project will suffer schedule problems
• The entire software product is worked on at one time; there is no way to partition the system for delivery of pieces

2.1.2 Software development models – Rapid Prototype Model

2.1.2 Software development models – Rapid Prototype Model
Rapid Prototype Model weaknesses:
• In the rush to create a working prototype, overall software quality or long-term maintainability may be overlooked
• Difficult problems tend to be pushed to the future, so the initial promise of the prototype is not met by subsequent products
• Developers may fall into a code-and-fix cycle, leading to expensive, unplanned prototype iterations
• Customers become frustrated without knowing the exact number of iterations that will be necessary
• Users may tend to add to the list of items to be prototyped until the scope of the project far exceeds the feasibility study


2.1.2 Software development models – Incremental Model

2.1.2 Software development models – Incremental Model
Incremental Model weaknesses:
• The definition of a complete, fully functional system must be done early in the life cycle to allow the increments to be defined
• The model does not allow for iterations within each increment
• Because some modules will be completed long before others, well-defined interfaces are required
• Requires good planning and design: management must take care to distribute the work, and the technical staff must watch dependencies

2.1.2 Software development models – Spiral Model

2.1.2 Software development models – Spiral Model
Spiral Model weaknesses:
• The model is complex, and developers, managers, and customers may find it too complicated to use
• Considerable risk assessment expertise is required
• It is hard to define objective, verifiable milestones that indicate readiness to proceed through the next iteration
• May be expensive – the time spent planning, resetting objectives, doing risk analysis, and prototyping may be excessive

2.1.2 Software development models – Rational Unified Process

2.1.3 Software development models – Testing life cycle
• For each software activity there must be a corresponding testing activity
• The objectives of the testing are specific to that “tested” activity
• The planning, analysis and design of a testing activity should be done during the corresponding development activity
• Reviews and inspections must be considered part of the testing activities

2.2.1 Test levels – Component testing
• Target: single software modules, components that are separately testable
• Also named Unit testing
• Access to the code being tested is mandatory; usually involves the programmer
• Test cases follow the low-level specification of the module
• May consist of:
  o Functional tests
  o Non-functional tests (stress test)
  o Structural tests (statement coverage, branch coverage)
• Can be automated (test-driven software development):
  o Develop the test code first
  o Then write the code to be tested
  o Execute until pass
• Good programming style (design-by-contract, respecting Demeter’s law) enhances the efficiency of unit testing
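A minimal sketch of the test-first cycle described above, using Python’s unittest; the leap-year example is invented here, not taken from the course:

    import unittest

    def is_leap(year):
        # written after the test below, and kept only as complex as needed
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        # the test code is developed first; it fails until is_leap() exists
        def test_typical_years(self):
            self.assertTrue(is_leap(2004))
            self.assertFalse(is_leap(1900))  # century not divisible by 400
            self.assertTrue(is_leap(2000))   # divisible by 400

    if __name__ == "__main__":
        unittest.main()   # execute until pass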

2.2.2 Test levels – Integration testing
• Target: the interfaces between components and interfaces with other parts of the system
• We focus on the data exchanged, not on the tested functionalities
• Understanding the product’s software architecture is critical
• The test strategy may be bottom-up, top-down or big-bang
• May consist of:
  o Functional tests
  o Non-functional tests (e.g. performance test)
  o Component integration testing (after component testing)
  o System integration testing (after system testing)

2.2.2 Test levels – Component integration testing
Component integration testing (done after component testing):
• Linking a few components to check that they communicate correctly
• Iteratively linking more components together
• Verify that data is exchanged between the components as required
• Increase the number of components, create & test subsystems, and finally the complete system
Drivers and stubs should be used when necessary:
• Driver: a software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system
• Stub: a skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it; it replaces a called component
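As an illustration of the driver and stub definitions above, here is a Python sketch; OrderProcessor and the payment gateway are hypothetical components invented for the example:

    class PaymentGatewayStub:
        """Stub: replaces the real called component with canned answers."""
        def charge(self, amount):
            return "OK" if amount <= 1000 else "DECLINED"

    class OrderProcessor:
        """Component under test; depends on a not-yet-integrated gateway."""
        def __init__(self, gateway):
            self.gateway = gateway
        def place_order(self, amount):
            return self.gateway.charge(amount) == "OK"

    # Driver: the test code that controls and calls the component under test
    if __name__ == "__main__":
        processor = OrderProcessor(PaymentGatewayStub())
        assert processor.place_order(500) is True
        assert processor.place_order(5000) is False
        print("component integration checks passed")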

2.2.2 Test levels – System integration testing
System integration testing (done after system or acceptance testing):
Testing the integration of systems and packages; testing interfaces to external organizations.
We check the data exchanged between our system and other, external systems.
Additional difficulties:
• Multiple platforms
• Communications between platforms
• Management of the environments
Approaches to accessing the external systems:
• Testing in a test environment
• Testing in a ‘clone’ of the production environment
• Testing in the real production environment

2.2.3 Test levels – System testing
System testing = the process of testing an integrated system to verify that it meets specified requirements
• Target: the whole product (system) as defined in the scope document
• Environment issues are critical
• May consist of:
  o Functional tests, based on the requirement specifications
  o Non-functional tests (e.g. load tests)
  o Structural tests (e.g. web page links, or menu item coverage)
• Black box testing techniques may be used (e.g. business rule decision table)
• The test strategy may be risk based
• Test coverage is monitored

2.2.4 Test levels – Acceptance testing
Acceptance testing = formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system
The main goals:
• Establish confidence in the system
• Is the product good enough to be delivered to the client?
The main focus is not to find defects, but to assess the readiness for deployment.
• It is not necessarily the final testing level; a final system integration testing session can be executed after the acceptance tests
• May also be executed after component testing (component usability acceptance)
• Usually involves client representatives
Typical forms:
• User acceptance: business-aware users verify the main features
• Operational acceptance testing: backup-restore, security, maintenance
• Alpha and Beta testing: performed by customers or potential users
  o Alpha: at the developer’s site
  o Beta: at the customer’s site

2.3.1 Test types – Functional testing
Target: test the functionalities (features) of a product
• Specification based:
  o uses test cases derived from the specifications (use cases)
  o business process based, using business scenarios
• Focused on checking the system against the specifications
• Can be performed at all test levels
• Considers the external behavior of the system
• Black box design techniques will be used
• Security testing is part of functional testing, related to the detection of threats

2.3.2 Test types – Non-functional testing
Non-functional testing = testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability
Targeted at the product’s quality attributes:
• Performance testing
• Load testing (how much load can be handled by the system?)
• Stress testing (evaluate system behavior at and beyond its limits)
• Usability testing
• Reliability testing
• Portability testing
• Maintainability testing

2.3.2 Test types – Non-functional testing – Usability
Usability testing = used to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions
• People selected from the potential users may be involved to study how they use the system
• A quick and focused beta test may be a cheap way of doing usability testing
• There is no simple way to examine how people will use the system
• Easy to understand is not the same as easy to learn, easy to use, or easy to operate

2.3.2 Test types – Non-functional testing – Installability
Installability testing = the process of testing the installability of a software product
• Does the installation work?
• How easy is it to install the system?
• Does installation affect other software?
• Does the environment affect the product?
• Does it uninstall correctly?

2.3.2 Test types – Non-functional testing – Load, stress, performance, volume testing
Load test = a test type concerned with measuring the behavior of a component or system with increasing load (e.g. number of parallel users and/or number of transactions) to determine what load can be handled by the component or system
Stress test = testing conducted to evaluate a system or component at or beyond the limits of its specified requirements
Performance test = the process of testing to determine the performance of a software product. Performance can be measured by watching:
• Response time
• Throughput
• Resource utilization
Spike test = keeping the system, periodically and for short amounts of time, beyond its specified limits
Endurance test = a load test performed over a long time interval (week(s))
Volume test = testing where the system is subjected to large volumes of data

2.3.3 Test types – Structural testing
• Targeted to test:
  o internal structure (component)
  o architecture (system)
• Uses only white box design techniques
• Can be performed at all test levels
• Also used to help measure coverage (% of items covered by tests)
• Tool support is critical

2.3.4 Test types – Confirmation & regression testing
Confirmation testing = re-testing of a module or product to confirm that a previously detected defect has been fixed
• Implies the use of a bug tracking tool
• Confirmation testing is not the same as debugging (debugging is a development activity, not a testing activity)
Regression testing = re-testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered as a result of the changes made. It is performed when the software or its environment is changed
• Can be performed at all test levels
• Can be automated (for cost and schedule reasons)

2.4 Maintenance testing
Maintenance testing = testing the changes to an operational system, or the impact of a changed environment on an operational system
Done on an existing operational system, triggered by modification, retirement or migration of the software.
Triggers include:
• Release-based changes
• Corrective changes
• Database upgrades
Regression testing is also involved.
Includes:
Impact analysis = determining how the existing system may be affected by changes (used to help decide how much regression testing to do)

3.1 Reviews and the testing process
Static testing = testing of a component or system at the specification or implementation level without execution of the software, e.g. reviews (manual) or static code analysis (automated)
Reviews – why review?
• To identify errors as soon as possible in the development lifecycle
• Reviews offer the chance to find omissions and errors in the software specifications
The target of a review is a software deliverable:
• Specification
• Use case
• Design
• Code
• Test case
• Manual

3.1 Reviews and the testing process
When to review?
• As soon as a software artifact is produced, before it is used as the basis for the next step in development
Benefits include:
• Early defect detection
• Reduced testing costs and time
• Can find omissions
Risks:
• If misused, reviews can lead to friction between project team members; the errors & omissions found should be regarded as a positive issue, and the author should not take them personally
• No follow-up is made to ensure that corrections have been made
• Witch-hunts when things are going wrong

3.2.1 Phases of a formal review
Formal review phases:
• Planning: define scope, select participants, allocate roles, define entry & exit criteria
• Kick-off: distribute documents, explain objectives and process, check entry criteria
• Individual preparation: each participant studies the documents, takes notes, raises questions and comments
• Review meeting: participants discuss and log defects, make recommendations
• Rework: fixing defects (by the author)
• Follow-up: verify again, gather metrics, check exit criteria

3.2.2 Roles in a formal review
Formal reviews can use the following predefined roles:
• Manager: schedules the review, monitors entry and exit criteria
• Moderator: distributes the documents, leads the discussion, mediates conflicting opinions
• Author: owner of the deliverable to be reviewed
• Reviewers: technical domain experts; identify and note findings
• Scribe: records and documents the discussions during the meeting

3.2.3 Types of review
Informal review
• A peer or team lead reviews a software deliverable
• Without applying a formal process
• Documentation of the review is optional
• Quick way of finding omissions and defects
• Amplitude and depth of the review depend on the reviewer
• Main purpose: inexpensive way to get some benefit
Technical review
• Formal defect detection process
• Main meeting is prepared
• Team includes peers and technical domain experts
• May vary in practice from quite informal to very formal
• Led by a moderator, who is not the author
• Checklists may be used, reports can be prepared
• Main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical problems and check conformance to specifications and standards
Walkthrough
• The author of the deliverable leads the review activity, others participate
• Preparation of the reviewers is optional
• Scenario based
• The sessions are open-ended
• Can be informal but also formal
• Main purposes: learning, gaining understanding, defect finding
Inspection
• Formal process, based on checklists, entry and exit criteria
• Dedicated, precise roles
• Led by the moderator
• Metrics may be used in the assessment
• Reports and lists of findings are mandatory
• Follow-up process
• Main purpose: find defects

3.2.4 Success factors for reviews
• A clear objective is set
• The appropriate experts are involved
• Issues are identified, not fixed on the spot
• Adequate psychological handling (the author is not punished for the defects found)
• The level of formalism is adapted to the concrete situation
• Minimal preparation and training
• Management encourages learning and process improvement
• Time-boxing is used to determine the time allocated to each part of the document under review
• Effective, specialized checklists are used (requirements, test cases)

3.3 Static analysis by tools
• Performed without executing the examined software, but assisted by tools
• The approach may be data flow or control flow based
Benefits:
• early defect detection
• early warnings about unwanted code complexity
• detects missing links
• improved maintainability of code and design
Typical defects discovered:
• reference to an un-initialized variable
• variables that are never used
• unreachable code
• programming standards violations
• security vulnerabilities
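The typical findings listed above can be illustrated with a short, deliberately flawed Python function; a static analysis tool (a linter such as pylint, for instance) would flag these without executing the code:

    def discount(price, is_member):
        unused_rate = 0.2            # defect: variable that is never used
        if is_member:
            rate = 0.1
        # defect: if is_member is False, 'rate' is un-initialized below
        total = price * (1 - rate)   # possible reference before assignment
        return total
        print("done")                # defect: unreachable code after return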

4. Test design techniques – glossary
• Test condition = an item, event or attribute of a module or system that could be verified (e.g. feature, structure element, transaction, quality attribute)
• Test data = data that affects, or is affected by, the execution of the specific module
• Test case [IEEE] = a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement
• Test case specification [IEEE] = a document specifying a set of test cases for a test condition
• Test procedure (suite) specification = a document specifying a sequence of actions for the execution of a series of test cases

4.1 Test design – test development process
1. Identify test conditions
Inputs:
• Field level
• Group level
• Validation of inputs at the following levels of aggregation:
  o Field / action / message
  o Record / row / window
  o File / table / screen
  o Database
Capability related:
• Trigger conditions
• Constraints or limits
• Interfaces to other products
• Product states
• Behavior rules
Architectural design related:
• Invocation paths
• Communication paths
• Internal data conditions
• Design states
• Exceptions
2. Develop test cases
• Use cases are used as input
• Test cases will cover all possible paths of the execution flow graph
• Test data should be specified if necessary
• Priorities of test cases should be assigned
• A traceability matrix (use cases x test cases) should be maintained
• Traceability from test condition to the specifications (requirements) is a must
• Risk analysis is a best practice
3. Develop test procedures
• Group test cases into execution schedules
• Factors to be considered:
  a. Prioritization
  b. Logical dependencies
  c. Regression tests

4.2 Test design – categories of test design techniques
• Black box: no knowledge of the internal structure is used
• White box: based on analysis of the internal structure
• Static: without running the software; exercised on specific project artifacts
Each black box or white box test technique has:
• A method (how to do it)
• A test case design approach (how to create test cases using the approach)
• A measurement technique – coverage % (except black box syntax testing)
Another taxonomy:
• Specification based: test cases are built from the specifications of the module
• Structure based: information about how the module is constructed (design, code) is used to derive the test cases
• Experience based: the tester’s knowledge of the specific domain and of the likely defects is used

4.3.1 Black box techniques – equivalence partitioning
• To minimize testing, partition input (output) values into groups of equivalent values (equivalent from the test outcome perspective)
• Select a value from each equivalence class as a representative value
If an input is a continuous range of values, then there is typically one class of valid values and two classes of invalid values, one below the valid class and one above it.
Example: the rule for hiring a person according to age is:
0 – 15 = do not hire
16 – 17 = part time
18 – 54 = full time
55 – 99 = do not hire
Which are the valid equivalence classes? And the invalid ones? Give examples of representative values!
(other examples)
(see Lee Copeland b2 chap.3, Cem Kaner c1, Paul Jorgensen b7 chap.2.2, 6.3)
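A minimal Python sketch of the partitioning for this example, with one representative value per class; the classification function is an assumed implementation of the hiring rule above:

    def hiring_category(age):
        if age < 0 or age > 99:
            return "invalid"
        if age <= 15 or age >= 55:
            return "do not hire"
        if age <= 17:
            return "part time"
        return "full time"

    # one representative value per equivalence class (valid and invalid)
    representatives = {
        -5: "invalid",        # invalid class below the defined range
        7: "do not hire",     # valid class 0-15
        16: "part time",      # valid class 16-17
        30: "full time",      # valid class 18-54
        60: "do not hire",    # valid class 55-99
        120: "invalid",       # invalid class above the defined range
    }

    for age, expected in representatives.items():
        assert hiring_category(age) == expected
    print("all equivalence-class representatives pass")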

4.3.1 Black box techniques – all-pairs testing
In practice, there are situations when a great number of combinations must be tested. For example: a web site must operate correctly with different browsers – Internet Explorer 5.0, 5.5, and 6.0, Netscape 6.0, 6.1, and 7.0, Mozilla 1.1, and Opera 7; using different plug-ins – RealPlayer, MediaPlayer, or none; running on different client operating systems – Windows 95, 98, ME, NT, 2000, and XP; receiving pages from different servers – IIS, Apache, and WebLogic; running on different server operating systems – Windows NT, 2000, and Linux.
Test environment combinations:
• 8 browsers
• 3 plug-ins
• 6 client operating systems
• 3 servers
• 3 server OS
= 1,296 combinations!
All-pairs testing is the solution: it tests a significant subset – all pairs of variable values.
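A small sketch of the idea on a reduced, invented configuration space: four tests cover every pair of values that would otherwise need 2 x 2 x 2 = 8 full combinations, and the checker verifies the pair coverage:

    from itertools import combinations, product

    factors = {
        "browser": ["IE", "Firefox"],
        "os": ["Windows", "Linux"],
        "server": ["Apache", "IIS"],
    }

    # four tests chosen so that every value pair appears at least once
    tests = [
        {"browser": "IE", "os": "Windows", "server": "Apache"},
        {"browser": "IE", "os": "Linux", "server": "IIS"},
        {"browser": "Firefox", "os": "Windows", "server": "IIS"},
        {"browser": "Firefox", "os": "Linux", "server": "Apache"},
    ]

    def all_pairs_covered(factors, tests):
        for f1, f2 in combinations(factors, 2):
            for v1, v2 in product(factors[f1], factors[f2]):
                if not any(t[f1] == v1 and t[f2] == v2 for t in tests):
                    return False
        return True

    assert all_pairs_covered(factors, tests)
    print("4 pairwise tests replace", len(list(product(*factors.values()))),
          "full combinations")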

4.3.2 Black box techniques – boundary value analysis
Boundaries = the edges of the equivalence classes.
Boundary values = values at the edge and nearest to the edge.
The steps for using boundary values:
• First, identify the equivalence classes.
• Second, identify the boundaries of each equivalence class.
• Third, create test cases for each boundary value by choosing one point on the boundary, one point just below the boundary, and one point just above the boundary. "Below" and "above" are relative terms and depend on the data value’s units.
For the previous example, the boundary values are {-1,0,1}, {14,15,16}, {15,16,17}, {16,17,18}, {17,18,19}, {54,55,56}, {98,99,100}
or, omitting duplicate values: {-1,0,1,14,15,16,17,18,19,54,55,56,98,99,100}
(other examples)
(see Lee Copeland b2 chap.4, Paul Jorgensen b7 chap.5)
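A minimal sketch that mechanically generates the boundary values (one point on, one just below and one just above each class edge) for the hiring example; note that it also yields 53, the point just below the upper edge of the 18–54 class:

    def boundary_values(class_edges):
        """Return the points on, just below and just above each edge."""
        points = set()
        for edge in class_edges:
            points.update({edge - 1, edge, edge + 1})
        return sorted(points)

    # edges of the classes 0-15, 16-17, 18-54 and 55-99
    edges = [0, 15, 16, 17, 18, 54, 55, 99]
    print(boundary_values(edges))
    # -> [-1, 0, 1, 14, 15, 16, 17, 18, 19, 53, 54, 55, 56, 98, 99, 100]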

4.3.3 Black box techniques – decision tables
• Conditions represent the various input conditions
• Actions are the actions taken, depending on the various combinations of input conditions
• Each rule defines a unique combination of conditions that results in the execution of the actions associated with that rule
• Actions do not depend on the condition evaluation order, but only on the condition values
• Actions do not depend on any previous input conditions or system state
(see Lee Copeland b2 chap.5, Paul Jorgensen b7 chap.7)

4.3.3 Black box techniques – decision tables – example
Are a, b, c the edges of a triangle? (However, some additional test cases are needed.)
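A minimal Python sketch of this decision table, assuming the classic triangle conditions (all sides positive, and each pair of sides summing to more than the third); each test case exercises one rule column:

    def triangle_decision(a, b, c):
        c1 = a > 0 and b > 0 and c > 0   # condition 1: all sides positive
        c2 = a + b > c                   # conditions 2-4: each pair of
        c3 = b + c > a                   # sides sums to more than the
        c4 = a + c > b                   # remaining side
        # single action column: "triangle" only when all conditions hold
        return "triangle" if (c1 and c2 and c3 and c4) else "not a triangle"

    # one test case per rule (each failing case falsifies one condition)
    assert triangle_decision(3, 4, 5) == "triangle"
    assert triangle_decision(0, 4, 5) == "not a triangle"   # C1 false
    assert triangle_decision(1, 2, 5) == "not a triangle"   # C2 false
    assert triangle_decision(5, 1, 2) == "not a triangle"   # C3 false
    assert triangle_decision(1, 5, 2) == "not a triangle"   # C4 false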

4.3.4 Black box techniques – state transition tables
Allow the tester to interpret the system in terms of:
• States
• Transitions between states
• Events that trigger transitions
• Actions resulting from the transitions
A transition table is used.
(see Lee Copeland b2, chap.7)

4.3.4 Black box techniques – state transition tables – example
Ticket buying – web application
Exercise: fill in the transition table!
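A minimal sketch of a transition table for a hypothetical ticket-buying flow; the states, events and actions are invented here, not read from the course's diagram. A test case is simply a path through the table:

    # (state, event) -> (next state, action)
    transitions = {
        ("Browsing", "select_ticket"): ("Cart", "add ticket"),
        ("Cart", "checkout"):          ("Payment", "request card data"),
        ("Payment", "pay_ok"):         ("Confirmed", "issue ticket"),
        ("Payment", "pay_fail"):       ("Cart", "show error"),
        ("Confirmed", "new_purchase"): ("Browsing", "reset session"),
    }

    def run(state, events):
        for event in events:
            # an undefined (state, event) pair raises KeyError: a test failure
            state, action = transitions[(state, event)]
            print(f"{event:14} -> {state:10} ({action})")
        return state

    assert run("Browsing", ["select_ticket", "checkout", "pay_ok"]) == "Confirmed"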

4.3.5 Black box techniques – requirements based testing
Best practices:
• Validate requirements (what) against objectives (why)
• Apply use cases against requirements
• Perform ambiguity reviews
• Involve domain experts in requirements reviews
• Create cause-effect diagrams
• Check the logical consistency of test scenarios
• Validate test scenarios with domain experts and users
• Walk through scenarios, comparing with the design documents
• Walk through scenarios, comparing with the code

4.3.5 Black box techniques – scenario testing
Attributes of a good scenario:
• Is based on a real story
• Is motivating for the tester
• Is credible
• Involves a sufficiently complex use of environment and data
• Is easy to evaluate (no need for an external oracle)
How to create good test scenarios:
• Write down real-life stories
• List possible users; analyze their interests and objectives
• Consider also inexperienced or hostile users
• List system benefits and create paths to access those features
• Watch users using old versions of the system or an analogous system
• Study complaints about other analogous systems

4.3.5 Black box techniques – use case testing
Generating the test cases from the use cases
Steps:
1. Identify the use-case scenarios.
2. For each scenario, identify one or more test cases.
3. For each test case, identify the conditions that will cause it to execute.
4. Complete the test case by adding data values.
(see example)
Most common test case mistakes:
1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether tester or system does action
6. Unclear what is a pass or fail result
7. Failure to clean up

4.3.5 Black box techniques – Syntax testing
Syntax testing = uses a model of the formally defined syntax of the inputs to a component.
The syntax is represented as a number of rules, each of which defines the possible means of production of a symbol in terms of sequences of, iterations of, or selections between other symbols.
Here is a representation of the syntax of a floating point number, float, in Backus-Naur Form (BNF):
    float = int "e" int.
    int = ["+"|"-"] nat.
    nat = {dig}.
    dig = "0"|"1"|"2"|"3"|"4"|"5"|"6"|"7"|"8"|"9".
Syntax testing is the only black box technique without a coverage metric assigned.
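A minimal sketch of syntax testing driven by this BNF: a regular expression encodes the grammar (assuming {dig} means one or more digits), valid inputs are derived from the productions, and each invalid input breaks exactly one production:

    import re

    # float = int "e" int.   int = ["+"|"-"] nat.   nat = {dig}.
    FLOAT = re.compile(r"^[+-]?[0-9]+e[+-]?[0-9]+$")

    valid = ["1e2", "+12e-3", "-0e+0"]   # derived from the grammar
    invalid = ["e2",     # missing leading int
               "1x2",    # wrong terminal instead of "e"
               "1e",     # missing trailing int
               "1.5e2"]  # "." is not in the dig alphabet

    for s in valid:
        assert FLOAT.match(s), s
    for s in invalid:
        assert not FLOAT.match(s), s
    print("syntax tests pass")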

4.4 White box techniques – Control flow
Modules of code are converted to graphs, the paths through the graphs are analyzed, and test cases are created from that analysis. There are different levels of coverage.
• A process block is a sequence of program statements that execute sequentially. No entry into the block is permitted except at the beginning, and no exit from the block is permitted except at the end; once the block is initiated, every statement within it will be executed sequentially.
• A decision point is a point in the module at which the control flow can change. Most decision points are binary and are implemented by if-then-else statements; multi-way decision points are implemented by case statements. They are represented by a bubble with one entry and multiple exits.
• A junction point is a point at which control flows join together.
(see Lee Copeland b2, chap.10)

4.4.1 White box techniques – statement coverage
Statement coverage = executed statements / total executable statements
Example:
    a;
    if (b) {
        c;
    }
    d;
In case b is TRUE, executing the code will result in 100% statement coverage.

4.4.1 White box techniques – statement coverage – exercise
Given the code:
    a;
    if (x) {
        b;
        if (y) {
            c;
        } else {
            d;
        }
    } else {
        e;
    }
The statements executed for each combination of x and y:
    x:        T      T      F    F
    y:        T      F      T    F
    executed: a,b,c  a,b,d  a,e  a,e
How many test cases are needed to get 100% statement coverage?
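A minimal Python sketch of the exercise with a crude coverage counter. Two test cases, e.g. (x=T, y=T) and (x=T, y=F), still leave statement e unexecuted, so a third case with x=F is needed for 100% statement coverage:

    executed = set()

    def module(x, y):
        executed.add("a")
        if x:
            executed.add("b")
            if y:
                executed.add("c")
            else:
                executed.add("d")
        else:
            executed.add("e")

    for x, y in [(True, True), (True, False), (False, True)]:
        module(x, y)

    coverage = len(executed) / 5                   # 5 executable statements
    print(f"statement coverage: {coverage:.0%}")   # -> 100%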


4.4.2 White box techniques – branch & decision coverage – glossary
Branch = a conditional transfer of control from a statement to any other statement, OR an unconditional transfer of control from a statement to any other statement except the next statement
Branch coverage = executed branches / total branches
Decision coverage = executed decision outcomes / total decision outcomes
For components with one entry point, 100% branch coverage is equivalent to 100% decision coverage.

4.4.2 White box techniques – branch & decision coverage – example
Decisions = B2, B3, B5, each with 2 outcomes = 3 * 2 = 6
Branches = (how many arrows?) = 10
Q1. What are the decision and branch coverage for (B1->B2->B9)?
Q2. And for (B1->B2->B3->B4->B8->B2->B3->B5->B6->B8->B2->B3->B5->B7)?
Answers: Q1: 1/6 and 2/10; Q2: 5/6 and 9/10

4.4.2 White box techniques – LCSAJ coverage
LCSAJ = Linear Code Sequence And Jump
Defined by a triple, conventionally identified by line numbers in a source code listing:
• the start of the linear code sequence
• the end of the linear code sequence
• the target line to which control flow is transferred
LCSAJ coverage = executed LCSAJ sequences / total number of LCSAJ sequences

4.4.3 White box techniques – data flow coverage
Just as one would not feel confident about a program without executing every statement in it as part of some test, one should not feel confident about a program without having seen the effect of using the value produced by each and every computation.
Data flow coverages:
• All defs = number of exercised definition-use pairs / number of variable definitions
• All c(omputation)-uses = number of exercised definition–c-use pairs / number of definition–c-use pairs
• All p(redicate)-uses = number of exercised definition–p-use pairs / number of definition–p-use pairs
• All uses = number of exercised definition-use pairs / number of definition-use pairs
• Branch condition = Boolean operand values executed / total Boolean operand values
• Branch condition combination = Boolean operand value combinations executed / total Boolean operand value combinations
(see Lee Copeland b2, chap.11)
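A minimal sketch of definition-use pairs for a single variable in an invented function; exercising both branches achieves all-uses coverage for the variable total:

    def price_with_tax(amount, member):
        total = amount * 1.2      # d1: definition of total
        if member:                # p-use of member (in a predicate)
            total = total * 0.9   # c-use of total, then d2: redefinition
        return round(total, 2)    # c-use of total

    # all-uses coverage for 'total' needs both branches:
    #   member=True  exercises d1 -> c-use in the branch, d2 -> c-use in return
    #   member=False exercises d1 -> c-use in return
    assert price_with_tax(100, False) == 120.0
    assert price_with_tax(100, True) == 108.0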

4.5 Exploratory testing
• Exploratory testing = concurrent test design, test execution, test logging and learning, based on a quick test charter containing objectives, and executed within delimited time intervals
• Uses a structured approach to error guessing, based on experience, available defect data and domain expertise
• On-the-fly design of tests that attack these potential errors
• Skill, intuition and previous experience are vital
• The test strategy is built around:
  o The project environment
  o The quality criteria defined for the project
  o Elements of the product
  o Risk factors

4.6 Choosing test techniques
Factors used to choose:
• Product or system type
• Standards
• Product’s requirements
• Available documentation
• Determined risks
• Schedule constraints
• Cost constraints
• Used software development life cycle model
• Tester’s skills and (domain) experience
(additional materials: Unit Test design, exercises)

5.1.1 Test organization & independence
Options: independent team or not?
Pluses:
• Testers are not influenced by the other project members
• Can act as ‘the customer’s voice’
• More objectivity in evaluating the product’s quality issues
Minuses:
• Risk of isolation from the development team
• Communication issues
• Developers can lose the ‘quality ownership’ attribute

5.1.2 Tasks of the test leader
• Plans and estimates the test effort, collaborates with the project manager
• Elaborates the test strategy
• Initiates test specification, implementation and execution
• Sets up configuration management of the test environment & deliverables
• Monitors and controls the execution of tests
• Chooses suitable test metrics
• Decides if, and to what degree, to automate the tests
• Selects tools
• Schedules tests
• Prepares summary test reports
• Evaluates test measurements

5.1.2 Tasks of the tester
Test Analyst:
• Identify test objectives (targets)
• Review product requirements and software specifications
• Define the test approach (procedure)
• Assess testability
• Verify requirements-to-test-cases traceability
• Assess test risks
Test Designer:
• Review test plans and test cases
• Elaborate test case lists and write the main test cases
• Define testing environment details
• Define test scenario details
• Structure the test implementation
Tester:
• Write test cases
• Review test cases (peer review)
• Implement and execute tests
• Compare test results with the test oracle
• Gather test measures
• Record defects, prepare defect reports

5.2.1-5.2.2-5.2.3 – Test planning
Test plan = a document describing the scope, approach, resources and schedule of the intended test activities.
It identifies, amongst others: test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and test measurement techniques to be used, the rationale for their choice, and any risks requiring contingency planning.
It is a record of the test planning process.
IEEE 829: Test plan contents:
• Test plan identifier
• Introduction
• Test items
• Features to be tested
• Features not to be tested
• Approach
• Test deliverables
• Testing tasks
• Environment
• Responsibilities
• Staffing and training needs
• Schedules
• Item pass/fail criteria
• Suspension criteria & resumption criteria
• Risks and contingencies
• Approvals

5.2.1-5.2.2-5.2.3 – Test planning
• Determine scope:
  o Study project documents, the software life-cycle specifications used, and the product’s desired quality attributes
  o Identify and communicate with other stakeholders
  o Clarify test process expectations
• Determine risks:
  o Choose a quality risk analysis method (e.g. FMEA)
  o Document the list of risks with probability, impact and priority, and identify mitigation actions
• Estimate testing effort, determine costs, develop the schedule:
  o Define the necessary roles
  o Decompose the test project into phases and tasks (WBS)
  o Schedule tasks, assign resources, set up dependencies
  o Develop a budget
  o Obtain commitment to the plan from the stakeholders
• Refine the plan:
  o Define the roles’ detailed responsibilities
  o Select the test strategy and test levels
  o Choose testing techniques (white and/or black box)
  o Select the metrics to be used for defect tracking, coverage, monitoring
  o Define entry and exit criteria
• Test strategy issues (alternatives):
  o Preventive approach
  o Reactive approach
  o Risk-based
  o Model (standard) based
• Exit criteria:
  o Coverage measures
  o Defect density or trend measures
  o Cost
  o Residual risk estimation
  o Time or market based

5.2.4 – Test estimation
Two approaches:
• based on metrics (historical data)
• made by domain experts
Testing effort depends on:
• product characteristics (complexity, specification)
• development process (team skills, tools, time factors)
• defects discovered and the rework involved
• failure risk of the product (likelihood, impact)
Time for confirmation testing and regression testing must be considered too.

5.2.5 – Test strategies
Test approach (test strategy) = the chosen approaches and decisions made that follow from the test project’s and test team’s goal or mission.
The mission is typically effective and efficient testing; the strategies are the general policies, rules, and principles that support this mission. Test tactics are the specific policies, techniques, processes, and the way testing is done.
One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
• Preventative approaches, where tests are designed as early as possible
• Reactive approaches, where test design comes after the software or system has been produced
Or, another taxonomy:
• Analytical – such as risk-based testing
• Model-based – such as stochastic testing
• Methodical – such as failure-based, experience-based
• Process- or standard-compliant
• Dynamic and heuristic – such as exploratory testing
• Consultative
• Regression-averse

5.3 – Test progress monitoring, reporting & control
Monitoring – test metrics used:
• Test cases (% passed / % failed)
• Defects (found, fixed/found, density, trends)
• Test coverage (% of test cases executed)
Reporting:
• Defects remaining
• Coverage metrics
• Identified risks
Control: identify and implement corrective actions for:
• The testing process
• Other software life-cycle activities
Possible corrective actions:
• Assign extra resources
• Re-allocate resources
• Adjust the test schedule
• Arrange for extra test environments
• Refine the completion criteria

5.4 – Configuration management
IEEE definition of configuration management: a discipline applying technical and administrative direction and surveillance to:
• identify and document the functional and physical characteristics of a configuration item,
• control changes to those characteristics,
• record and report change processing and implementation status, and
• verify compliance with specified requirements
Configuration management:
• identifies the current configuration (hardware, software) in the life cycle of the system, together with any changes that are in the course of being implemented
• provides traceability of changes through the lifecycle of the system
• permits the reconstruction of a system whenever necessary
Only persistent objects must be subject to configuration management; therefore, the data processed by a system cannot be placed under configuration management.
Related to version control and change control.

5.4 – Configuration management
Configuration management activities:
• Configuration identification = selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation
• Configuration control = evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification
• Status accounting = recording and reporting of the information needed to manage a configuration effectively, including:
  o a listing of the approved configuration identification,
  o the status of proposed changes to the configuration, and
  o the implementation status of the approved changes
• Configuration auditing = the function of checking that the software product matches the previously identified configuration items

5.4 – Configuration management
In testing, Configuration Management must:
•Establish and maintain the integrity of the testing deliverables (test plans, test cases, documentation) through the project life cycle
•Identify all test-ware items
•Set and maintain the version of these items
•Track the changes of these items
•Relate test-ware items to other software development items in order to maintain traceability
•Reference clearly all necessary documents in the test plans and test cases

5.5 – Risk & Testing
Risk = a factor that could result in future negative consequences, expressed as likelihood and impact.
•Project risks (supplier-related, organizational, technical)
•Product risks (defects delivered, poor quality attributes such as reliability, usability, performance)
The risks identified can be used to:

Page 30: General Information About Testing

•Define the test strategy and techniques to be used
•Define the extent and depth of testing
•Prioritize test cases and procedures (find important defects early; see the sketch after this list)
•Determine if review or training activities could help
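As a concrete illustration of "expressed as likelihood and impact", the sketch below scores each test area with a simple likelihood × impact product and orders the test cases so the riskiest areas are exercised first. The 1–5 scales and the sample areas are this editor's assumptions, not part of the syllabus.

```python
# Hypothetical risk-based prioritization: risk exposure = likelihood x impact.
# The 1-5 scales and the example areas are illustrative assumptions.

areas = [
    # (area, likelihood of failure 1-5, business impact 1-5)
    ("payment processing", 4, 5),
    ("report layout",      3, 2),
    ("user login",         2, 5),
    ("help pages",         1, 1),
]

# Sort by risk exposure, highest first, so testing effort goes to the main risks.
for area, likelihood, impact in sorted(areas, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{area:20s} exposure = {likelihood * impact}")
```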

5.6 – Incident management
Incident = any significant, unplanned event that occurs during testing and requires subsequent investigation and/or correction:
•The system does not function as expected
•Actual results differ from expected results
•Required features are missing
Incident reports can be raised against:
•documents placed under the review process
•product defects related to functional & non-functional requirements
•documentation anomalies (manuals, on-line help)
•test-ware defects (errors in test cases or test procedures)
Incident reports raised against product defects are also called bug reports.

5.6 – Incident management
Recommended bug report format:
•Defect ID
•Component name and Build version
•Reported by and Date
•Error type
•Severity
•Priority
•Summary and detailed description
•Attached documents
Bug statuses (exercise):
•Issued – has just been reported
•Opened – a programmer is working to solve it
•Fixed – the programmer thinks it is repaired
•Not solved – the tester retested, but the bug is not solved
•Deferred – the programmer or PM decided to postpone the decision
•Not-a-bug – the programmer or tester discovered that it is not a defect
•Closed – the bug is solved and verified
(A sketch of this status life cycle follows.)
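The status list above is essentially a small state machine. The sketch below shows one way to encode the report fields and the status life cycle; the field names and the allowed transitions are this editor's assumptions, and real bug trackers differ.

```python
# Sketch of the bug report fields and status life cycle described above.
# Field names and the allowed transitions are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    ISSUED = "Issued"          # just been reported
    OPENED = "Opened"          # a programmer is working on it
    FIXED = "Fixed"            # the programmer thinks it is repaired
    NOT_SOLVED = "Not solved"  # retested, but the bug remains
    DEFERRED = "Deferred"      # decision postponed
    NOT_A_BUG = "Not-a-bug"    # turned out not to be a defect
    CLOSED = "Closed"          # solved and verified

# One plausible set of allowed transitions between the statuses above.
TRANSITIONS = {
    Status.ISSUED: {Status.OPENED, Status.DEFERRED, Status.NOT_A_BUG},
    Status.OPENED: {Status.FIXED, Status.DEFERRED, Status.NOT_A_BUG},
    Status.FIXED: {Status.CLOSED, Status.NOT_SOLVED},
    Status.NOT_SOLVED: {Status.OPENED},
    Status.DEFERRED: {Status.OPENED},
    Status.NOT_A_BUG: set(),
    Status.CLOSED: set(),
}

@dataclass
class BugReport:
    defect_id: str
    component: str
    build_version: str
    reported_by: str
    error_type: str
    severity: str
    priority: str
    summary: str
    status: Status = Status.ISSUED

    def move_to(self, new_status: Status) -> None:
        """Advance the report, rejecting transitions not in the life cycle."""
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition: {self.status} -> {new_status}")
        self.status = new_status
```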

6.1.1 – Test tool classification
•Management of testing: test management, requirements management, bug tracking, configuration management
•Static testing: review support, static analysis, modeling
•Test specification: test design, test data preparation
•Test execution: record and play, unit test framework, result comparators, coverage measurement, security
•Performance and monitoring: dynamic analysis, load and stress testing, monitoring

Page 31: General Information About Testing

•Specific application areas (TTCN-3)
•Other tools

6.1.2 – Tool support – Management of testing
•Test management tools: manage testing activities, manage test-ware traceability, test result reporting, test metrics; may be individual support, project related, or department/company related
•Requirements management tools: checking, traceability, coverage
•Bug tracking tools
•Configuration management tools: version and change control, builder
(A small traceability sketch follows.)
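To show what "manage test-ware traceability" can mean in practice, here is a small sketch (the requirement and test case IDs are invented) that maps test cases to the requirements they cover and reports which requirements remain untested.

```python
# Hypothetical requirements-to-tests traceability check.
# Requirement and test case IDs are invented for illustration.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

traceability = {  # test case -> requirements it covers
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-3"},
}

covered = set().union(*traceability.values())
print("Uncovered requirements:", sorted(requirements - covered))  # -> ['REQ-4']
```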

6.1.3 – Tool support – Static testing
•Review support: process support, communications support, team support
•Static analysis: coding standards, WEB site structure, metrics
•Modeling: SQL database management

6.1.4 – Tool support – Test specification
•Test design: from requirements, from design models
•Test stubs and driver generators
•Test data preparation
(A hand-written stub-and-driver sketch follows.)
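To illustrate what "test stubs and driver generators" produce, the sketch below hand-writes a stub and a driver for a hypothetical tariff-lookup dependency; generator tools automate exactly this kind of scaffolding. The billing example and its interface are assumptions for illustration.

```python
# Hand-written equivalent of what a stub/driver generator would emit.
# The billing example and its interface are illustrative assumptions.

def calculate_invoice(customer_id: str, usage: float, tariff_service) -> float:
    """Unit under test: depends on an external tariff service."""
    return usage * tariff_service.rate_for(customer_id)

class TariffServiceStub:
    """Stub: replaces the real service with a canned, predictable answer."""
    def rate_for(self, customer_id: str) -> float:
        return 0.10  # fixed rate, so expected results are easy to compute

def driver() -> None:
    """Driver: calls the unit under test and checks the result."""
    result = calculate_invoice("C-42", 250.0, TariffServiceStub())
    assert result == 25.0, f"expected 25.0, got {result}"
    print("invoice calculation OK")

if __name__ == "__main__":
    driver()
```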

6.1.5 – Tool support – Test execution and logging
•Record and play
•Scripting
•Unit test framework
•Test harness frameworks
•Result comparators
•Coverage measurement
•Security testing support
(A minimal unit test framework example follows.)
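As a minimal example of two of the categories above, the sketch below uses Python's built-in unittest module as the unit test framework, with its assertion methods acting as a simple expected-vs-actual result comparator; the function under test is invented.

```python
# Minimal unittest example: the framework runs the tests and logs results,
# and assertEqual acts as a simple expected-vs-actual result comparator.
# The function under test is a hypothetical example.
import unittest

def discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()  # executes the tests and reports pass/fail
```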

6.1.6 – Tool support – Performance and monitoring
•Dynamic analysis: time dependencies, memory leaks
•Load testing
•Stress testing
•Monitoring
Watch for possible mistakes! (A toy load-test skeleton follows.)
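A toy load-test skeleton: it fires a configurable number of concurrent calls at a target function and reports simple response-time statistics. The target here is a stand-in sleep; a real load-testing tool would call the system under test over its actual interface.

```python
# Toy load-test skeleton: concurrent calls + simple response-time statistics.
# The target function is a stand-in; a real tool would hit the system under test.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def call_system_under_test() -> float:
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # pretend network/processing delay
    return time.perf_counter() - start

def load_test(users: int, requests_per_user: int) -> None:
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(call_system_under_test)
                   for _ in range(users * requests_per_user)]
        times = sorted(f.result() for f in futures)
    print(f"{len(times)} requests: "
          f"avg {sum(times) / len(times):.3f}s, max {times[-1]:.3f}s")

if __name__ == "__main__":
    load_test(users=10, requests_per_user=20)
```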

6.2.1 – Tool support – benefits
•Repetitive work is reduced (e.g. running regression tests, re-entering the same

Page 32: General Information About Testing

test data, and checking against coding standards).
•Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived from requirements).
•Objective assessment (e.g. static measures, coverage and system behavior).
•Ease of access to information about tests or testing (e.g. statistics and graphs about test progress, incident rates and performance).

6.2.1 – Tool support – risks
•Unrealistic expectations of the tool (including functionality and ease of use).
•Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise).
•Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used).
•Underestimating the effort required to maintain the test assets generated by the tool.
•Over-reliance on the tool (as a replacement for test design, or where manual testing would be better).
•Lack of a dedicated test automation specialist.
•Lack of a good understanding of, and experience with, the issues of test automation.
•Lack of stakeholder commitment to the introduction of such a tool.

6.2.2 – Tool support – special considerations
•Test execution tools: significant implementation effort; record-and-play tools are unstable when changes occur; technical expertise is mandatory
•Performance testing tools: expertise in design and in interpreting results is mandatory
•Static analysis tools: many warnings generated; sensitive to build management
•Test management tools: interfacing with other tools (Windows Office, at least) is critical
The future of test tools is much debated (see...).

6.2.2 – Test automation classic mistakes (Shrini Kulkarni)
10. Wild desire to automate 100%.
9. Attempting to automate existing test cases without scrutinizing them for their "suitability" for automation.
8. Mapping test cases to scripts in a 1:1 linear model – falling prey to deceptive traceability and gold-plated reporting.
7. Not building the automation solution bottom-up, leaving no identifiable building blocks in the solution.
6. Trying only one type of automation, or attacking only one layer of the

Page 33: General Information About Testing

application – the farther you go from the code, the messier it gets.
5. Focusing only on test-execution-related tasks.
4. Treating automation as scripting – ignoring generally accepted good software development practices and hygiene.
3. Failure to involve developers from the beginning – not attending to the testability or automatability of the application.
2. Jumping to automation to speed up testing or save cost before fixing testing problems – inadequate, inefficient and broken.
1. Failure to arrive at (formulate) the right mix of human testing and automated test execution.
0. Using automation as the solution to testing problems.

6.2.2 – Tool support – testing in Visual Studio Team System
•Developer: use Test-Driven Development methods, manage unit testing, analyze code coverage, use static code analysis, use the code profiler to handle performance issues
•Tester: manage test cases, manage test suites, manage manual testing, manage bug tracking, record/play WEB tests, run load tests, report test results

6.2.2 – Tool support – testing in an Agile distributed environment
http://agile2008toronto.pbwiki.com/Evolution+of+tools+and+practices+of+a+distributed+agile+team

6.2.2 – Introducing a tool into an organization
Tool selection process:
•Identify requirements
•Identify constraints
•Check available tools on the market (feature evaluation)
•Evaluate the short list (feature comparison): demos, quick pilots (a weighted-scoring sketch follows this list)
•Select a tool
Note: there are many free testing tools available, some of them also online (www.testersdesk.com).
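For the feature comparison step, a weighted scoring matrix is a common technique; the sketch below totals the weighted scores for each short-listed tool. The criteria, weights and 1–5 scores are entirely made up for illustration.

```python
# Hypothetical weighted scoring matrix for the short-list comparison step.
# Criteria, weights (importance) and 1-5 scores are invented examples.

weights = {"fits requirements": 5, "price": 3, "ease of use": 4, "vendor support": 2}

scores = {
    "Tool A": {"fits requirements": 4, "price": 2, "ease of use": 5, "vendor support": 3},
    "Tool B": {"fits requirements": 5, "price": 4, "ease of use": 3, "vendor support": 4},
}

for tool, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{tool}: weighted score = {total}")
```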

ISTQB Foundation Exam guidelines
•40 multiple-choice questions (4 options each)
•1 hour exam
•Score >= 65% (>= 26 correct answers) to pass
•Question mix: 50% K1, 30% K2, 20% K3
K1: The candidates will recognize, remember and recall a term or concept.

Page 34: General Information About Testing

K2: The candidates can select the reasons or explanations for statements related to the topic. They can summarize, compare, classify and give examples for concepts of testing.
K3: The candidates can select the correct application of a concept or technique and/or apply it to a given context.
Questions per chapter:
•Chapter 1 – 7 questions
•Chapter 2 – 6 questions
•Chapter 3 – 3 questions
•Chapter 4 – 12 questions
•Chapter 5 – 8 questions
•Chapter 6 – 4 questions
Example (see others...): Which statement regarding testing is correct?
a) Testing is planning, specifying and executing a program with the aim of finding defects
b) Testing is the process of correcting defects identified in a developed program
c) Testing is to localize, analyze and correct the direct defect cause
d) Testing is independently reviewing a system against its requirements