Test Architecture v1.5
8/3/2019 Test Architecture v1.5
Test Architecture - Ravi Sharda, April 2011
Setting the context
Cem Kaner (adapted):
A toxic myth about testing: testing = verification. Only if you have contracted for delivery of software, and the contract contains a complete and correct specification (requirements and/or design), does verification cover a good part of testing.
For example, w.r.t. requirements (adapted from [BergerUsecases01]):
They might be ambiguous
They might be incomplete
They may not describe enough detail of use
Not enough of them; missing entire areas of functionality
They might be inaccurate
They might not have been updated when requirements changed (or change requests arrived and were accepted)
They may have assumed general standards of quality attributes (usual response times, fail gracefully, etc.)
Kaner: Verification cannot tell you whether the software will meet stakeholder needs or elicit a positive reaction from users.
Berger (as implied by [BergerUsecases01]): The goal of testing is to find bugs, rather than to make sure the software works.
Still setting the context
Software testing definition (Cem Kaner): Software testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test.
Still setting the context
Alistair Cockburn: Software engineering is built on three legs
Craft
Lifelong learning
Deepening the proficiency in one's craft
New tools and technologies
Cooperative gaming
Every situation (game) is different
No formula for winning the game
Quality of a move in the game is not absolute
Quality of community and communication among members matters enormously
Lessons from lean manufacturing
People hand others decisions; people wait on each other for decisions; some people have a bigger backlog of decisions than they can handle at the moment
(By implication, outputs too)
Test Architect: 1) What are the available testing techniques, practices, tools, technologies, etc.? 2) Can I invent one?
Test Architect: What testing techniques, practices, tools, and technologies do we need to use for this product? How do we plan to cover the product so as to develop an adequate assessment of quality?
Project Mgr: How do I optimize the dependency network?
Functional testing (this does this)
The system under test is viewed as a black box
Emphasis is on the external behavior of the software
Selection of tests is based on specifications
Requirements, and/or
Design specification
Is often used interchangeably with black-box testing, feature testing, behavioral testing
Behavioral testing: a more specific form of functional testing, based on the requirements specification
Functional testing (or more specifically behavioral testing) isn't all testing
In fact it represents just 35-65% of all testing
If we stick to a functional story (this does this), we'll miss all kinds of problems that data and other structural elements can trigger (this does this with that)
Adapted from [LouTechniques], [Beiz95], [BoltonModel05]
Structural testing
The software is viewed as a white-box or a glass-box
Selection of test cases is based on the implementation of the software
Static analysis (code complexity), code churn measures, code paths, etc.
Path coverage, branch coverage, data-flow coverage, etc.
Goal: cause the execution of specific spots in the software entity
Specific statements, program branches, or paths
Examples:
Execute every statement at least once
Test the use of all data objects
Expected results are evaluated based on a set of coverage criteria
A product can be manifested in (or on) concrete, physical parts [BoltonModel05]
Object code, templates, sample data, configuration files, registry settings, user manuals, etc.
Does your test strategy incorporate ideas informed by these physical objects?
Adapted from [LouTechniques], [Beiz95], [BoltonModel05]
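To make the branch-coverage goal above concrete, here is a minimal sketch. The classify_triangle function is an invented example (not from the slides): the test set is chosen so that every branch outcome is exercised at least once.

```python
def classify_triangle(a, b, c):
    """Toy function under test: classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One test per branch outcome: each case forces a different path
# through the code, so together they achieve branch coverage.
branch_tests = [
    ((0, 1, 1), "invalid"),          # non-positive side
    ((1, 2, 5), "not a triangle"),   # triangle inequality violated
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),          # all earlier branches false
]

for args, expected in branch_tests:
    assert classify_triangle(*args) == expected
```

In practice a coverage tool (rather than manual inspection) would confirm that no branch outcome is left unexercised.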
Other terms
Test architecture vs. test design
Test architecture: non-local; things that affect most or a large part of an application, or a group of applications
Test design: local; things that affect local parts
Bugs vs. faults [Beiz95]
Fault implies someone is to blame: carelessness during programming, incompetence, etc.
Bugs just happen; no one is to blame
Heuristic models [BoltonModel05]
When we model something, we focus on certain attributes of it while ignoring others
Gives an opportunity to understand some important aspect
Risk is we might be oblivious to other important things
Good models are often heuristic: a set of guidelines to help us solve a problem
But they are provisional: used for a specific, temporary purpose
They are fallible.
Test Architecture COE:
1) What are the available testing techniques, practices, tools, technologies, etc.?
2) Do we need to invent one?
3) How do I propagate these and help teams in using these?
CRAFT
Lifelong learning
Deepening the proficiency in one's craft
New tools and technologies
Higher quality
Repeatability
Consistency
Improved productivity
Predictability
Improved estimation
Better prioritization
Testing techniques - Classification
Coverage-based techniques
Function testing
Feature or function integration testing
Menu tour
Domain testing
Equivalence class analysis
Boundary testing
Best representative testing
Input field test catalogs
Logic testing
State-based testing
Path testing
Statement or branch coverage
Configuration coverage
Specification-based testing
Combination testing
"A wise navigator never relies solely on one technique." - N. Bowditch
Activity-based techniques
Regression
Scripted testing
Smoke testing
Exploratory testing
Guerilla testing
Scenario testing
Installation testing
Load testing
Long sequence testing
Performance testing
Testing techniques: another classification
Black-box or functional testing techniques
Function tests
Domain testing
Specification-based testing
Risk-based testing
Stress testing
Regression testing
Performance-based
Finite-state machine-based
Exploratory testing
Decision table
Orthogonal arrays and all pairs
Etc.
White-box or structural testing techniques
Control-flow testing
Data-flow testing
Mutation testing
Reference models for code-based testing
Etc.
Etc.
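Combination-oriented techniques such as orthogonal arrays and all-pairs can be illustrated with a small sketch. The greedy generator below is my own illustration, not from the slides, and real pairwise tools use more sophisticated algorithms; it simply keeps any configuration that covers a not-yet-covered pair of parameter values.

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise test selection: keep any candidate configuration
    that covers at least one not-yet-covered pair of parameter values."""
    names = list(params)
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((i, va), (j, vb)))
    tests = []
    for candidate in product(*(params[n] for n in names)):
        pairs = {((i, candidate[i]), (j, candidate[j]))
                 for i, j in combinations(range(len(names)), 2)}
        if pairs & uncovered:
            tests.append(candidate)
            uncovered -= pairs
        if not uncovered:
            break
    return tests

# 3 browsers x 2 operating systems x 2 locales = 12 exhaustive
# combinations; pairwise selection gets by with fewer tests while
# still exercising every value pair together at least once.
config = {
    "browser": ["firefox", "chrome", "ie"],
    "os": ["windows", "linux"],
    "locale": ["en", "de"],
}
tests = all_pairs(config)
assert len(tests) < 12  # fewer than the full Cartesian product
```

The payoff grows quickly: with many parameters, pairwise suites stay small while the exhaustive product explodes.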
Testing techniques: function testing
Identify all functions or features (from requirements, user manuals, walking through the interface, etc.)
Test them one at a time
Are highly credible
Easy to evaluate
Not particularly powerful
Adapted from Cem Kaner
Testing techniques: specification-based testing
Check the program against every claim made in:
requirements,
design,
user interface description,
published model, user manual, etc.
When specs are taken seriously, spec-based testing is important
The spec is part of the contract; or, products must conform to their advertisements, etc.
Specification-based tests are often weak
Adapted from Cem Kaner
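One way to sketch "check the program against every claim" is a table of claims driving assertions. The discount rule and apply_discount function below are invented for illustration; they are not from the slides.

```python
def apply_discount(total):
    """Hypothetical function under test, implementing an invented spec:
    'orders over 100 receive a 10% discount; others are unchanged'."""
    return round(total * 0.9, 2) if total > 100 else total

# Each row pairs a claim from the (invented) spec with a concrete check,
# so every claim the spec makes is tested at least once.
spec_claims = [
    ("orders over 100 get 10% off", 150.0, 135.0),
    ("orders of exactly 100 get no discount", 100.0, 100.0),
    ("small orders are unchanged", 40.0, 40.0),
]

for claim, amount, expected in spec_claims:
    assert apply_discount(amount) == expected, claim
```

Keeping the claim text in the table makes a failing assertion point straight back to the specification sentence it contradicts.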
Testing techniques: specification-based
Structural risk analysis example from a design spec:
[pointing at a box] What if this function fails? Can this function ever be invoked at the wrong time?
[pointing at any part of the diagram] What error checking do you do here?
[pointing at an arrow] What exactly does this arrow mean? What would happen if it was broken?
Try annotating the box with icons for test ideas
[Architecture diagram: Browser, Web Server, App Server, Database Layer, Real-time Monitoring, Auction Server]
Src: Michael Bolton
Testing techniques
Src: Michael Bolton
Testing techniques: domain testing
Stratified sampling strategy for choosing a few test cases from the near-infinity of candidate test cases
Divide/partition a domain into subdomains (equivalence classes)
Then select representatives of each subdomain
E.g., a good set of domain tests for a numeric variable hits:
Every boundary value: min, max, a value barely below the min, a value barely above the max
Every extreme value: empty, null, negative (when positive expected), 0 (when non-zero expected)
Boundary/extreme-value errors are very common in practice; hence, tests for them have higher power than tests that don't use best representatives
Skip some of the subdomains
E.g., people often skip cases that are expected to lead to error messages
Src: Adapted from [BoltonDom06], [KanerDomain]
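The boundary-value guidance above can be sketched in a few lines. The valid_age validator and its 0..120 range are an invented example, not from the slides.

```python
def valid_age(n):
    """Hypothetical validator under test: accepts integer ages in [0, 120]."""
    return isinstance(n, int) and 0 <= n <= 120

def boundary_candidates(lo, hi):
    """Domain-test inputs hitting every boundary: min, max, and the
    values barely outside them."""
    return [lo - 1, lo, hi, hi + 1]

accepted = [n for n in boundary_candidates(0, 120) if valid_age(n)]
rejected = [n for n in boundary_candidates(0, 120) if not valid_age(n)]

assert accepted == [0, 120]   # on-boundary values accepted
assert rejected == [-1, 121]  # barely-outside values rejected
```

Off-by-one mistakes in the validator (for example writing 0 < n) would be caught immediately by the barely-inside/barely-outside pair.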
Testing techniques: domain testing
Domain testing: how to divide into subdomains
Intuitive equivalence: two test values are equivalent if they are so similar to each other that it seems pointless to test both
Specified equivalence: two test values are equivalent if the specification says that the program handles them in the same way
Paths: two test values are equivalent if they would drive the program down the same path (e.g., execute the same branch of an IF)
Risk-based: two test values are equivalent if, given your theory of possible error, you expect the same result from each
Etc.
Examples of usage
http://www.testingeducation.org/k04/DomainExamples.htm
http://www.testingeducation.org/k04/documents/bbst5_2005.pdf
http://www.testingeducation.org/k04/documents/bbst6_2005.pdf
Src: Adapted from [BoltonDom06], [KanerDomain]
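Specified equivalence can be sketched as a partition function: two inputs are equivalent when the spec says they are handled the same way, so one representative per class suffices. The field spec below (integers 1..100) is invented for illustration.

```python
def partition(value):
    """Classify inputs for a field whose (invented) spec accepts
    integers 1..100: each returned label is one equivalence class."""
    if not isinstance(value, int):
        return "non-numeric"
    if value < 1:
        return "below range"
    if value > 100:
        return "above range"
    return "valid"

# One representative per subdomain covers each specified behavior once.
representatives = {"x": "non-numeric", -5: "below range",
                   50: "valid", 999: "above range"}
for value, expected_class in representatives.items():
    assert partition(value) == expected_class
```

Combining this with the boundary heuristic means picking the representatives at the edges of each class rather than from its middle.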
Testing techniques: risk-based testing
A program is a collection of opportunities for things to go wrong
For each way that you can imagine the program failing, design tests to determine whether the program actually will fail in that way
Some bugs are not functional problems, but fall into other quality risk categories: states, installation or uninstallation, operations, maintenance, regression, data quality, date and time handling, configuration and compatibility, performance and reliability, stress and capacity, etc.
Examples:
Customer-facing portal: have you tested for cross-site scripting, SQL injection, and other typical web application attacks?
B2B gateway application: can the system handle multiple identical messages coming in (idempotency)?
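The B2B gateway idempotency risk can be probed with a test like the sketch below. The OrderGateway class is hypothetical, standing in for the system under test.

```python
class OrderGateway:
    """Hypothetical gateway that must apply each message at most once."""
    def __init__(self):
        self._seen_ids = set()
        self.orders = []

    def receive(self, message):
        # Deduplicate on the message id so partner retries are harmless.
        if message["id"] in self._seen_ids:
            return "duplicate-ignored"
        self._seen_ids.add(message["id"])
        self.orders.append(message["payload"])
        return "accepted"

# Risk-based test: send the same message twice, as a flaky partner
# network might, and check that only one order is created.
gateway = OrderGateway()
message = {"id": "msg-42", "payload": "order-1"}
assert gateway.receive(message) == "accepted"
assert gateway.receive(message) == "duplicate-ignored"
assert gateway.orders == ["order-1"]
```

The test is driven by a failure theory (duplicate delivery), which is exactly the risk-based pattern: imagine a way the program could fail, then try to make it fail that way.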
Testing techniques: risk-based testing
A generic risk list
Complex: anything disproportionately large, intricate, or convoluted
New: anything that has no history in the product
Changed: anything that has been tampered with or "improved"
Upstream dependency: anything whose failure will cause cascading failure in the rest of the system
Downstream dependency: anything that is especially sensitive to failures in the rest of the system
Critical: anything whose failure could cause substantial damage
Precise: anything that must meet its requirements exactly
Popular: anything that will be used a lot
Strategic: anything that has special importance to your business, such as a feature that sets you apart from the competition
Third-party: anything used in the product, but developed outside the project
Distributed: anything spread out in time or space, yet whose elements must work together
Buggy: anything known to have a lot of problems
Recent failure: anything with a recent history of failure
Src: James Bach, Heuristic Risk-Based Testing, Software Testing and Quality Engineering Magazine, 11/99
Testing techniques: finite-state machine based
Model a program as a finite state machine that runs from state to state in response to events (such as new inputs)
Tests can be selected in order to cover the states and transitions
In each state, does it respond correctly to each event?
Suited for transaction-processing, reactive, embedded, and real-time systems
Src: [Sweebok01] [KanerDesign]
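A minimal sketch of transition coverage, using the classic turnstile as the modelled machine (my example, not from the slides): enumerate the transitions, then check that a chosen event sequence exercises every one.

```python
# State machine for a toy turnstile: (state, event) -> next state.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(state, events):
    """Drive the machine, recording which transitions were exercised."""
    covered = set()
    for event in events:
        covered.add((state, event))
        state = TRANSITIONS[(state, event)]
    return state, covered

# Test sequence chosen to cover every transition at least once.
final, covered = run("locked", ["push", "coin", "coin", "push"])
assert covered == set(TRANSITIONS)  # full transition coverage
assert final == "locked"
```

The same coverage bookkeeping scales to real models: the set difference between the model's transitions and the exercised ones tells you exactly which event-in-state responses are still untested.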
Testing techniques: exploratory testing
Not a replacement for the sustained engineering necessary for the long-term maintenance of software releases
Not purely spontaneous; needs extensive research:
studying other competitive products/systems,
failure histories of this and analogous systems,
the weaknesses of the product,
interviewing programmers,
reading specifications, etc.
Might use any or all of these techniques:
Domain
Specification-based
Stress
Risk-based
Testing techniques: exploratory testing
Example
Src: James Bach
Testing Web applications
Concerns
Functional correctness aspects
Recoverability from errors
Browser compatibility and configuration
Usability: understandability, learnability, operability, attractiveness
Business rules: checking for accurate representation of business rules
Transaction accuracy: checking whether transactions complete accurately and whether cancelled transactions are appropriately rolled back
Data validity and integrity: valid formats of enterable data and proper character sets
Security: vulnerability analysis (unvalidated input, broken access control and session management, cross-site scripting flaws, SQL injection flaws, improper error handling, external intrusion, protection of secured transactions, viruses, access control, etc.)
Performance: concurrency, stress, throughput, response times
Etc.
Tools: HTML test tools, site validation, general-purpose Web test tools (GUI capture and playback), Web security tools, Web load and performance testing tools, site monitoring tools
More tools and technologies
Web services testing
Concerns:
Testing the transport layer (HTTP/S, JMS, FTP, etc.)
Functional correctness
Regular Web services testing using good, bad, and unexpected inputs
Service design (WSDL) validation
Message and schema validation, schema version verification, and data transformation
Security policy validation
Performance and load testing
Testing for WS-* standards and related industry standards
SLA and QoS testing
Interoperability testing, say using the WS-I Basic Profile, for Web services
Communication protocol compatibility tests
Testing for idempotency, etc.
Etc.
Tools: Web services testing tools including SOAtest and soapUI, etc.
Test data generation tools and strategies
Production sampling, starting from scratch, seeding data, generating from databases
Reverting data to a known state
Testing a data warehouse
Concerns [PerryTesting00]:
Inaccurate or incomplete data in the data warehouse
Losing an update to a single data item
Inadequate audit trail to reconstruct transactions
Unauthorized access to data
Inadequate service level
Placing data in the wrong calendar period
Improper use of data
Loss of continuity of processing
Etc.
Security concerns
Whether the application meets security needs. Examples include user authentication, secure data storage and transmission of specific fields (such as encryption), and verifying that sensitive data is not stored in logs
Identifying security vulnerabilities of applications in the given environment. Examples include buffer overflow, SQL injection, cross-site scripting, parameter tampering, cookie poisoning, hidden fields, debug options, unvalidated input, broken authorization, broken authentication, and session management
Tools: network scanning, vulnerability scanning, penetration testing, etc.
Testing for robustness
Concerns:
How sensitive is the system to erroneous inputs and changes in its operational environment?
Verify that the application can recover using current backups
Test failover and redundancy operations (DB, application server, etc.)
High-availability tests, say verifying that the system gracefully and quickly recovers from hardware and software failures without adversely impacting its operation
Verifying that failed transactions roll back correctly
Mostly manual
Performance concerns:
Volume, load, and stress tests
Identification of critical transactions
Initial analysis of performance data
Tools: LoadRunner, WinRunner, etc.
More tools and technologies
Mobile web application testing
Concerns [NguyenWebTest03]:
Add-on installation tests
Data synchronization related tests
UI implementation and limited usability tests
Browser-specific tests
Platform-specific tests
Configurability or compatibility tests (cross devices, cross OS, cross browsers, cross versions, cross languages, graphic formats, sound formats, video formats, etc.)
Connectivity tests
Performance and security tests
Etc.
Tools: device and browser emulators, Web-based mobile phone emulators and WML validators, desktop WAP browsers, etc.
Other examples
Rich Internet Applications: AJAX, Flash/Silverlight, FlashFX, AIR, etc.
Database testing
Etc.
Testing
Ajax testing
Concerns:
Back button, bookmarking, and browser loading controls, especially when you use Ajax to affect navigation or workflow
Tests on different browser types
UI testing: since Ajax Web apps rely on stateful asynchronous client/server communication and client-side manipulation of the DOM tree, they are fundamentally harder to test automatically
Ajax security
Server-side: the same server-side security schemes as regular web apps
Client-side: JS code is visible to a user/hacker (obfuscation or compression may help)
Rules testing
Business process management and workflow testing
Desktop UI testing
Test coverage
Functional
Data
Platform
Operations
Time
"There were more than a million test cases written for Microsoft Office 2007." - Alan Page et al.
Test coverage: functional coverage
Print testing example
Setup, preview, zoom
Print range, print copies, scale to paper
Print all, current page, or a specific range
Choose printer, printer properties, paper size and type
Print to file instead of to a printer
Focusing on functional coverage
Menu and dialog tours: choose every option, click every button, fill in every field
Mouse tours: don't forget right-clicks, Shift-click, Ctrl-click, Alt-click, etc.
Click frenzy: drag and drop
Keyboard tours: don't forget key combinations
Error tours: search for error messages inside resource tables
Other forms of guided tours: see Mike Kelly's FCC CUTS VIDS (Google it!)
Test what it does
Src: Michael Bolton, Understanding Test Coverage
Test coverage: data coverage
Print testing example
Content in documents (text, graphics, links, objects)
Types of documents (letter, envelope, book)
Size or structure of documents (empty, huge, complex)
Data about how to print (zoom factor, number of copies)
Data about the hardware
Focusing on data coverage
Test using known equivalence classes and boundaries
Orient testing towards discovery of previously unknown classifications and boundaries
Not just testing at known or rumoured boundaries
Increase the quantity and ranges of input, driving toward infinity and zero
Don't forget to drive output values too
Descriptions of elements of a system include data; what data can we vary in those descriptions?
Test what it does it to
Src: Michael Bolton, Understanding Test Coverage
Test coverage: time coverage
Print testing example
Try different network or port speeds
Print documents one right after another, and after long intervals
Try time-related constraints: spooling, buffering, timeouts
Try printing hourly, daily, month-end, and year-end reports
Try printing from two workstations at the same time
Try printing again, later
Test how it's affected by time
Src: Michael Bolton, Understanding Test Coverage
Extent of test coverage
Smoke and sanity
Can this thing even be tested at all?
Common and critical
Can this thing do the things it must do?
Does it handle happy paths and regular input? Can it work?
Complex, extreme, and exceptional
Will this thing handle challenging tests, complex data flows, malformed input, etc.? Will it work?
Src: Michael Bolton, Understanding Test Coverage
Test Architecture COE:
In the given situation/context, what testing techniques, practices, tools, and technologies do we need to use?
COOPERATIVE GAMING
Every situation (game) is different
No formula for winning the game
Quality of the move in the game is not absolute
Quality of community and communication among members matters enormously
Context-Driven Testing
http://www.context-driven-testing.com/
Choosing test techniques, testing tools & technologies
A multi-dimensional problem [KanerDesign]
Forces:
Objectives (of a given testing project)
The context
Product/software
Quality criteria
Risks
Project factors (constraints, etc.)
No one testing technique fits all needs; we often need many of them
Experience matters
How do we capture collective experience and make it easily accessible to all?
Cooperative gaming: role of the test architecture COE
Influencing, mentoring, coaching, and training execution teams
Techniques, tools, and technologies that are relevant to a given project
Collaboration with specialists on specific types of testing aspects
Definition of a process that forms the basis for optimizing the test architecture lifecycle dependency network
Reviewing and providing inputs and technical assistance on test design and strategy to execution teams
The importance of domain knowledge
An expert tester in Microsoft's Outlook group may not be an expert tester in another context, say an avionics group at Boeing
A domain is a loose grouping of similar things
Industry domains (telecom OSS/BSS) and sub-domains (fulfillment, assurance, billing, etc.)
Technical domains (rules and related infrastructure, business process management, Rich Internet Applications, and so forth)
Company-specific domains (Mass Markets Ordering, Out of Region Billing, eCommerce, etc.)
How does it help?
Constrains the set of techniques, practices, tools, and technologies one must learn
Enables testers to use a vocabulary that is understood by others in the same domain
Failures commonly found in a domain can be used to determine risks in the system under test
A poorly stated (and poorly conceived) test strategy
"We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification."
Test cases and procedures should manifest the test strategy
Src: James Bach, Test Strategy: What is it? What does it look like?
Test strategy: An example
What is the product?
An application to help people, and teams of people, make important decisions.
What are the key potential risks?
It will suggest the wrong decisions
People will use the product incorrectly
It will incorrectly compare scenarios
Scenarios may become corrupted
It will not be able to handle complex decisions
How could we test the product so as to evaluate the actual risks associated with it?
Test strategy: An example
How could we test the product so as to evaluate the actual risks associated with it?
Understand the underlying algorithm
Simulate the algorithm in parallel
Capability-test each major function
Generate large numbers of decision scenarios
Create complex scenarios and compare them
Review documentation and help
Test for sensitivity to user error
References
[Berard94] Berard, Edward V. "Object-Oriented Design." In MARC94, pp. 721-729
[Northrup94] Northrup, Linda M. "Object-Oriented Development." In MARC94, pp. 729-737
[Beiz95] Beizer, Boris. Black-Box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons. 1995
[LouTechniques] Lu Luo, "Software Testing Techniques: Technology Maturation and Research Strategy", Institute for Software Research International, Carnegie Mellon University
[Kaner09] Cem Kaner, Automated Testing @ RIM, 2009
[BoltonDom06] Michael Bolton, "Master of Your Domain", Better Software, Vol. 8, No. 9, Oct 2006, http://www.developsense.com/articles/2006-10-MasterOfYourDomain.pdf
[Sweebok01] Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society, May 2001
[BoltonModel05] Michael Bolton, "Elemental Models", Better Software, Vol. 7, No. 8, October 2005, http://www.developsense.com/articles/2005-10-ElementalModels.pdf
[KanerDomain] Cem Kaner, Teaching Domain Testing: A Status Report, http://www.testingeducation.org/a/tdtsr.pdf
[KanerDesign] Cem Kaner and James Bach, Black Box Software Testing, Part 8: Test Design, http://www.testingeducation.org/BBST/index.html
[BergerUsecases01] Bernie Berger, The Dangers of Use Cases Employed as Test Cases, 2001, http://www.testassured.com/docs/Dangers.htm
More references
[NaikTesting08] Naik, Sagar, and Piyu Tripathy. Software Testing and Quality Assurance: Theory and Practice. John Wiley & Sons. 2008
[KanerContext02] Kaner, Cem, James Bach, and Bret Pettichord. "Chapter 2 - Thinking Like a Tester". Lessons Learned in Software Testing: A Context-Driven Approach. John Wiley & Sons. 2002
[PerryTesting00] Perry, William E. "Chapter 25 - Testing a Data Warehouse". Effective Methods for Software Testing, Second Edition. John Wiley & Sons. 2000
[NguyenWebTest03] Nguyen, Hung Q., Bob Johnson, and Michael Hackett. "Chapter 20 - Testing Mobile Web Applications". Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems, Second Edition. John Wiley & Sons. 2003
[BinderTestPatterns] Binder, Robert. Testing Object-Oriented Systems: Models, Patterns, and Tools
[PageTestAtMicrosoft09] Page, Alan, Ken Johnston, and Bj Rollison. How We Test Software at Microsoft. Microsoft Press. 2009
[FarellManagingTesting08] Farrell-Vinay, Peter. Manage Software Testing. Auerbach Publications. 2008
Version history
Version No. | Date | Author(s) | Details of Change
1.0 | March 8, 2010 | Ravi Sharda | First published version
1.1, 1.2 | Nov. 10, 2010 | Ravi Sharda | Made some minor changes to add Web testing, Ajax testing, mobile web testing, etc.