MTAT.03.159 / Lecture 05 / © Dietmar Pfahl 2014
MTAT.03.159: Software Testing
Lecture 05: Test Lifecycle, Documentation and Organisation
(Textbook Ch. 6, 7, 8)
Dietmar Pfahl, email: [email protected], Spring 2014
Structure of Lecture 5
• Test Lifecycle
• Test Levels (Ch. 6)
• Test Planning and Documentation (Ch. 7)
• Test Organisation (Ch. 8)
– Organization Approaches (Kit’s book Ch. 13)
Test Process – The Management View
Process Types: from Waterfall to Agile
V-model
[Figure: the V-model. Left (build) side, top to bottom: Requirements, Functional specification, Architecture design, Module design, Coding. Right (test) side, bottom to top: Unit testing, Integration testing, System testing, Acceptance testing. Each design artifact on the left is paired with the test level opposite it: Requirements ↔ Acceptance test, Functional specification ↔ System test, Architecture design ↔ Integration test, Module design ↔ Unit test.]
Excursion: Levels, Types, and Phases – not the same thing
(Levels ≠ Types ≠ Phases)
Test Levels
[Figure: the V-model again, with the four test levels highlighted.]
• Test level – a test level is a group of test activities that focus
on a certain level of the test target
– Test levels can be seen as levels of detail and
abstraction
– How big is the part of the system we are testing?
Dimensions of Testing
[Figure: a cube with three axes. Level of detail: unit (module), integration, system. Accessibility: white box, black box. Characteristics: functionality, reliability, usability, efficiency, maintainability, portability. The level-of-detail axis corresponds to the Test Levels.]
Test Types
• Test type – evaluating a system with respect to a number of associated quality goals
– Testing selected quality characteristics
– Testing to reveal certain types of problems
– Testing to verify/validate selected types of functionality or parts of the system
• Common test types
– Functional: Installation, Smoke, Concurrency (Multi-User)
– Non-Functional: Security, Usability, Performance, …
• A given test type can be applied on several test levels and in different phases
Dimensions of Testing
[Figure: the same cube, with the characteristics axis highlighted as Test Types.]
Test Phases
• Test phase – test phases are periods of time in a development process (exclusively) dedicated to testing
• Test phases do not always match test levels or types
– In iterative development, test phases may not match test levels one-to-one
• Often there is unit testing but no unit test phase
• You can have several integration phases, or integration testing without an integration phase
– Phases might match the test levels, as depicted in the V-model
Dimensions of Testing
[Figure: the same cube. Missing dimensions, not shown in the figure: Test Phase; Automated vs. Manual; New test vs. regression test; …]
Testing in different process types
[Figure: in the waterfall model, programmers and testers form separate groups; in agile models, customer, programmers, and testers work together. Idea: testing in collaboration.]
V-model of testing
Assumptions of the V-model
• It helps to have a blueprint laid out in advance
– It is easier to think about a product in terms of a hierarchy of blueprints
– It is good to test each blueprint
– It is good to use each blueprint as a basis for testing
• It is much easier to find faults in small units than in large entities
– Testing of large units can be carried out more easily when the smaller parts are already tested
• It is better to finish a given task before starting work on another task
Benefits of the V-model
• Intuitive and easy to explain
– Matches the familiar waterfall model
– Makes a good model for training people
– Shows how testing is related to other phases/activities of the waterfall process
• Scalable / adaptable to various situations
– If the team is experienced and understands the inherent limitations of the model
• Beats the code-and-fix approach in larger projects
– Better quality
Weaknesses of the V-model
• Document-driven
– Relies on the existence, accuracy, and timeliness of documentation
– Assumes testing at each level is designed based on the deliverables of a single design phase
• Communicates change poorly
– Does not show how changes, fixes, and test rounds are handled (rework!)
• Based on the simplistic waterfall model
– Testing windows get squeezed
– Does not fit iterative development
Testing in different process types
[Figure repeated: the waterfall model separates programmers and testers; agile models put customer, programmers, and testers in collaboration.]
“What is time-paced development?”
– Time is fixed, scope changes, e.g. Scrum:
• 30 days to complete an iteration or sprint
• 90 days to complete release 1
• 90 days to complete release 2
• 180 days for the whole project
Testing in Time-Paced Development
• Testing is not a phase
– But part of each development task
• Software is developed incrementally
– It has to be tested incrementally
• Rapid, time-paced development
– Time-boxed releases and increments
– Deadlines are not flexible
• Scope is flexible
– Testing provides information for scoping
Approaches to Testing in Time-Paced Development
• Automated regression testing
– Automated unit testing
– Test-driven development
– Daily builds and automated tests
• Stabilisation phase or increment
– Feature freeze
– Testing and debugging at the end of the increment or release
• Separate system testing
– Independent testing
– Separate testers or testing team
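The first of these approaches can be made concrete with a small sketch. The function and test names below are illustrative, not from the lecture: a regression suite pins down behaviour that earlier increments delivered, and a daily build re-runs it after every change.

```python
# Hypothetical sketch of an automated regression suite that a daily
# build could run. `discount` stands in for any previously working
# feature that must not be broken by later increments.
import unittest

def discount(price, percent):
    """Apply a percentage discount; behaviour fixed by the tests below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    # Each test records behaviour an earlier increment delivered.
    def test_existing_behaviour_unchanged(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_boundaries_still_hold(self):
        self.assertEqual(discount(99.99, 0), 99.99)
        self.assertEqual(discount(99.99, 100), 0.0)

    def test_invalid_input_still_rejected(self):
        with self.assertRaises(ValueError):
            discount(10.0, 101)

# A daily build would run the whole suite, e.g.:
#   python -m unittest discover
```

Because the suite is automated, re-running it in every increment costs almost nothing, which is what makes the time-boxed deadlines above workable.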
A Combined Testing Approach
Challenges of Testing in Agile / Time-Paced Development
• Requirements change all the time
• Specification documents are never final
• Code is never ‘finished’, never ‘ready for testing’
• Limited time to test
• Need for regression testing in each increment
– Developers always break things
– How can we trust that the code is not broken?
Coping with the Challenges
• Requirements change all the time; specifications are never final
→ Let them change; test design is part of each task
• Code is never ‘finished’, never ready for testing
→ Focus on developing ‘finished’ increments, tracking on the task level
• Not enough time to test; need to regression test everything in each increment
→ Testing is part of each development task; automation is critical
• Developers always break things again – how can we trust the code?
→ Trust comes from building in quality, not from an external testing ‘safety net’
Structure of Lecture 5
• Test Lifecycle
• Test Levels (Ch. 6)
• Test Planning and Documentation (Ch. 7)
• Test Organisation (Ch. 8)
– Organization Approaches (Kit’s book Ch. 13)
Dimensions of Testing
[Figure: the cube again, with the level-of-detail axis highlighted as Test Levels.]
Test Levels
System Level
Test levels vs development paradigm

Level        | OO                 | Procedural
-------------|--------------------|----------------------------------
Unit         | Method             | Function or procedure
Module       | Class              | Group of functions or procedures
Integration  | Cluster of classes | Subsystem
System       | System             | System
Unit Testing
• Unit = smallest testable software component
– Objects
– Procedures / functions
– Reusable components
• Tested in isolation
– Should be planned, and results made public (in the team)
• Usually done by the programmer during development
– A pair tester can help
– The textbook idealizes a heavyweight model for unit testing:
• ”The unit should be tested by an independent tester (someone other than the developer) and the test results and defects found should be recorded as a part of the unit history”
• Also known as component or module testing
(Labs 2 & 3)
Scaffolding
[Figure: a test harness (automated test framework) surrounds the unit under test with drivers and stubs.]
In Labs 1 and 2 you wrote the driver for unit tests; you did not make any stubs.
Unit testing and state
push(s, elem1)
pop(s, x) -> x=elem1
pop(s, x) -> x=?
push(s, elem1)
Show_top(s)=elem1
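The sequences above can be run against a minimal stack. The `Stack` class below is an illustrative sketch (the lecture does not fix an implementation); it shows why unit tests must control state: the result of `pop` depends on what was pushed before, and the second `pop` hits an undefined (here: error) case.

```python
# Minimal illustrative stack for the state-dependent test sequences
# on the slide. The class and its behaviour on empty pop are
# assumptions, not taken from the lecture.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, elem):
        self._items.append(elem)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def show_top(self):
        return self._items[-1]

s = Stack()
s.push("elem1")
assert s.pop() == "elem1"        # pop after one push -> elem1
try:
    s.pop()                      # pop on the now-empty stack -> x = ?
except IndexError:
    pass                         # here: defined as an error
s.push("elem1")
assert s.show_top() == "elem1"   # show_top observes without changing state
```

The same call (`pop`) gives different results in different states, so each test must establish its starting state explicitly.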
Stack Example
How many test cases are needed to cover all feasible branches?
The code contains 4 decisions (D1–D4), and 2 test cases are enough:
– Test case 1: D1: false, D2: true (three times), D3: false
– Test case 2: D1: true, D4: true … false, D3: true
[The stack code with decisions D1–D4 is shown only in the original slides.]
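Since the slide's stack code is not reproduced in this transcript, here is an analogous sketch of the same point: a function with four decisions whose branches can all be covered by just two test cases. The function and its decisions are invented for illustration.

```python
# Illustrative function with four decisions D1-D4. Two test cases
# together take every branch (true and false) of every decision.
def summarize(nums, clip):
    total = 0
    for n in nums:          # D1: loop body entered, or not
        if n < 0:           # D2: negative values clamped to zero
            n = 0
        total += n
    if total > clip:        # D3: cap the total at `clip`
        total = clip
    if not nums:            # D4: empty input has no summary
        return None
    return total

# Test case 1: D1 false (loop skipped), D3 false, D4 true
assert summarize([], 10) is None
# Test case 2: D1 true, D2 true and false, D3 true, D4 false
assert summarize([-1, 5, 100], 10) == 10
```

Counting decisions, as on the slide, gives an upper bound on the tests needed; overlapping paths often let far fewer tests achieve full branch coverage.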
Test-driven development
1. Write a test
2. See it fail
3. Make it run
4. Make it right (refactor)
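The four steps can be sketched end-to-end on a toy function (the FizzBuzz example is a common TDD exercise, not from the lecture): the test is written first, fails while the function is missing, and the implementation below is the "make it run / make it right" result.

```python
# TDD sketch. Step 1: the test class below was written first.
# Step 2: it failed while fizzbuzz() did not exist.
# Steps 3-4: this implementation makes it run, refactored for clarity.
import unittest

def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # Written before the implementation existed (step 1).
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")
```

Each new requirement repeats the cycle: add a failing test, make it pass, then refactor with the whole suite as a safety net.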
Integration testing
• More than one (tested) unit
• Detecting defects
– On the interfaces of units
– In the communication between units
• Helps to assemble a whole system incrementally
• Non-functional aspects if possible
• Done by developers/designers or independent testers
– Preferably developers and testers in collaboration
• Often omitted due to setup difficulties
– Time is more efficiently spent on unit and system tests
Integration Testing – Procedural

Top-down testing
[Figure: the testing sequence starts at Level 1 and proceeds downwards; not-yet-integrated lower levels are replaced by stubs (Level 2 stubs, Level 3 stubs, …).]

Bottom-up testing
[Figure: the testing sequence starts at Level N and proceeds upwards; the not-yet-integrated upper levels are replaced by test drivers.]
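Stubs and drivers can be shown in a few lines. All names here are illustrative assumptions: `report_balance` plays the higher-level unit, `fetch_balance` the lower-level one; the stub gives canned answers for top-down testing, while the driver exercises the lower level directly for bottom-up testing.

```python
# Hedged sketch of integration scaffolding; names are invented.

def fetch_balance(account_id):
    """Lower-level component (Level 2); pretend it queries a database."""
    return {"A-1": 42.0}[account_id]

def report_balance(account_id, fetch=fetch_balance):
    """Higher-level component (Level 1) that depends on Level 2."""
    return "balance: %.2f" % fetch(account_id)

# Top-down: test report_balance before Level 2 is ready, using a stub.
def fetch_stub(account_id):
    return 10.0                      # canned answer, no real lookup

assert report_balance("ANY", fetch=fetch_stub) == "balance: 10.00"

# Bottom-up: a throwaway driver exercises Level 2 directly, before
# any Level 1 code needs to exist.
def driver():
    return fetch_balance("A-1") == 42.0

assert driver()
```

Injecting the dependency (the `fetch` parameter) is one common way to make a unit stub-friendly without changing its production behaviour.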
Integration Testing – OO
System testing
• Testing the system as a whole
• Functional
– Functional requirements and requirements-based testing
• Non-functional
– Performance, stress, configuration, security, ...
– As important as functional requirements
– Often poorly specified
– Must be tested
• Often done by an independent test group
– Collaborating developers and testers
Types of system testing
Regression testing
• Testing changed software
– Retest all
– Selective regression testing
• Regression test suite
• Types of changes
– Defect fixes
– Enhancements
– Adaptations
– Perfective maintenance
[Skoglund, Runeson, ISESE05]
Retest all
• Assumption:
– Changes may introduce faults anywhere in the code
• BUT: expensive, prohibitive for large systems
• Reuse existing test suite
• Add new tests as needed
• Remove obsolete tests
Selective regression testing
• Impact analysis
• Only code impacted by change needs to be retested
• Select tests that exercise such code
• Add new tests if needed
• Remove obsolete tests
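The selection step can be sketched with a toy impact analysis. The coverage map, module names, and test names below are all invented for illustration; real tools derive the test-to-code mapping from coverage data or dependency analysis.

```python
# Hedged sketch of selective regression testing: given a (hypothetical)
# map from test cases to the modules they exercise, select only the
# tests impacted by a change.
COVERAGE = {                       # test -> modules it exercises
    "test_login":    {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile":  {"auth", "profile"},
}

def select_tests(changed_modules, coverage=COVERAGE):
    """Return the tests that exercise at least one changed module."""
    return sorted(
        test for test, mods in coverage.items()
        if mods & set(changed_modules)
    )

# A change to the auth module impacts two of the three tests:
assert select_tests(["auth"]) == ["test_login", "test_profile"]
```

The safety of the approach depends entirely on how accurate and up to date the impact map is; when in doubt, the "retest all" strategy above is the conservative fallback.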
Acceptance testing (α, β)
• Final stage of validation
– Customer (user) should perform it or be closely involved
– Customer can perform any tests they wish
– Final user sign-off
• Approach
– Mixture of scripted and unscripted testing
– Performed in the real operation environment
• Project
– Contract acceptance
– Customer viewpoint
– Validates that the right system was built
• Product
– Final checks on releases
– User viewpoint throughout the development; validation
Structure of Lecture 5
• Test Lifecycle
• Test Levels (Ch. 6)
• Test Planning and Documentation (Ch. 7)
• Test Organisation (Ch. 8)
– Organization Approaches (Kit’s book Ch. 13)
Test Management
• Goal – Reach India
• Policy – Go west
• Plan – Get three ships
• Follow-up – …
Test Planning
• Objectives
• What to test
• Who will test
• When to test
• How to test
• When to stop
IEEE 829-2008: Standard for Software and System Test Documentation
Software Project Management Plan
Goals
• Business
• Technical
• Business/technical
• Political
(goals can be quantitative or qualitative)
Policies
• High-level statements of principle or courses of action
• Govern the activities implemented to achieve stated goals
Hierarchy of test plans
Test Planning – different levels
[Figure: Test Policy (company or product level) → Test Strategy (project level) → Master Test Plan → Detailed Test Plans (one for each test level within a project, e.g. unit, system, ...).
Corresponding documents: master test plan, acceptance test plan, system test plan, unit and integration test plan, evaluation plan.]
The number of levels needed depends on context (product, project, company, domain, development process, regulations, …)
Purpose of test case (planning)
• Organization
– All testers and other project team members can review and use test cases effectively
• Repeatability
– Know what test cases were last run, and how, so that you can repeat the same tests
• Tracking
– What requirements or features are tested?
– The value of tracking information depends on the quality of the test cases
• Evidence of testing
– Confidence (quality)
– Detect failures
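Organization, repeatability, and tracking all point towards recording test cases as structured data. The record below is an illustrative sketch; the field names follow common practice, not a specific standard.

```python
# Hedged sketch: a structured test case record supporting the purposes
# above - review (organization), re-running (repeatability), and
# linking to requirements (tracking). All names are illustrative.
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    case_id: str
    requirement: str          # tracking: what this case covers
    steps: list
    expected: str
    last_run: str = "never"   # repeatability: what ran, and when
    last_result: str = "n/a"

tc = TestCaseRecord(
    case_id="TC-017",
    requirement="REQ-4.2 password reset",
    steps=["open reset form", "submit registered email"],
    expected="confirmation mail sent",
)
# After execution, the record becomes evidence of testing:
tc.last_run, tc.last_result = "2014-05-02", "pass"
```

Kept under version control, such records let anyone repeat exactly the tests that were last run, which is the repeatability goal above.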
Defect Report (Test incident report)
• Summary
• Incident Description
• Impact
(Lab 1)
Defect Report (Test incident report)
Summary
• A summation/description of the actual incident
– Provides enough detail to enable others to understand how the incident was discovered, plus any relevant supporting information
• References to:
– The test procedure used to discover the incident
– Test case specifications that provide the information needed to repeat the incident
– Test logs showing the actual execution of the test cases and procedures
– Any other supporting materials: trace logs, memory dumps/maps, etc.
Defect Report (Test incident report)
Incident Description
• Provides as much detail on the incident as possible
– Especially if there are no other references that describe the incident
• Includes all relevant information not already included in the incident summary, plus any additional supporting information
• Information:
– Inputs
– Expected results
– Actual results
– Anomalies
– Date and time
– Procedure step
– Attempts to repeat
– Testers
– Observers
Defect Report (Test incident report)
Impact
• Describes the actual/potential damage caused by the incident
– Severity
– Priority
• Severity and priority need to be defined so as to ensure consistent use and interpretation, for example:
• Severity – the potential impact on the system
– Mission Critical – application will not function or system fails
– Major – severe problems, but possible to work around
– Minor – does not impact the functionality or usability of the process, but is not according to requirements/design specifications
• Priority – the order in which incidents are to be addressed
– Immediate – must be fixed as soon as possible
– Delayed – system is usable, but the incident must be fixed prior to the next level of test or shipment
– Deferred – the defect can be left in if necessary, due to time or costs
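Encoding these scales as fixed enumerations is one way to get the consistent use the slide asks for. The `triage` default mapping below is an illustrative policy of my own, not part of the lecture's definitions.

```python
# Hedged sketch: the slide's severity and priority scales as enums,
# so every defect report uses the same vocabulary.
from enum import Enum

class Severity(Enum):
    MISSION_CRITICAL = "application will not function or system fails"
    MAJOR = "severe problems, but possible to work around"
    MINOR = "deviates from spec without impacting functionality/usability"

class Priority(Enum):
    IMMEDIATE = "must be fixed as soon as possible"
    DELAYED = "fix prior to next level of test or shipment"
    DEFERRED = "may be left in, due to time or costs"

def triage(severity):
    """One possible policy: derive a default priority from severity.
    (Illustrative; teams would tune this mapping.)"""
    return {
        Severity.MISSION_CRITICAL: Priority.IMMEDIATE,
        Severity.MAJOR: Priority.DELAYED,
        Severity.MINOR: Priority.DEFERRED,
    }[severity]

assert triage(Severity.MAJOR) is Priority.DELAYED
```

Keeping severity (damage) and priority (fixing order) as separate fields preserves the distinction the slide draws: a minor defect can still be high priority, and vice versa.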
Test results report
• Test cases executed
• Versions tested
• Defects found and reported
Standards
• IEEE 829-1998 – Standard for Software Test Documentation
• IEEE 1012-1998 – Standard for Software Verification and Validation
• IEEE 1008-1993 – Standard for Software Unit Testing
These are being superseded by:
• ISO/IEC/IEEE 29119 – Software Testing
Overview of ISO 29119
Test plan according to IEEE Std 829-2008 (Appendix II)
a) Test plan identifier
b) Introduction
c) Test items
d) Features to be tested
e) Features not to be tested
f) Approach
g) Item pass/fail criteria
h) Suspension criteria and
resumption requirements
i) Test deliverables
j) Testing tasks
k) Environmental needs
l) Responsibilities
m) Staffing and training needs
n) Schedule
o) Risks and contingencies
p) Approvals
Test Plan (1)
a) Test plan identifier
b) Introduction
– Product to be tested, objectives, scope of the test plan
– Software items and features to be tested
– References to project authorization, project plan, QA plan, CM plan, relevant policies & standards
c) Test items
– Test items including version/revision level
– Items include end-user documentation
– Defect fixes
– How items are transmitted to testing
– References to software documentation
(Slide not shown in lecture; only illustrative background info)
Test Plan (2)
d) Features to be tested
– Identify test design / specification techniques
– Reference requirements or other specs
e) Features not to be tested
– Deferred features, environment combinations, …
– Reasons for exclusion
f) Approach
– How you are going to test this system: activities, techniques, and tools
– Detailed enough to estimate
– Completion criteria (e.g. coverage, reliability)
– Identify constraints (environment, staff, deadlines)
(Slide not shown in lecture; only illustrative background info)
Test Plan (3)
g) Item pass/fail criteria
– What constitutes success of the testing
– Coverage, failure count, failure rate, number of executed tests, …
– Is NOT the product release criteria
h) Suspension and resumption criteria
– For all or parts of the testing activities
– Which activities must be repeated on resumption
i) Test deliverables
– Test plan
– Test design specification, test case specification
– Test procedure specification, test item transmittal report
– Test logs, test incident reports, test summary reports
(Slide not shown in lecture; only illustrative background info)
Test Plan (4)
j) Testing tasks
– Including inter-task dependencies & special skills
– Estimates
k) Environment
– Physical, hardware, software, tools
– Mode of usage, security, office space
– Test environment set-up
l) Responsibilities
– To manage, design, prepare, execute, witness, check, resolve issues, provide the environment, provide the software to test
m) Staffing and training needs
(Slide not shown in lecture; only illustrative background info)
Test Plan (5)
n) Schedule
– Test milestones in the project schedule
– Item transmittal milestones
– Additional test milestones (e.g. environment ready)
– What resources are needed, and when
o) Risks and contingencies
– Testing project risks
– Contingency and mitigation plan for each identified risk
p) Approvals
– Names, and when approved
(Slide not shown in lecture; only illustrative background info)
Test plan quality criteria
Usefulness – Will the test plan effectively serve its intended functions?
Accuracy – Is the test plan document accurate with respect to any statements of fact?
Efficiency – Does it make efficient use of available resources?
Adaptability – Will it tolerate reasonable change and unpredictability in the project?
Clarity – Is the test plan self-consistent and sufficiently unambiguous?
Usability – Is the test plan document concise, maintainable, and helpfully organized?
Compliance – Does the test plan meet externally imposed requirements?
Foundation – Is the test plan the product of an effective test planning process?
Feasibility – Is the test plan within the capability of the organization that must use it?
Structure of Lecture 5
• Test Lifecycle
• Test Levels (Ch. 6)
• Test Planning and Documentation (Ch. 7)
• Test Organisation (Ch. 8)
– Organization Approaches (Kit’s book Ch. 13)
7 approaches to test organisation
1. Each person’s responsibility
2. Each unit’s responsibility
3. Dedicated resource
4. Test organisation in QA
5. Test organisation in development
6. Centralized test organisation
7. Test technology centre
[Kit, Software Testing in the Real World Ch 13, 1995]
1. Each person’s responsibility
+ Natural solution
– Testing own software
[Figure: one group of product developers under a manager (M); each programmer (P) also tests (T) their own code.]
2. Each unit’s responsibility
+ Solves the dependency problem
– Two tasks
– Double competency?
[Figure: product developers under a manager (M); programmers (P) take on a tester (T) role for another unit’s code.]
3a. Dedicated resource
+ Solves the multiple-task problem
+ Single team
– Management of two types
– Competency provision
[Figure: one team under a manager (M) with both programmers (P) and dedicated testers (T).]
3b. Dedicated resource on a large scale
+ Solves the mgmt problem of 3a
– Where to put the organization?
[Figure: separate product development teams (manager M with programmers P) and a test development team (test manager TM with testers T).]
4. Test organisation in QA
+ Solves the mgmt problem of 3b
– Teamwork problems?
– TDG lost in QAO
– PDG not responsible for final product
[Figure: the Test Development Group sits inside the QA Organization (QAO); the Product Development Group sits inside the Product Development Organization (PDO).]
5. Test organisation in development
+ Solves the mgmt problem of 4
+ May solve the teamwork problem of 4
– Dependent on management communication
[Figure: the Test Development Group sits inside the Product Development Organization (PDO), alongside the Product Development Group.]
6. Centralized test organisation
+ Solves the mgmt problem of 5
+ Career path for test mgmt
– VP key for test
– Teamwork at low level?
– Consistency of methods?
[Figure: the Test Development Group and Product Development Group are peers under a VP.]
7. Test technology centre
+ Solves the consistency problem of 6
– VP key for test
– Teamwork at low level?
[Figure: as in 6, plus a Test Technology Group within software engineering (SE) that owns test methods and tools.]
Which organization should we choose?
• Depends on
– size
– maturity
– focus
• The solution is often a mix of different approaches
Criteria for choice of organization
A. Support for decision making
B. Enhance teamwork
C. Independence / Autonomy
D. Balance testing – quality
E. Assist test management
F. Ownership of test technology
G. Resources utilization
H. Career path
Decision matrix
[Table: rows list the seven organisation approaches (1–7); columns list the criteria A–H plus a Sum column. The cells are left blank on the slide.]
Test Specialist Skills – Technical
• education in software engineering principles and methodologies
• strong coding skills and an understanding of code structure
• understanding of testing principles and practices
• understanding of basic testing strategies, methods, and techniques
• experience to plan, design, and execute test cases and test procedures on multiple levels
• networks, databases, and operating systems
• configuration management
• test-related documents
• define, collect, and analyze test-related measurement data
• testing tools and equipment
• quality issues
• process issues
Test Specialist Skills – Personal and Managerial
• organizational and planning skills
• keep track of, and pay attention to, details
• discover and solve problems
• be creative
• work in a variety of environments
• work with users and clients
• be able to resolve conflicts
• mentor and train others
• strong written and oral communication skills
Recommended Textbook Exercises
• Chapter 6
– 1, 3, 6, 7, 8, 9, 12
• Chapter 7
– 2, 3, 6, 8, 9, 11
• Chapter 8
– 2, 3, 6, 7, 9
Next Week
• Lecture 6:
– Tools, Metrics, and Test Maturity Model
• Lab 6:
– Project presentations
• In addition to do:
– Submit project reports by Thursday, May 15, at 19:00
– Read textbook chapters 6-8 (available via OIS)