Testing & Software Metrics CM602 Effective Systems Development.
Testing & Software Metrics
CM602 Effective Systems Development
Objectives
In this lecture we will consider:
Validation, verification and testing in UML
Objectives of software metrics
Planning a metrics programme
What are metrics
GQM and other techniques
“Dos and Don’ts”
Validation and Verification
Validation refers to a set of activities that ensure that the software that has been built is traceable to customer requirements: “Are we building the right product?”
Verification refers to the set of activities that ensure that software correctly implements a specific function: “Are we building the product right?”
(Pressman, 2000)
Validation in UML
Use the design walkthrough technique to check:
classes (including interface objects) and class diagrams
interaction (sequence) diagrams
collaboration diagrams
state diagrams
data dictionary (possibly)
Check for consistency between models
Check for completeness and redundancy
Check for correctness of all OO models
Difficulties in Validation
It is difficult to know exactly what is being looked for in validation.
We are looking for anything that might make the final system less useful to the customer or user than it should be.
To do validation effectively, we need to involve the customer.
Software Validation
Demonstrate conformity with requirements.
Check that:
all functional requirements are satisfied
all behavioural characteristics are achieved
all performance requirements are attained
documentation is correct
Two possible conditions after testing:
conformance to requirements
non-conformance: create a defect list
Verification in UML
Testing of one explicit thing (the product) against another (the specification).
Verify that the use cases described in the UML model satisfy the requirements described in the requirements specification.
Verify that the classes are capable of providing the use cases.
Verify that the code corresponds to the classes in the design.
OO Testing Strategies
Unit testing – class (or component) level: focuses on the operations and the state behaviour of the class.
Integration testing – subsystem level: test that combinations of components work as expected.
Validation testing – system testing: focuses on user interactions and user-visible actions of the system.
Unit Test Cases
Test cases should focus on the class (or component).
Test cases are primarily derived from sequence and class diagrams.
We need to consider the implications of inheritance in test cases (see the sketch below).
White box test cases are written with knowledge of the internal workings of the test item.
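As a rough illustration of these points, the sketch below uses Python's unittest module to write white-box test cases for a hypothetical Account class and a SavingsAccount subclass; the classes, operations and values are invented for this example and are not from the lecture. A factory method in the test case lets the inherited tests be re-run against the subclass, which is one way of handling the implications of inheritance noted above.

```python
import unittest


# Hypothetical classes under test, invented for this example.
class Account:
    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


class SavingsAccount(Account):
    def add_interest(self, rate):
        self.balance += self.balance * rate


class AccountTests(unittest.TestCase):
    # Factory method: a subclass of the test case can substitute a subclass
    # of Account, so the inherited operations are re-tested in the subclass.
    def make_account(self, balance=0.0):
        return Account(balance)

    def test_deposit_updates_state(self):
        # White-box test: we know deposit() mutates the balance attribute.
        acc = self.make_account(100.0)
        acc.deposit(50.0)
        self.assertEqual(acc.balance, 150.0)

    def test_deposit_rejects_non_positive_amount(self):
        acc = self.make_account()
        with self.assertRaises(ValueError):
            acc.deposit(0)


class SavingsAccountTests(AccountTests):
    # Re-runs every inherited test against the subclass, then adds tests
    # for the behaviour the subclass introduces.
    def make_account(self, balance=0.0):
        return SavingsAccount(balance)

    def test_add_interest(self):
        acc = self.make_account(200.0)
        acc.add_interest(0.05)
        self.assertAlmostEqual(acc.balance, 210.0)


if __name__ == "__main__":
    unittest.main()
```

Running `python -m unittest` on this file executes both the inherited tests (against the subclass) and the subclass-specific test.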
System Testing – Done by Developers
Exercise the whole system using use cases as the basis for many of the test scripts.
Also includes:
Recovery testing
Security testing
Stress testing (quantity, frequency, volume, growth)
Performance testing (run time of software, response times)
Acceptance Testing – Done by Users
Exercise the whole system using use cases as the basis for many of the test scripts.
Also includes:
Recovery testing
Security testing
Stress testing (quantity, frequency, volume, growth)
Performance testing (run time of software, response times)
Metrics: Your starter for ten.
Two programmers are working for the same company, though not on the same project. In one week of work, Fred produces half as much again as Jim, measured in lines of source code. What do you conclude from this?
Software Metrics
Measurement is defined as a process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. (Fenton, 1994)
Direct and Indirect
We measure by direct (most accurate) and indirect measurements. Many quality factors are measured indirectly.
Direct or indirect?
Speed = Distance / Time
Internal/External Attributes
Typical internal attributes: time, effort, and the number of events of a particular type which occurred during the process, e.g. number of defect reports raised. Others include size and modularity.
Typical external attributes: cost, controllability and stability, reliability of the code, readability of the documents.
Problems with Measurement
Costs incurred: the result of any business activity should be related to the business needs (i.e. reduce cost). There is no point measuring for the sake of measuring.
Disruptions caused: developing and using the right method (consistency in results and repeatability).
Basics
We need to establish whether we are measuring to predict or to assess. All metrics must be validated.
Next question: when do we measure what?
Life-cycle stages and metrics (Manns T & Coleman M, 1996)
Life-cycle stages: requirement definition, structural design, detailed design, coding, unit testing, integration testing.
Metrics collected across these stages include: document metrics, size of product, system structure, defect detection and removal effort, effectiveness of the defect correction process, requirements faults and changes, structural design faults and changes, detailed design faults and changes, changes in the number of modules, configuration management system, test planning, test effectiveness, and effectiveness of test planning.
Software Metrics Considerations
Cost oriented: labour cost (normal time), £/hour; labour cost (overtime), £/hour
Size oriented: lines of source code (000s); project elapsed time (months); number of project staff (people); project effort (person-hours); system documentation (pages)
Function oriented: number of tasks (number); task complexity (scale); number of system interfaces (number)
Productivity oriented: output (LOC/person-month); documentation produced (pages/person-month)
Function Point Analysis
An algorithmic modelling technique proposed by Albrecht.
A productivity measurement based on function points.
A method based on functionality rather than size.
Useful analogies:
Building a house: each type of house has a configuration of windows, rooms, fittings etc., all with a norm for construction upon which the surveyor bases his cost estimates. FPA uses the software design as a blueprint and the design components as its configuration.
Standard costs in accounting: everything in a business has a cost.
Function Point Analysis
Function points are computed by counting the software characteristics:
number of user inputs
number of user outputs
number of user inquiries
number of files
number of external interfaces
A complexity value is associated with each of the counts.
A Function Point count is computed using the formula:
FP = count-total × [0.65 + 0.01 × SUM(Fi)]
Once calculated, function points are used in a way analogous to LOC (a sketch of the calculation follows below).
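A minimal sketch of this calculation in Python, assuming invented characteristic counts, average-complexity weights and Fi adjustment ratings; none of these figures come from the lecture.

```python
# Sketch of a Function Point calculation; all figures are invented.

# Each characteristic count is multiplied by a weight; simple/average/complex
# weights differ, and average-complexity weights are assumed here.
counts = {
    "user inputs": (24, 4),          # (count, assumed weight)
    "user outputs": (16, 5),
    "user inquiries": (22, 4),
    "files": (4, 10),
    "external interfaces": (2, 7),
}
count_total = sum(n * w for n, w in counts.values())

# Fourteen complexity adjustment factors Fi, each rated 0 (no influence)
# to 5 (essential); the ratings below are illustrative only.
f_ratings = [3, 2, 4, 4, 3, 4, 5, 3, 4, 2, 3, 5, 4, 2]

# FP = count-total x [0.65 + 0.01 x SUM(Fi)]
fp = count_total * (0.65 + 0.01 * sum(f_ratings))

print(f"Unadjusted count: {count_total}")       # 318 with the figures above
print(f"Adjusted function points: {fp:.1f}")    # 359.3 with the figures above
```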
GQM
1. Set goals specific to needs in terms of purpose, perspective and environment.
2. Refine the goals into quantifiable questions that are tractable.
3. Deduce the metrics and data to be collected (and the means for collecting them) to answer the questions (see the sketch below).
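As a sketch of how a goal can be refined into questions and metrics, the snippet below models a GQM tree as a simple Python data structure; the goal, questions and metric names are placeholders invented for illustration, not part of the lecture.

```python
from dataclasses import dataclass, field


# Minimal representation of a GQM tree: one goal, refined into questions,
# each answered by one or more metrics. All names here are placeholders.

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)


@dataclass
class Goal:
    purpose: str
    perspective: str
    environment: str
    questions: list = field(default_factory=list)


goal = Goal(
    purpose="Improve defect detection before release",
    perspective="Project manager",
    environment="In-house development team",
    questions=[
        Question("How many defects escape to system testing?",
                 ["defects found in system test", "defects found in unit test"]),
        Question("Where are escaping defects introduced?",
                 ["defects classified by life-cycle stage of origin"]),
    ],
)

for question in goal.questions:
    print(question.text, "->", ", ".join(question.metrics))
```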
GQM phases
Planning: a project is selected and a project plan is produced.
Definition: produces the goals, questions, metrics and hypotheses.
Data Collection: produces the collected data.
Interpretation: measurement results provide answers to the questions and an evaluation of goal attainment.
Example
Goal – reduce the helpdesk maximum turnaround time for basic queries to below 48 hrs.
Questions:
What proportion of queries take longer than 48 hrs?
What types of queries take longer than 48 hrs?
What throughput of queries per person leads to unacceptable turnaround time?
Example metrics
Metrics:
Average turnaround time for the last year.
Number of queries dealt with.
Number of queries taking > 48 hrs.
Correlation of staff on duty with turnaround time.
Correlation of query type with turnaround time.
Etc. (a sketch of how such metrics might be computed follows below).
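As a sketch of how the first two questions might be answered from raw helpdesk records, the snippet below assumes each query is logged with a type and a turnaround time in hours; the sample data and record layout are invented for illustration.

```python
# Illustrative helpdesk records: (query type, turnaround time in hours).
queries = [
    ("password reset", 6), ("account setup", 30), ("printer fault", 55),
    ("password reset", 12), ("network access", 72), ("printer fault", 49),
]

LIMIT_HOURS = 48

# Metric: number and proportion of queries taking longer than 48 hrs.
late = [(kind, hours) for kind, hours in queries if hours > LIMIT_HOURS]
proportion_late = len(late) / len(queries)

# Metric: which query types exceed the limit.
late_types = sorted({kind for kind, _ in late})

print(f"Queries over {LIMIT_HOURS} hrs: {len(late)} of {len(queries)} "
      f"({proportion_late:.0%})")
print("Types over the limit:", ", ".join(late_types))
```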
What can goals relate to?
Products, which are output from processes: artefacts, deliverables or documents which arise from the processes.
Processes: any software-related activities which take place over time.
Resources, which are inputs to processes: e.g. person-hours, processing time, office space.
Checklist for using GQM
Defined human resources are in place
All necessary resources are available
Goals are identified
The project team supports the goals
Project plans are available
A communication strategy is in place
Metrics are defined and are consistent with goals and questions
GQM/measurement/analysis plans are available and agreed
Management are fully involved
A trial measurement period is held
Making the Decision to use metrics
Triggers:
General quality concerns
Concerns about customer complaints
Customers’ requirements for increased information
Knowledge of competitors’ activities in the metrics area
Process improvement programmes
Cost cutting
Management pressure for information
A consultancy recommendation
New awareness of the possibility of measurement, perhaps as a result of someone attending a conference
Source: Forrester Research Inc. (Barnett et al., 2005)
Developing a Measurement Culture
Plan your measurement activities before you start the work.
Address fear of data being used negatively: ensure confidentiality of the data.
Educate the team: provide appropriate training and explain why.
Share the data within the team.
Developing the habit
Measurements do not have to be time consuming.
Use commercial tools, tracking forms, spreadsheets and charts.
Start small, explain why, and share results.
Use the information extracted to improve development work.
Starting points (1)
Individual developers:
Work effort distribution
Estimation vs task durations and efforts
Code covered by unit testing
Number of defects found in unit testing
Code and design complexity
Starting points (2)
Project teams:
Product size
Work effort distribution
Requirements stats (how many, implemented, verified)
Estimation vs actual
Defect counts found when integrating systems
Defect counts found by inspections
Requirements stability
Number of tasks planned and completed
What to avoid
Lack of management commitment
Measuring too much, too soon
Measuring too little, too late
Measuring the wrong things
Imprecise metrics definitions
What to avoid
Using metrics data to evaluate individuals
Using metrics to motivate
Collecting data that is not used
Lack of communication and training
Misinterpreting metrics data
Summary
Overview of V&V and testing.
What software metrics are concerned with, and what we can measure.
Approaches, especially GQM.
Good and bad practice in using metrics.
Reading
Pressman (2000): Chapter 4 on project and process metrics; sections on estimating and function points in Chapter 5. (Also some good material on testing.) Equivalents in other editions will do.
Good article on GQM: “Software Acquisition GOLD PRACTICE™ Goal-Question-Metric (GQM) Approach” at http://www.goldpractices.com/practices/gqm/index.php
Bennett, McRobb & Farmer (2005), Chapter 19, on implementation (testing), or the equivalent in earlier editions.
References
Bennett, S., McRobb, S. & Farmer, R. (2005), Object-Oriented Analysis and Design using UML, 3rd edn, McGraw-Hill.
Pressman, R.S. (2000), Software Engineering: A Practitioner’s Approach, 5th edn (European), McGraw-Hill.
Barnett, L., Visitacion, M., Gilpin, M. & Symons, C. (2005), “Metrics For Application Development: Selecting A Balanced Set Is No Easy Task”, available at http://www.forrester.com/Research/ [viewed 1/12/05].