Software Quality and Metrics


Transcript of Software Quality and Metrics

Page 1: Software Quality and Metrics

Software Quality and Metrics

Page 2: Software Quality and Metrics

Software Quality

• What is Quality?

• Quality Characteristics

• Cost evolution: Software vs Hardware

• How to solve it?

Page 3: Software Quality and Metrics

Software Quality

• What it is not!
  – Zero errors
  – A good GUI

• What it is!
  – Meeting the user's needs

Page 4: Software Quality and Metrics

ISO Definition of Quality

• ‘The totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs’

[Diagram: a Software Product has characteristics and should fulfill requirements; the characteristics are related to the requirements (they contribute to their fulfillment).]

Page 5: Software Quality and Metrics

ISO 9126 Characteristics

• The objective of this standard is to provide a framework for the evaluation of software quality. ISO/IEC 9126 does not provide requirements for software, but it defines a quality model which is applicable to every kind of software. It defines six product quality characteristics and in an annex provides a suggestion of quality subcharacteristics.

Page 6: Software Quality and Metrics

ISO 9126 Characteristics

• Functionality is the set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.

• Reliability is the set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.

• Usability is the set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.

• Efficiency is the set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.

• Maintainability is the set of attributes that bear on the effort needed to make specified modifications.

• Portability is the set of attributes that bear on the ability of software to be transferred from one environment to another.

Page 7: Software Quality and Metrics

Sub-characteristics (Characteristic / Subcharacteristic / Definition)

Functionality
  – Suitability: Attributes of software that bear on the presence and appropriateness of a set of functions for specified tasks.
  – Accurateness: Attributes of software that bear on the provision of right or agreed results or effects.
  – Interoperability: Attributes of software that bear on its ability to interact with specified systems.
  – Compliance: Attributes of software that make the software adhere to application-related standards or conventions or regulations in laws and similar prescriptions.
  – Security: Attributes of software that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs or data.

Reliability
  – Maturity: Attributes of software that bear on the frequency of failure by faults in the software.
  – Fault tolerance: Attributes of software that bear on its ability to maintain a specified level of performance in case of software faults or of infringement of its specified interface.
  – Recoverability: Attributes of software that bear on the capability to re-establish its level of performance and recover the data directly affected in case of a failure, and on the time and effort needed for it.

Usability
  – Understandability: Attributes of software that bear on the users' effort for recognizing the logical concept and its applicability.
  – Learnability: Attributes of software that bear on the users' effort for learning its application.
  – Operability: Attributes of software that bear on the users' effort for operation and operation control.

Page 8: Software Quality and Metrics

Sub-characteristics (Characteristic / Subcharacteristic / Definition)

Efficiency
  – Time behaviour: Attributes of software that bear on response and processing times and on throughput rates in performing its function.
  – Resource behaviour: Attributes of software that bear on the amount of resources used and the duration of such use in performing its function.

Maintainability
  – Analyzability: Attributes of software that bear on the effort needed for diagnosis of deficiencies or causes of failures, or for identification of parts to be modified.
  – Changeability: Attributes of software that bear on the effort needed for modification, fault removal or environmental change.
  – Stability: Attributes of software that bear on the risk of unexpected effects of modifications.
  – Testability: Attributes of software that bear on the effort needed for validating the modified software.

Portability
  – Adaptability: Attributes of software that bear on the opportunity for its adaptation to different specified environments without applying other actions or means than those provided for this purpose for the software considered.
  – Installability: Attributes of software that bear on the effort needed to install the software in a specified environment.
  – Conformance: Attributes of software that make the software adhere to standards or conventions relating to portability.
  – Replaceability: Attributes of software that bear on the opportunity and effort of using it in place of specified other software in the environment of that software.

Page 9: Software Quality and Metrics

Quality model

• The relative weight of the characteristics depends on the type of software
  – Prototype
  – Critical (e.g., nuclear power plant)
  – Long-lived (e.g., telecom switch)
  – …

Page 10: Software Quality and Metrics

Quality model

• What should be the ranked list of Quality characteristics (most important first) for software used in a space shuttle?

• Give them a weight.

Page 11: Software Quality and Metrics

HERMES SHUTTLE

• QUALITY CHARACTERISTICS IN DECREASING ORDER:
  – RELIABILITY / CHANGEABILITY
  – ROBUSTNESS / FLEXIBILITY
  – AVAILABILITY / PORTABILITY
  – CONFORMANCE / REUSABILITY
  – INTEGRITY / ADAPTABILITY
  – USABILITY
  – ERGONOMICS
  – EFFICIENCY
  – TESTABILITY

Page 12: Software Quality and Metrics

Software Metrics

• The Quality needs to be quantified

[Diagram: Maintainability decomposed into Testability, Stability, Changeability and Analyzability; Testability is measured by cyclomatic complexity, maximum number of interleaving and number of inputs/outputs.]

Page 13: Software Quality and Metrics

Metrics versus measurement

• A measure f is a function f: A -> B such that for each object a belonging to A there exists a formal value (the measure's value) f(a) belonging to B

• A function f is called a metric iff it has the three following properties:
  – f(x, y) = 0 iff x = y
  – f(x, y) = f(y, x) for each x and y
  – f(x, z) <= f(x, y) + f(y, z) for each x, y, z
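As an illustration (not part of the original slides), a minimal Python sketch that checks the three properties on a finite sample, using the absolute difference between numbers as the candidate metric:

```python
from itertools import product

def d(x, y):
    """Candidate metric: absolute difference between two numbers."""
    return abs(x - y)

def is_metric(d, points):
    """Check the three metric properties on a finite sample of points."""
    for x, y, z in product(points, repeat=3):
        if (d(x, y) == 0) != (x == y):       # identity: d(x, y) = 0 iff x = y
            return False
        if d(x, y) != d(y, x):               # symmetry
            return False
        if d(x, z) > d(x, y) + d(y, z):      # triangle inequality
            return False
    return True

print(is_metric(d, [0, 1, 2.5, -3, 7]))  # True: absolute difference is a metric
```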

Page 14: Software Quality and Metrics

Metrics versus measurement

• A measure should satisfy 4 criteria:
  – Being related to well understood and significant attributes
  – Being supported by formal models or abstractions that capture the attributes
  – Respecting the relations and the existing order between objects (determined by the attributes of the models)
  – Being mapped to the numeric system in order to maintain the order relationship

• Remark: if it does not respect the first three criteria, it is a metric

Page 15: Software Quality and Metrics

Type of measurement

nominal

ordinal

interval

ratio

absolute

Page 16: Software Quality and Metrics

Type of measurement (examples)

• Nominal: number on the back of a football player

• Ordinal: Richter scale

• Interval: f(x) = ax + b. Temperature is of interval type

• Ratio: f(x) = ax. A length is a ratio

• Absolute: f(x) = x. Number of people working in a company

Page 17: Software Quality and Metrics

Building of measures

• Non-terminal measures
  – Raw value => normative value
    Ex: v(G) < 15 => v(G) = 2
        15 < v(G) <= 30 => v(G) = 1
        v(G) > 30 => v(G) = 0

• Terminal measures
  – Mes_1 = (mes_2 + 4*mes_3)/5
  – Qualitative expression:
    [0 ; 0.5] => Mes_1 = 0 NOK
    ]0.5 ; 1.5] => Mes_1 = 1 Average
    ]1.5 ; 2] => Mes_1 = 2 OK
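A minimal Python sketch of this scheme (hypothetical helper names), mapping a raw v(G) to its normative value and combining two normative values into the terminal measure Mes_1:

```python
def normative_vg(vg):
    """Map a raw cyclomatic complexity v(G) to a normative score 0/1/2."""
    if vg < 15:
        return 2      # OK
    elif vg <= 30:
        return 1      # average
    else:
        return 0      # NOK

def terminal_measure(mes_2, mes_3):
    """Terminal measure: Mes_1 = (mes_2 + 4*mes_3) / 5, as on the slide."""
    return (mes_2 + 4 * mes_3) / 5

def qualitative(mes_1):
    """Qualitative expression of Mes_1 on the ranges given on the slide."""
    if mes_1 <= 0.5:
        return "NOK"
    elif mes_1 <= 1.5:
        return "Average"
    else:
        return "OK"

mes_1 = terminal_measure(normative_vg(12), normative_vg(27))
print(mes_1, qualitative(mes_1))  # 1.2 Average
```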

Page 18: Software Quality and Metrics

Software Metrics

• Different levels of measurement
  – Architecture
    • Call Graph
  – Module
    • Control Graph
  – Textual

Page 19: Software Quality and Metrics

Software Metrics

• Architecture
  – Number of paths
  – Hierarchical complexity
  – Structural complexity
  – Component accessibility
  – Path testability
  – System testability
  – System entropy

Page 20: Software Quality and Metrics

Software Metrics

• Structural complexity
  – Cyclomatic number (see the sketch below)
  – Max number of degrees
  – Max number of interleaving

• Textual
  – Halstead Software Science measures

• Programming Norms
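A sketch of the cyclomatic number, assuming the usual definition v(G) = E - N + 2P on the control graph (the example graph is hypothetical, not taken from the slides):

```python
def cyclomatic_number(edges, nodes=None, parts=1):
    """v(G) = E - N + 2P for a control graph given as a list of (src, dst) edges."""
    if nodes is None:
        nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * parts

# An if/else inside a loop: 7 edges, 6 nodes, one connected component -> v(G) = 3
edges = [("start", "loop"), ("loop", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"), ("loop", "end")]
print(cyclomatic_number(edges))  # 3
```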

Page 21: Software Quality and Metrics

Object-Oriented Software Metrics

• Depth of inheritance tree (DIT)
  – DIT is defined as the maximum number of steps from the class node to the root of the inheritance tree

• What it measures: Since in OO design the application domain is modelled as a hierarchy of classes represented as a tree, the DIT metric is a measure of how many ancestor classes can affect a particular class. The depth of inheritance tree metric is an indication of the potential for reuse.

Should be between 0 and 4
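A minimal sketch (not from the slides) of how DIT could be computed in Python by walking a class's inheritance chain; here the implicit object base is counted as the root:

```python
def depth_of_inheritance(cls, root=object):
    """DIT: maximum number of steps from the class to the root of the inheritance tree."""
    if cls is root:
        return 0
    return 1 + max(depth_of_inheritance(base, root) for base in cls.__bases__)

class Vehicle: pass
class Car(Vehicle): pass
class SportsCar(Car): pass

print(depth_of_inheritance(SportsCar))  # 3 (SportsCar -> Car -> Vehicle -> object)
```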

Page 22: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of children (NOC)
  – NOC is defined as the number of immediate descendants of a class in the hierarchy. (1..4)

• What it measures: NOC is an indicator of the potential influence a class can have on the design and on the system
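A tiny sketch using Python's built-in __subclasses__(), which returns the immediate descendants of a class (classes here are hypothetical):

```python
class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

# NOC: number of immediate descendants of a class
print(len(Shape.__subclasses__()))  # 2
```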

Page 23: Software Quality and Metrics

Object-Oriented Software Metrics

• Coupling between object classes (CBO)
  – This is a count of the number of other classes to which a given class is coupled.

• What it measures: Also known as message delivery channels, this metric counts the number of associations in a class and attributes whose parameters are of the class type. Messages can only be sent when an object of a class holds a reference to another object of a class

Should be between 1 and 4

Page 24: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of the associations linked to a class
  – The number of associations, including aggregations, is counted.

• What it measures: This metric is useful for estimating the static relationships between classes.

Page 25: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of the elements in the transitive closure of the subclasses of a class

• What it measures: This is potentially useful for predicting the classes whose changes might affect this class.

Page 26: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of the superclasses of a class
  – This counts the direct parents of a class

• What it measures: In a single inheritance implementation like Java, the value of this metric is either 0 or 1, whereas under a multiple inheritance scheme it is greater than or equal to 0.

Page 27: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of use cases in a model

• What it measures: A use case represents a coherent unit of functionality provided by a system, a subsystem, or a class.

Page 28: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of system classes associated with a use case (NSCU)
  – This metric counts the number of classes whose objects participate in the scenario of a use case.

• What it measures: NSCU is good for estimating the impact of a requirement change onto the system. Any changes of use cases spread to classes and the interactions of their objects, and vice versa

Page 29: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of messages associated with a use case
  – This metric counts the number of messages comprising the scenario of a use case.

• What it measures: A use case is further refined through its scenario. In UML, there are two scenario diagrams, i.e. the sequence diagram and the communication diagram. These two kinds of scenario diagrams are completely isomorphic, meaning one kind of diagram can be automatically replaced with the other kind without loss of the information contained in it. This metric is useful for tracing requirements into design-level elements.

Page 30: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of actors in a model
  – This metric is the number of actors in a model, where an actor is a special class whose stereotype is “Actor”.

• What it measures: This metric computes the number of actors in a model.

Page 31: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of actors associated with a use case
  – This metric is the number of actors that are associated with a use case.

• What it measures: Normal system classes are not counted for this metric because this metric concerns the interactions between a system and its stakeholders.

Page 32: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of classes in a model

• What it measures: A class in a model is an instance of the metaclass “class”. This metric is comparable to the traditional LOC (lines of code) for estimating the size of a system. This metric can be used to compare sizes of systems.

Page 33: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of inheritance relations in a model

• What it measures: This metric counts the number of generalisation relationships between classes existing in a model.

Page 34: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of packages in a model

• What it measures: Package is a way of managing closely related modelling elements together. Also by using packages, naming conflicts can be avoided.

Page 35: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of objects in a model

• What it measures: In a similar manner that a class is an instance of the metaclass “Class”, an object is an instance of a class.

Page 36: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of associations in a model
  – This metric measures the number of connections between classes

• What it measures: An association is a connection, or a link, between classes. This metric is useful for estimating the scale of relationships between classes.

Page 37: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of aggregations in a model

• What it measures: An aggregation is a special form of association that specifies a whole-part relationship between the aggregate (whole) and a component part

Page 38: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of messages in a model

• What it measures: A message is an instance of the metaclass “Message”. Messages are exchanged between objects manifesting various interactions

Page 39: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of methods (NOM)
  – NOM is defined as the total number of methods in a class, including all public, private and protected methods.

• What it measures: It has been suggested that this metric is a useful indication of the classes that may be trying to do too much work themselves; i.e., they provide too much functionality. NOM cannot be viewed as an indicator of the effort to develop a class, since a class containing a single large method may take as long to develop as a class containing a large number of small methods.

Page 40: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of methods added (NMA)

• What it measures: The number of operations added plays a role in the specialization of the class and must be maintained in a proportion which continues to justify the inheritance (also known as generalization).

Too many added operations signify too big a difference from the parent class; the inheritance would then make less sense.

The more added operations there are, the more the class must be redeveloped, and the less the inheritance is justified.

Page 41: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of Methods Inherited (NMI)

• What it measures: For a class, this metric gives the percentage of non-redefined operations with regard to the number of operations inherited.

The percentage of inherited operations should be high. This is the opposite of the Number of Methods Overridden (NMO) threshold. A low percentage of inherited operations indicates poor sub-classing.

The maximum of 100% is ideal, but is never attained, given that we often need to adapt inherited services.

Page 42: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of Methods Overridden (NMO)

• What it measures: The number of redefined operations plays a role in the specialization of the class and must be maintained in a proportion that continues to justify inheritance.

Too many redefined operations imply too big a difference from the parent class, and inheritance then makes less sense.

For a class, this is the count of the number of inherited operations that are redefined by the class.

A class which inherits services must use them with a minimum of modifications. If this is not the case, the inheritance loses all meaning and becomes a source of confusion.

Page 43: Software Quality and Metrics

Object-Oriented Software Metrics

• Response for a class (RFC)
  – This is the size of the set of methods that can potentially be executed in response to a message received by an object.

Example (class A):
  A::m1() calls B::m1(), B::m2()
  A::m2() calls A::m1(), C::m1()
  A::m3()
  RFC = 7
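A small sketch of this count (hypothetical data layout), following the counting used in the example above: the class's own methods plus the methods each of them invokes:

```python
def response_for_class(own_methods, calls):
    """RFC, counted as in the slide's example: the class's own methods
    plus the methods invoked by each of them."""
    return len(own_methods) + sum(len(calls.get(m, [])) for m in own_methods)

# Class A from the example: m1 calls B::m1, B::m2; m2 calls A::m1, C::m1; m3 calls nothing
own_methods = ["A::m1", "A::m2", "A::m3"]
calls = {
    "A::m1": ["B::m1", "B::m2"],
    "A::m2": ["A::m1", "C::m1"],
}
print(response_for_class(own_methods, calls))  # 7
```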

Page 44: Software Quality and Metrics

Object-Oriented Software Metrics

• Lack of cohesion in methods (LCOM)
  – This is a measure of the number of disjoint sets formed by the intersection of the sets of instance variables used by the methods.

Page 45: Software Quality and Metrics

Object-Oriented Software Metrics

• Lack of cohesion in methods (another definition)

The LCOM value for a class C is defined as LCOM(C) = 1 - |E(C)| / (|V(C)| x |M(C)|), where V(C) is the set of instance variables, M(C) is the set of instance methods, and E(C) is the set of pairs (v, m) such that instance variable v in V(C) is used by method m in M(C).
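A minimal sketch of this LCOM variant, with a hypothetical class description mapping each method to the instance variables it uses:

```python
def lcom(instance_variables, method_uses):
    """LCOM(C) = 1 - |E(C)| / (|V(C)| * |M(C)|); method_uses maps each
    instance method to the set of instance variables it uses."""
    V = set(instance_variables)
    E = {(v, m) for m, used in method_uses.items() for v in used if v in V}
    return 1 - len(E) / (len(V) * len(method_uses))

# Hypothetical class with 2 instance variables and 3 methods
print(lcom(["x", "y"], {"get_x": {"x"}, "get_y": {"y"}, "reset": {"x", "y"}}))
# 1 - 4/6 = 0.333...
```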


Page 46: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of attributes (NOA)
  – NOA is defined as the total number of attributes in a class.

• What it measures: The ratio of private and protected attributes to total number of attributes indicates the effort required by that class in providing information to other classes. Private and protected attributes are therefore viewed merely as data to service the methods in the class

Page 47: Software Quality and Metrics

Object-Oriented Software Metrics

• Number of public attributes
• Number of protected attributes
• Number of private attributes
• Number of class attributes
• Number of public methods
• Number of protected methods
• Number of private methods
• Number of class methods

Page 48: Software Quality and Metrics

Software Analysis Process and Report

• Architectural Level

• Structural Level

• Report

Page 49: Software Quality and Metrics

Architectural Level

• General view of the call graph
  – Depth, width, hierarchy, level jumps

• Singularities
  – Isolated components, roots, strongly connected components, recursivity

Page 50: Software Quality and Metrics

Call Graph examination (Symptom / Diagnosis / Decision)

• Symptom: Large number of calls. Diagnosis: no hierarchy in tasks. Decision: split the component.

• Symptom: Large number of recursive calls. Diagnosis: very high risk, problem of ending the recursion. Decision: careful reading of the code; intensive testing of the components.

• Symptom: Heavily used components. Diagnosis: essential resource of the architecture => high risk. Decision: careful reading of the code; intensive testing of the components.

• Symptom: Isolated component. Diagnosis: does not belong to this version. Decision: delete the component.

• Symptom: Root for a very limited number of paths in the graph. Diagnosis: auxiliary entry point (e.g., debugging). Decision: delete this root (it does not feature a normal case).

Page 51: Software Quality and Metrics

Call Graph

Page 52: Software Quality and Metrics

Call Graph

Page 53: Software Quality and Metrics

Call Graph

Page 54: Software Quality and Metrics

Call Graph

Page 55: Software Quality and Metrics

Call Graph

Page 56: Software Quality and Metrics

Call Graph (analyzed components)

Page 57: Software Quality and Metrics

Call Graph (root = ana_appel)

Page 58: Software Quality and Metrics

Call Graph (root = display)

Page 59: Software Quality and Metrics

Control Graph examination

• General view of the control graph
  – Sequences, nesting, pending nodes

• Singularities
  – Isolated code, graphical similarities, internal calls

Page 60: Software Quality and Metrics

Control graph examination (Symptom / Diagnosis / Decision)

• Symptom: Destructuring back call. Diagnosis: bad maintainability, risk of infinite loop, bad understandability. Decision: go back to the algorithm and rewrite the component.

• Symptom: Non-reachable code. Diagnosis: many changes done, no test, bad readability. Decision: delete the dead code.

• Symptom: Illogical structure, many auxiliary exits. Diagnosis: poor programming, bad readability. Decision: delete all exits but one => homogenisation.

• Symptom: Several complex structures in sequence. Diagnosis: very high internal complexity, hard to test. Decision: split the component into several ones.

• Symptom: Several nested complex sequences. Diagnosis: lack of modularity. Decision: organize these structures into components and create a new level in the call graph.

• Symptom: Graphical similarities. Diagnosis: non-factorized code. Decision: factorize the code, create a new component.

• Symptom: Auxiliary entry point. Diagnosis: lack of modularity, poor maintainability. Decision: create a new component, delete the entry point.

Page 61: Software Quality and Metrics

Control Graph

Page 62: Software Quality and Metrics

Control Graph

Page 63: Software Quality and Metrics

Control Graph

Page 64: Software Quality and Metrics

Control Graph

Page 65: Software Quality and Metrics

Control Graph

Page 66: Software Quality and Metrics

Control Graph

Page 67: Software Quality and Metrics

Control Graph

Page 68: Software Quality and Metrics

Control Graph

Page 69: Software Quality and Metrics

Control Graph

Page 70: Software Quality and Metrics

Control Graph

Page 71: Software Quality and Metrics

Control Graph

Page 72: Software Quality and Metrics

Control Graph

Page 73: Software Quality and Metrics

Control Graph

Page 74: Software Quality and Metrics

Quality Report

• Quality model
  – Quality characteristics for the project (from the most to the least important)
  – Sub-characteristics
  – Measures related to these characteristics and sub-characteristics (upper and lower bounds)

• Global quality results
  – Kiviat diagrams, histograms, …

• Detailed analysis
  – Architecture
  – Components (accepted, rejected, …)

Page 75: Software Quality and Metrics

Example

[Diagram: Maintainability decomposed into Testability, Stability, Changeability and Analyzability; Testability is measured by cyclomatic complexity, maximum number of interleaving and number of inputs/outputs.]

Page 76: Software Quality and Metrics

Testability

• Testability
  – Cyclomatic complexity v(G)
    • v(G) <= 15 => OK (2)
    • v(G) > 15 => NOK (0)
  – Max_nb_interleaving
    • Max_level <= 4 => OK (2)
    • Max_level > 4 => NOK (0)
  – Nb_input/output
    • Nb_IO = 2 => OK (2)
    • Nb_IO > 2 => NOK (0)

Page 77: Software Quality and Metrics

Testability = (4*V(G) + 4*Max_level + 2*Max_IO) / 10

V(G)  Max_Level  Max_IO  Testability  Diagnosis
OK    OK         OK      2            accepted
OK    OK         NOK     1.6          restructure
OK    NOK        OK      1.2          to split
OK    NOK        NOK     0.8          to split
NOK   OK         OK      1.2          to split
NOK   OK         NOK     0.8          to split
NOK   NOK        OK      0.4          to rewrite
NOK   NOK        NOK     0            to rewrite
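A small sketch (hypothetical helper names) combining the normative scores of the previous slide with this weighting; the two test cases correspond to the examples on the next two slides:

```python
def score(ok):
    """Normative score: OK = 2, NOK = 0."""
    return 2 if ok else 0

def testability(vg, max_level, nb_io):
    """Testability = (4*V(G) + 4*Max_level + 2*Max_IO) / 10, using normative scores."""
    s_vg = score(vg <= 15)
    s_level = score(max_level <= 4)
    s_io = score(nb_io == 2)
    return (4 * s_vg + 4 * s_level + 2 * s_io) / 10

def diagnosis(value):
    """Diagnosis associated with each possible testability value."""
    return {2.0: "accepted", 1.6: "restructure", 1.2: "to split",
            0.8: "to split", 0.4: "to rewrite", 0.0: "to rewrite"}[value]

print(testability(12, 3, 2), diagnosis(testability(12, 3, 2)))  # 2.0 accepted
print(testability(17, 5, 2), diagnosis(testability(17, 5, 2)))  # 0.4 to rewrite
```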

Page 78: Software Quality and Metrics

Testability

[Kiviat diagram: axes V(G), Max_level and Max_IO with their upper and lower bounds; measured values V(G) = 12, Max_Level = 3, Max_IO = 2, all within bounds.]

Page 79: Software Quality and Metrics

Testability

[Kiviat diagram: same axes and bounds; measured values V(G) = 17, Max_Level = 5, Max_IO = 2, with V(G) and Max_Level outside their bounds.]

Page 80: Software Quality and Metrics

Maintainability

[Diagram: Maintainability decomposed into Testability, Changeability, Analysability and Stability; each sub-characteristic is assessed from the base measures V(G), Max_level, Max_IO, fcom and Nb_Ins.]

Page 81: Software Quality and Metrics

Example in OO Programming

• Quality Model

• Measures and bounds

Page 82: Software Quality and Metrics

Quality Model

• Reliability
  – WMPC
  – CBO
  – FD
  – RFC

• Efficiency
  – CBO
  – AC
  – LCOM

• Maintainability
  – ASC
  – AR
  – DIT
  – ComR
  – AC
  – RFC

• Portability
  – NMCI
  – DIT
  – AR
  – EH
  – ER
  – PR

Page 83: Software Quality and Metrics

Class Level Measures

• ComR (Comment Rate)
• ER (Encapsulation Rate = number of private variables / total number of variables)
• NCI (Number of Classes)
• NMCI (Average Number of Methods per Class)
• WMPC (Weighted Methods Per Class)
• LCOM (Lack of Cohesion in Methods)
• DIT (Depth of Inheritance Tree)

Page 84: Software Quality and Metrics

Inheritance Measures

• EH (Entropy of inheritance Tree)
• AR (Abstraction Rate = number of abstract classes / total number of classes)
• PR (Polymorphism Rate = number of virtual methods / total number of methods)

Page 85: Software Quality and Metrics

Object interactions Complexity measures

• AC (Association Complexity = R - NCI + 2P, with R the number of interactions between objects and P the number of disconnected parts of the graph)

• FD (Flow Density = sum of FI * p(start node exists), with FI the number of invoked methods between two objects and the start node being the object invoking the methods)
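A sketch of the AC formula above, computed from a hypothetical object-interaction graph (a small union-find is used to count the disconnected parts P):

```python
def association_complexity(interactions, classes):
    """AC = R - NCI + 2P: R interactions, NCI classes, P disconnected parts of the graph."""
    parent = {c: c for c in classes}

    def find(c):
        # union-find with path halving
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for a, b in interactions:
        parent[find(a)] = find(b)
    parts = len({find(c) for c in classes})
    return len(interactions) - len(classes) + 2 * parts

classes = ["A", "B", "C", "D"]
interactions = [("A", "B"), ("B", "C"), ("C", "A")]   # D is isolated
print(association_complexity(interactions, classes))  # 3 - 4 + 2*2 = 3
```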

Page 86: Software Quality and Metrics

Object interactions Complexity measures

• CBO (Coupling between Objects)

• RFC (Response for a class)

Page 87: Software Quality and Metrics

Measures

LCOM Lack of Cohesion in Methods <3

CBO Coupling between objects <10

EH Entropy of inheritance tree <2

AR Abstraction Rate >0.2

PR Polymorphism Rate >0.2

AC Association Complexity <15

WMPC Weighted Methods Per Class <15

RFC Response for a Class <10

FD Flow Density <100

ASC Average Size per Class <24

DIT Depth of inheritance tree <5

ComR Comment Rate >0.2

NCI Number of classes <20

ER Encapsulation Rate >0.9
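As a closing sketch (not part of the slides), the thresholds above expressed as a small checker that reports the measures violating their bound; the measured values are hypothetical:

```python
# Thresholds from the slide, as (comparison, bound) pairs
THRESHOLDS = {
    "LCOM": ("<", 3), "CBO": ("<", 10), "EH": ("<", 2), "AR": (">", 0.2),
    "PR": (">", 0.2), "AC": ("<", 15), "WMPC": ("<", 15), "RFC": ("<", 10),
    "FD": ("<", 100), "ASC": ("<", 24), "DIT": ("<", 5), "ComR": (">", 0.2),
    "NCI": ("<", 20), "ER": (">", 0.9),
}

def check(measured):
    """Return the measures that violate their threshold."""
    violations = {}
    for name, value in measured.items():
        op, bound = THRESHOLDS[name]
        ok = value < bound if op == "<" else value > bound
        if not ok:
            violations[name] = (value, op, bound)
    return violations

# Hypothetical measured values for one system
print(check({"LCOM": 2, "CBO": 12, "DIT": 4, "ComR": 0.15}))
# {'CBO': (12, '<', 10), 'ComR': (0.15, '>', 0.2)}
```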