CHAPTER 4
IDENTIFYING SOFTWARE QUALITY
FACTORS, SUB-FACTORS AND
METRICS FOR CBSD
Quality begins on the inside ... and then works its way out.
~ Bob Moawad
The focus of most existing CBSD technologies is to provide a mechanism that supports the deployment of components in a system. Analysis of the available studies leads to the conclusion that the quality attributes which shape component quality during development have not been of primary interest and have not been addressed explicitly. Since these attributes largely determine the quality of a component during development, it becomes important to identify the crucial ones. The full advantage of the component-based approach is achieved only when the functional parts are easy to use and the quality of a component can be predicted accurately during development. However, the analysis of existing research on software quality in CBSD reveals that, despite the definition of numerous quality attributes, a proper identification of quality factors and sub-factors, and a mapping of design properties to quality attributes from the developer's perspective, is still lacking. In this chapter, the quality factors and sub-factors that play a vital role in assessing software quality are identified.
Moreover, object-oriented software metrics that can be quantified to measure software quality are identified. A critical step in understanding the nature of software quality during component development is to characterise the strength of the various quality attributes. To achieve this, common and significant quality factors were identified from the literature survey; they are discussed below.
4.1 Identifying Software Quality Factors
It is difficult, and in some cases impossible, to measure quality factors directly. In fact, most of the metrics defined by the McCall Quality Model can be measured only subjectively. Such metrics can be used in the form of a checklist, with a grade assigned to each item to measure quality [36].
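As a rough illustration of how such a graded checklist can be turned into a single figure, the sketch below combines subjective grades using a weighted average. The factor names, grades and weights are hypothetical, not taken from McCall's original checklists:

```python
# Hypothetical checklist-based quality scoring: each factor is graded
# subjectively on a 0-10 scale and combined using assumed weights.

def quality_score(grades, weights):
    """Weighted average of subjective checklist grades (0-10 each)."""
    total_weight = sum(weights[f] for f in grades)
    return sum(grades[f] * weights[f] for f in grades) / total_weight

# Illustrative grades a reviewer might assign to one component.
grades = {"efficiency": 7, "maintainability": 8, "portability": 5,
          "reliability": 9, "usability": 6}
# Assumed weights reflecting each factor's importance to the project.
weights = {"efficiency": 2, "maintainability": 3, "portability": 1,
           "reliability": 3, "usability": 2}

print(round(quality_score(grades, weights), 2))  # -> 7.45
```

In practice the grades would come from reviewers filling in the checklist, and the weights from the priorities of the project at hand.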
Further, both Cavano and McCall explained the concept by citing situations where judgement is subjective, as in daily life: the taste of food, sporting events (gymnastics), talent contests (dance, singing) and so on. A similar situation exists for software quality. To overcome this problem, a precise definition of software quality is required, along with quantitative measurements of software quality for objective analysis. A popular phrase attributed to Louis Pasteur sums it up: 'A science is as mature as its measurement tools' [34]. Another popular phrase, 'Measure what is measurable, and make measurable what is not so', attributed to Galileo Galilei, the 17th-century Italian physicist, mathematician and astronomer often called the 'Father of Modern Science', clearly signifies the need for metrics. The quality factors adopted by various quality models are as follows:
4.1.1 Efficiency
McCall et al. [117] defined efficiency as 'the volume of code or computer resources (e.g. time or external storage) needed for a program so that it can fulfil its function'. The definition is concerned with the efficient use of computer code to perform processes and the efficient use of storage resources [57].
While developing software, a suitable programming language should be selected, along with an advanced operating system, as this increases software efficiency, improves system performance and supports background operations. While designing the software, techniques to minimise data redundancy and algorithms to optimise processing time should be adopted, for which cohesion, coupling and normalisation should be considered. Good programming techniques must be applied, such as top-down design; sequence, selection and iteration constructs; local variables within procedures; disciplined parameter passing; meaningful variable names; and proper documentation. With the passage of time hardware resources have improved greatly, but expectations have risen as well: software requirements have increased manifold, and efficiency now applies to a much broader set of resources.
Hong Zhu [181] expresses efficiency as 'responsiveness of the system', i.e. the number of events processed in a fixed period of time, and divides the time spent into three parts. The first part is the communication between software components that are distinct but collaborate to execute an event. The structure of the software system determines the time needed by the communication process: the amount of communication, the means of communication and the interactions between components. A good design eradicates unnecessary communication between components and organises the communication that remains. For components built from procedures, communication should take place through procedure calls, as a matter of high-quality design; for components formed from processes, communication can be established through messages. Communication speed also depends on how components are distributed over the network: a complex and widely spread network will naturally consume more time for communication between components than communication within the same computer. Good communication certainly improves performance. Second, a proper balance between executing components in parallel and synchronising their executions improves performance levels; a better architectural design will certainly improve performance here. Third, the selection of the most appropriate algorithms, procedures, data structures and programming language enhances performance, so a good detailed design certainly improves it. The efficiency of the system is also affected, to some extent, by the human-computer interaction process: a user-friendly Graphical User Interface (GUI) marginally improves the perceived performance.
According to Perry's Model, the relationship of the quality factor 'efficiency' with most other factors is inverse. Perry's Model projects three types of relationship between quality factors. A direct relationship means that if a software system is good at one attribute, it should also be good at the other. A neutral relationship means that the two attributes are normally independent of each other. An inverse relationship means that if a software system is good at one attribute, it is not good at the other [129]:
a) Integrity vs. efficiency (inverse): To make the component more secure, additional code is required, which in turn requires longer runtime, hence less efficiency.
b) Usability vs. efficiency (inverse): A highly usable component will contain more code and data, hence be less efficient.
c) Correctness vs. efficiency (neutral): Correct code has no fixed relation with efficiency; it may still run slowly or quickly.
d) Reliability vs. efficiency (neutral): An accurate solution may or may not be efficient.
e) Maintainability vs. efficiency (inverse): Well-commented code is larger and well maintained but less efficient.
f) Testability vs. efficiency (inverse): Optimized code is efficient but not easy to test.
g) Flexibility vs. efficiency (inverse): Flexible code is changeable and adaptable to change, hence requires more code, resulting in less efficiency.
h) Portability vs. efficiency (inverse): To make a component suitable for various systems, more code is required, hence less efficiency.
i) Reusability vs. efficiency (inverse): For a component to be reusable, it has to be segmented into small, independent sections with clear boundaries; all this requires more code and results in less efficiency.
j) Interoperability vs. efficiency (inverse): To make a component able to converse with other machines, more code is required, hence less efficiency.
Efficiency is thus categorised in terms of resource utilisation and the time used for an activity.
4.1.2 Integrity
McCall et al. [117] define integrity as the 'extent to which illegal access to the programs and data of a product can be controlled'. Pressman [134] stresses the importance of integrity in view of cyber terrorists, hackers and the like. He describes the integrity attribute as a measure of the system's ability to withstand intentional and accidental attempts to attack its software and data. To measure integrity, two additional related attributes are used, namely threat and security. Threat is the probability that an attack of a specific type will occur within a given time. Security is the probability that an attack of a specific type
will be repelled. Pressman concluded that integrity can be expressed by the formula given in equation 4.1:

Integrity = Σ [1 - (threat × (1 - security))]        (4.1)

where threat is the probability that an attack of a specific type will occur within a given time, security is the probability that an attack of that type will be repelled, and the summation runs over each type of attack.
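A minimal sketch of the computation in equation 4.1, assuming a list of attack types each with an estimated threat and security probability (the figures below are purely illustrative):

```python
# Integrity = sum over attack types of (1 - threat * (1 - security)),
# following equation 4.1. The probabilities below are illustrative only.

def integrity(attacks):
    """attacks: list of (threat, security) probability pairs, one per attack type."""
    return sum(1 - threat * (1 - security) for threat, security in attacks)

# Hypothetical attack profile: (threat probability, security probability).
attacks = [
    (0.25, 0.95),  # e.g. virus infection: fairly likely but well defended
    (0.05, 0.70),  # e.g. insider misuse: rare, moderately defended
]
print(round(integrity(attacks), 4))  # -> 1.9725
```

Note that, because the terms are summed over attack types, the resulting figure grows with the number of attack types considered; each individual term stays between 0 and 1.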
According to Ronan [57], integrity involves applying controls that prevent inaccurate data from entering the system and prevent unauthorised changes to the software. French [59] highlighted that it should be ensured that all data is processed and preserved, that errors are detected, corrected and reprocessed, and that fraud is prevented and detected; he further suggested five different types of control, namely manual control, data preparation control, validation check control, batch control and file controls. Ghezzi et al. [64] support this view by specifying that data should be validated while being input or updated. A few mechanisms to ensure integrity are: validating input data types, imposing maximum/minimum range values, applying concurrent access control, plugging the lost-update problem, preventing deadlock, and using a read-lock and write-lock system. Data redundancy should be minimised and the integrity features of the Database Management System should be used. All validation checks at the data input stage must be mentioned in the requirement specification; similarly, safety specifications must be written into the requirement specification.
Perry's Model [129] analyses the quality factor 'integrity' in comparison with other factors in the following manner:
a) Integrity vs. reusability (inverse): Reusable software increases data security problems.
b) Integrity vs. usability (direct): A component which is suitable, adaptable and learnable will be secure, hence integrity is satisfied.
c) Integrity vs. interoperability (inverse): The ability to connect two different components makes a component vulnerable to security problems, as the handshake between two components may harm a good component.
d) Integrity vs. maintainability, testability, flexibility and portability (neutral): These factors do not affect integrity; a portable or non-portable component, for instance, may be equally secure. The same holds for the other factors.
4.1.3 Reliability
McCall et al. [117] define reliability as 'the extent to which it can fulfil its specific function'. Reliability is the ability of a product or component to continue to perform its intended role over a period of time under predefined conditions. Reliability is often measured in terms of the mean time between failures, the mean time to repair, the mean time to recover, the probability of failure and the availability of the solution. A dependable component is a reliable one.
Perry's Model [129] establishes the relationship of reliability with other factors as follows:
a) Reliability vs. usability, maintainability, testability and flexibility (direct): A well-maintained, usable, tested software component is more likely to be reliable.
b) Reliability vs. reusability (inverse): Frequent changes cause reliability to fall.
c) Reliability vs. efficiency, integrity, portability and interoperability (neutral): These factors generally do not affect reliability.
The reliability factor has gained popularity among software users and developers.
4.1.4 Usability
Usability is an attempt to quantify ease-of-use and can be
measured in terms of suitability, learnability and adaptability. McCall
et al. [117] defines usability, factor as The Cost/effort to learn and
handle a product'. Usability is concerned with software operations and
environment in today's world. The ease with which the software is
used, learned and mastered is given importance for software
operations. Further, screen design, feedbacks, error messages, system
performance, ability to reverse the user's last action etc. lead to
software quality determining phase for the user's view. Similarly, use
of input devices and output devices set the tone for environment. Even
lighting, space, humidity, noise, etc. are part of the desired
environment. In other words, a software component is usable if it
finds humans who use it easily. Usability is subjective in nature. User
80
Identifying Software Quality Factors, Sub-Factors And Metrics For CBSD
friendliness parameters change from one user to another user. A non
programmer may appreciate the use of menus, while a programmer
may like to type a textual command. Hence, the user interface is an
important criterion of user friendliness [64].
Perry's Model [129] compares usability with other factors:
a) Usability vs. maintainability, testability, flexibility and correctness (direct): A tested, well-maintained, highly flexible and accurate software component will be used more, so the relation is direct.
b) Usability vs. portability, reusability and interoperability (neutral): An easy-to-transfer component, or one able to communicate with other machines, may or may not affect how often the component is used, hence a neutral relation exists.
In simple words, usability decides whether the component is executed, used and appreciated by the user; the usability factor pertains to the liking of the user.
4.1.5 Correctness
Correctness factor implements the specified user's requirements.
McCall et al. defines it as 'the extent to which a program fulfils its
specification' [117]. Ghezzi et al. [64] believes that correctness is
achieved, if stated functional requirements are met. Further,
correctness can be improved by using standard proven algorithms or
libraries of standard modules. Correctness is established through
testing, program inspection, static program analyzers and verification
81
Identifying Software Quality Factors, Sub-Factors And Metrics For CBSD
mechanisms. Pressman [134] refers to correctness as the degree to
which software performs its required function.
Perry's Model [129] evaluates correctness against the other factors and finds that it has a direct relation with many of them:
a) Correctness vs. reliability, usability, maintainability, testability and flexibility (direct): All these factors help directly in making the software component more accurate.
4.1.6 Maintainability
Maintainability refers to the easiness of maintaining a software
system [181]. McCall et al. [117] defines Maintainability as The cost of
localizing and correcting errors'. Software maintenance includes the
modification of the software for correction of errors, called as
corrective maintenance. Besides that maintenance includes
modification of system due to environment change such as upgrade of
the system software, called as adaptive maintenance. Ghezzi et al. [64]
also believes that maintenance is perfective maintenance, it involves
changing the software to improve its quality by adding new functions,
improving the performance of the application, making it easier to use,
etc. Perfective maintenance could be initiated by the customer request
or due to an idea of improvement by the developer. Legacy software
could be another reason of maintainability. Legacy software is
software which already exists and is generally old, developed with
older technology, obsolete language and such software should either
be repaired or evolved. Repairable software allows fixing of defects
whereas evolvable ones allow changes that enable it to satisfy new
82
Identifying Software Quality Factors, Sub-Factors And Metrics For CBSD
requirements. Earlier maintenance was viewed as bug fixing but later
it is realized that it is more of enhancing the product with features.
There are no specific means of measuring maintainability. Pressman
[134] proposed a simple time oriented metric as 'Mean Time To
Change'. It is the time taken to analyze the change request, design an
appropriate modification, implement the change, test it and distribute
the change to all users. Maintainable programs have a lower Mean
Time To Change.
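Pressman's metric can be sketched as a simple average over completed change requests; the per-request figures below are invented purely for illustration:

```python
# Mean Time To Change (MTTC): average elapsed time from receiving a
# change request to distributing the tested change (Pressman's metric).
# The per-request hours below are invented for illustration.

def mean_time_to_change(change_hours):
    """change_hours: total hours (analysis + design + implementation
    + test + distribution) for each completed change request."""
    return sum(change_hours) / len(change_hours)

requests = [12.0, 30.5, 8.0, 21.5]  # hours per change request
print(mean_time_to_change(requests))  # -> 18.0; lower MTTC = more maintainable
```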
To ease the maintenance phase, good practices should be followed during development, such as the use of Data Flow Diagrams, Entity Relationship Diagrams, comment lines, documentation, meaningful names and readable code. Ronan describes maintainability as 'the non-operational costs associated with a product after a successful user acceptance test' [57].
Perry's Model [129] compares maintainability with other factors as follows:
a) Maintainability vs. correctness, usability, testability, flexibility, portability and reusability (direct): Maintained code is well structured, easy to use, tested, flexible, portable and easy to reuse.
b) Maintainability vs. efficiency (inverse): Compact and optimized code is not easy to maintain and test, and well-commented code is less efficient.
4.1.7 Testability
McCall et al. [117] believes that 'the cost of program testing for
the purpose of safeguarding is that the specific requirements are met'.
Testing is associated with quality factor as a major criterion of quality
[57]. Most of the researchers in the field of testing, have concluded
four stages of testing namely Item testing, Integration testing. System
testing and Acceptance testing. Item testing - tests components
individually. Integration testing-tests components when worked in
environment of modules. System testing-tests the software
(completely), to be sure that it is developed without errors. Acceptance
testing-the customer tests the software, before incorporating it into its
working. A number of testing strategies are employed for testing such
as functional (Black Box), structural (White Box) and residual defect
estimation. Techniques used to implement strategies are top down,
bottom up and stress testing [20].
Perry's Model [129] compares testability with other factors:
a) Testability vs. correctness, usability, maintainability and flexibility (direct): Tested software will be correct, likely to be used, easy to maintain and even reusable.
b) Testability vs. efficiency (inverse): Compact, optimized code is not easy to maintain and test, whereas well-commented code is less efficient.
4.1.8 Flexibility
McCall et al. [117] defines flexibility as 'the cost of product
modification'. Flexibility is adaptability. In other words it is being able
84
Identi^ing Software Quality Factors, Sub-Factors And Metrics For CBSD
to change or reconfigure the user interface to suit user's preferences.
The ability to meet the upcoming requirements of customer, without
much stress is flexible software [57]. Perry's Model projects flexibility
as a factor having most direct relations with other factors [129].
4.1.9 Interoperability
Interoperability is 'the cost of connecting two products with one another', as defined by McCall et al. [117]. Another way of expressing interoperability is as a component successfully interacting with other components [57]. A word processor that communicates with spreadsheets, a graphics module and much more would be deemed highly interoperable. In other words, it is the property of how easily a software system can be used with other software systems. It depends upon the interface and not on the design, algorithms or data structures. An open system allows different applications, written by different organizations, to interoperate.
Perry's Model projects an inverse relation of interoperability with efficiency and integrity [129].
4.1.10 Reusability
McCall et al. [117] define reusability as 'the cost of transferring a module or program to another application'. In simple words, it is the use of code more than once: a routine performing a job in one module can perform the same job wherever it is needed in another. Thus, instead of writing the code again, it can be reused by passing different parameters. Visual Basic, the C language and object-oriented development techniques support reusability. However, one issue with reusable code is that it has to be adaptable and generalized so that it can be reused in another application, and developing reusable code takes longer and costs more. At times, in order to stay within budget constraints and schedules, the need to develop reusable code is ignored. Another critical issue worth considering is who owns the copyright and ownership rights of the reusable code.
Hong Zhu [181] likewise holds that reusability depends on the generality of the components in a given application domain and the extent to which the components are parameterized and configurable. The architectural design determines how the functionality of the software system is decomposed into components and how they are interconnected. Carlo Ghezzi et al. [64] claim that reuse is increasing rapidly in the field of software. Perry's Model projects an inverse relation of reusability with efficiency, integrity and reliability, though reusability has a direct relation with portability, flexibility, testability and maintainability [129].
4.1.11 Portability
McCall et al. [117] define it as 'the cost of transferring a product from its hardware or operational environment to another'. Portability is defined by Ronan [57] 'as the strategy of writing software to run on one operating system or hardware configuration while being conscious of how it might be refined with minimum effort to run on other operating systems and hardware platforms'. Sommerville [160] relates portability to reusability applied across different computers and operating systems, with programming languages providing portability between systems. Hong Zhu [181] advises that, at the design level, due care should be given to the design of algorithms and data structures so that portability can be achieved; his interface-design conclusion is that keeping environment-specific features independent makes porting easy.
Carlo Ghezzi et al. [64] believe that portability today depends largely on the operating system. Perry's Model shows portability having an inverse relation with efficiency, because portable software contains more code and is thus less efficient, and a direct relation with maintainability and testability [129].
The quality factors have gained popularity and importance. It is evident from the study of various software quality models, viz. the McCall Quality Model, Boehm Quality Model, ISO 9126 Quality Model, Gillies Relational Model and Perry Quality Model, that the models propose broadly similar quality factors, and that the quality factors are related to each other. Some quality models give more weight to one factor, others to another. The quality factors that are part of all the quality models, are well weighted by all of them, and are critical for the evaluation of software quality are Efficiency, Maintainability, Portability, Reliability and Usability. These factors are the evaluators of software quality, but they are not directly quantifiable.
Selection of the quality factors from the list of available factors was based on the prevailing software quality models. Four software quality models, namely the McCall, Boehm, ISO 9126 and Gillies Relational models, were taken as the domain for identifying quality factors, as most of the studies incorporated these models as part of their research [44] [10] [83] [42] [147]. The quality attributes identified within the models from the survey of the literature are tabulated in Table 4.1.
TABLE 4.1: QUALITY FACTORS ADDRESSED IN THE QUALITY MODELS

Quality Factor     | McCall | Boehm | ISO 9126 | Gillies Relational
-------------------|--------|-------|----------|-------------------
Correctness        | X      | -     | -        | X
Efficiency         | X      | X     | X        | X
Flexibility        | -      | -     | -        | X
Human Engineering  | -      | X     | -        | -
Integrity          | X      | -     | -        | X
Interface facility | X      | -     | -        | X
Maintainability    | X      | X     | X        | X
Modifiability      | -      | X     | -        | -
Portability        | X      | X     | X        | X
Reliability        | X      | X     | X        | X
Reusability        | X      | -     | -        | -
Testability        | X      | X     | -        | -
Understandability  | -      | X     | -        | X
Usability          | X      | X     | X        | X
Similar quality factors in the various software quality models, namely the McCall quality model, the Boehm quality model, ISO 9126 and the Gillies Relational model, are listed, and quality factors versus quality models are tabulated in Table 4.1. An 'X' denotes that the quality factor in the corresponding row is proposed by the quality model in the corresponding column; e.g. the 'X' in the 'Human Engineering' row indicates that the Boehm quality model recommends Human Engineering as a quality factor. A '-' signifies that the quality factor under consideration is not a part of the corresponding quality model. It can be inferred that the common quality factors prevalent in all the mentioned quality models are Efficiency, Maintainability, Portability, Reliability and Usability. These have emerged as common quality factors in most of the well-known quality studies [143] [95] [19] [128] [9] and are generally considered the factors most related to the quality of the software. The remaining factors are indirectly integrated within the other models. In the past, software architects have devised several different models of software qualities; the selected quality models cover the complete taxonomy of software quality.
4.2 Identifying Software Quality Sub-factors
The software quality factors identified above are not directly quantifiable. Factors that are subjective in nature need to be measured in order to evaluate quality, so there is a need to break the quality factors into segments comprising sub-factors. Software quality sub-factors are the characteristics that define quality factors; each quality factor may be divided into independent characteristics. This sub-factor perspective of software quality is used in proposing the quality assurance framework. The sub-factors for the quality factors mentioned above, based on a number of studies, the opinions of past researchers and interpretation, are discussed below:
4.2.1 Efficiency
The efficiency factor refers to the product's ability to deliver sufficient performance using a reasonable amount of resources when the product is used in its specified environment. The identified sub-factors of efficiency are tabulated in Table 4.2, each with a description.
TABLE 4.2: EFFICIENCY SUB-FACTORS

S. No. | Sub-factor                  | Description
-------|-----------------------------|-------------------------------------------------------------
1      | Time Behaviour [184]        | Product's ability to provide appropriate response times for a given throughput.
2      | Resource Behaviour [184]    | Ability to use resources (memory, CPU, disk, network usage, etc.) optimally to complete the task.
3      | Efficiency Compliance       | Adherence to standards and regulations regarding efficiency issues in the specified environment.
4      | Reply time [94] [93]        | Ability to respond with output.
5      | Processing speed [94] [93]  | Rate at which data is converted into information.
6      | Execution efficiency [142]  | Run-time efficiency of the software.
7      | Hardware independence [142] | Degree to which the software is dependent on the underlying hardware.
8      | Robustness [115]            | Degree to which an executable work product continues to function properly under abnormal conditions or circumstances.
4.2.2 Maintainability
The maintainability quality factor refers to the product's ability to be changed, maintained and updated. Table 4.3 shows the identified sub-factors for the maintainability factor, each with a description.
TABLE 4.3: MAINTAINABILITY SUB-FACTORS

S. No. | Sub-factor                                        | Description
-------|---------------------------------------------------|-------------------------------------------------------------
1      | Analyzability [41]                                | The capability of the software product to be diagnosed for deficiencies or causes of failures, or for the parts to be modified to be identified.
2      | Changeability [113]                               | The capability of the software product to enable a specified modification to be implemented.
3      | Stability                                         | The capability of the software to minimize unexpected effects from modifications of the software.
4      | Testability [66]                                  | The capability of the software product to enable modified software to be validated.
5      | Correctability [142] [113] [43]                   | The ease with which minor defects can be corrected between major releases while the application or component is in use by its users.
6      | Extensibility [66]                                | The ease with which an application or component can be enhanced in the future to meet changing requirements.
7      | Reusability [93]                                  | The degree to which components of the product can be reused on another product or system.
8      | Modularity [43] [160]                             | The degree to which the product is built from separate components so that a change to one component has minimal impact on the other components of the product.
9      | Adaptiveness [142] [43] [113] [160] [105] [126]   | The ability of the product to accept a new environment, new hardware, a new operating system or new supporting software.
10     | Perfectiveness [142] [113] [43]                   | The ability to perform accurately under all circumstances.
11     | Preventiveness [142]                              | The ability of the product to anticipate future problems.
12     | System age [160] [185]                            | The period since the first release of the product.
13     | Understandability [114] [106]                     | The capability of the software product to enable the user to understand whether the software is suitable and how it can be used for particular tasks and conditions of use.
14     | Documentation [113]                               | Provision of a programmer's manual that explains the implementation of components.
15     | Error debugging [113]                             | The mean time to debug, find and fix errors.
16     | Maintainability Compliance                        | How well the product adheres to standards and regulations regarding maintainability.
4.2.3 Portability
The portability factor refers to the product's ability to be moved from one environment to another. Table 4.4 lists the identified sub-factors for portability, each with a description.
TABLE 4.4: PORTABILITY SUB-FACTORS

S. No. | Sub-factor                                | Description
-------|-------------------------------------------|-------------------------------------------------------------
1      | Adaptability [1] [135] [120] [24]         | The capability of the software to be modified for different specified environments without applying actions or means other than those provided for this purpose.
2      | Installability [1] [135] [24]             | The capability of the software to be installed in a specified environment.
3      | Coexistence [1] [135]                     | The capability of the software to coexist with other independent software in a common environment sharing common resources.
4      | Replaceability [1] [135] [120] [23] [24]  | The capability of the software to be used in place of other specified software in that software's environment.
5      | Portability Compliance                    | How well the product adheres to standards and regulations regarding portability.
6      | Conformance [24]                          | The degree to which the product meets the requirements defined in the SRS and design specification.
7      | Reusability [24] [57]                     | The ability of the product to be used more than once and in different environments.
8      | Transferability [24]                      | The effort to transfer the product from one hardware platform or operating system to another.
9      | Flexibility [24]                          | The product's ability to be usable in all the conditions for which it was designed.
4.2.4 Reliability
Reliability attribute means the rate to which the product or
component executes its functions under stated conditions in specified
period of time. The identified sub-factors of reliability factor are
tabulated in Table 4.5 with appropriate description of each sub-factor.
TABLE 4.5: RELIABILITY SUB-FACTORS
S. No. Sub-factor Description
1. Maturity [198]
The capability of the software to avoid failure as a result of faults in the software.
2. Fault Tolerance [190]
The capability of the software to maintain a specified level of performance in case of software faults or of infringement of its specified interface.
3. Accuracy [142] [26] [54] [5]
Precision of computations and output.
4. Completeness [142] [26] [54]
Degree to which a full implementation of the required functionalities has been achieved.
5. Recoverability [66] [192]
The capability of the software to re-establish its level of performance and recover the data directly affected in the case of a failure.
6. Survivability [66]
The degree to which the essential services continue to be provided in spite of either accidental or malicious harm.
7. Consistency [142] [26] [54] [5]
The use of uniform design and implementation techniques and notation throughout a project.
8. Simplicity [142] [5]
The ease with which the software can be understood.
9. Error tolerance [142] [5]
The degree to which a product continues to function properly despite the presence of erroneous input.
10. Statistical behaviour
The probability that the software will operate as expected over a specified time interval.
11. Availability [41]
The degree to which the component or system is operational and accessible for use when required.
12. Integrity [26] [54]
The degree to which the component prevents the unauthorized modification of or access to system data.
13. Reliability Compliance
The degree to which the product adheres to the standards and regulations regarding reliability.
4.2.5 Usability
The usability quality factor covers the product's ability to allow
specified users to complete the needed tasks in a defined context of
use. The sub-factors for the usability factor are listed along with a
description of each sub-factor in Table 4.6.
TABLE 4.6: USABILITY SUB-FACTORS
S. No. Sub-factor Description
1. Understandability [112]
The capability of the software product to enable the user to understand whether the software is suitable and how it can be used for particular tasks and conditions of use.
2. Learnability [160]
The capability of the software product to enable the user to learn its applications.
3. Operability [6]
The capability of the software product to enable the user to operate and control it.
4. Attractiveness
The capability of the software product to be liked by the user.
5. Ease of Use [114]
The degree to which the user finds the product easy to operate and control.
6. Communicativeness [6]
The ease with which inputs and outputs can be assimilated.
7. User friendliness [64]
The ease with which the component can be operated.
8. Accessibility
The degree to which the user interface enables users with common or specified disabilities to perform their specified tasks.
9. Customer satisfaction [61]
The degree of the user's contentment in the usage of the component.
10. Documentation [112]
The availability of manuals and other supporting documents that support the user in operation.
11. Training [6]
The ease with which new users can use the system.
12. Usability Compliance
The degree to which the product adheres to the standards and regulations regarding usability in a specified environment.
It may be concluded that the fundamental, crucial and decisive
quality factors are efficiency, maintainability, portability, reliability
and usability. Corresponding to each quality factor, the relevant and
key quality sub-factors were identified, which results in a more
complete requirements specification for evaluating software quality.
The sub-factors were categorized into a logical, understandable
hierarchy which is easy to use while predicting quality. The identified
sub-factors are quantifiable in the sense that questions can be asked
about them: whether training is conducted regarding the use of the
software, whether the software responds on encountering an error,
whether the software is workable in diverse operating environments,
and whether modules are used in the development of the software.
The answers to these questions facilitate the quantification of quality.
The identification of low-level sub-factors benefits software developers
and academicians in assessing the precise design requirements
during the development process.
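As an illustration, the factor/sub-factor hierarchy described above can be held in a simple data structure against which such questions are answered. The sketch below is hypothetical: the dictionary layout is an illustrative choice, and the sub-factor lists are abridged from the tables in this chapter.

```python
# A minimal sketch of the identified quality-factor hierarchy.
# The dict structure is an illustrative choice; factor and
# sub-factor names follow the tables above (lists abridged).
quality_model = {
    "Maintainability": ["Extensibility", "Reusability", "Modularity",
                        "Adaptiveness", "Documentation"],
    "Portability": ["Adaptability", "Installability", "Coexistence",
                    "Replaceability", "Conformance"],
    "Reliability": ["Maturity", "Fault Tolerance", "Accuracy",
                    "Recoverability", "Availability"],
    "Usability": ["Understandability", "Learnability", "Operability",
                  "Attractiveness", "Accessibility"],
}

def sub_factors(factor):
    """Return the identified sub-factors for a quality factor."""
    return quality_model.get(factor, [])

print(sub_factors("Portability")[0])  # Adaptability
```

Such a structure makes the hierarchy navigable by tools that collect the yes/no answers used to quantify each sub-factor.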
4.3 Identifying Software Quality Metrics
The contribution of metrics to the purpose of software quality is
understood and fully recognized by the software engineering
community in general [134] and particularly emphasized by the
software quality community [55]. Process and product metrics help in
managing activities such as scheduling, costing, staffing and
controlling. Software metrics also support engineering activities such
as analyzing, designing, coding, documentation and testing. Since the
early days of computer science many
approaches to quantifying the internal structure of procedural
software systems have emerged [182]. Some of those traditional
metrics can still be used with the object-oriented paradigm, especially
at the method level, such as Lines Of Code (LOC) and Number of
Methods [2]. However, the need to quantify the distinctive features of
the object-oriented paradigm gave birth, in recent years, to new
metric suites. The core object-oriented features include inheritance,
encapsulation, abstraction and polymorphism. Most of the suites are
yet to be experimentally validated for object-oriented features. This
validation step usually consists of correlation studies between
internal (design) and external (quality) attributes [3]. A software
metric is a measurable property or attribute used to measure the
quality of a software object related to a software project of any size.
The object-oriented approach is capable of classifying the problem in
terms of objects and provides paybacks such as efficiency,
maintainability, reliability, portability and usability. Object-oriented
metrics are of little use if they are not mapped to software quality
parameters. There are numerous quality metric suites available to
predict the quality of software, namely the CK, MOOD, Lorenz and
Kidd, and Li suites.
The CK suite comprises six metrics, namely Weighted Methods
per Class (WMC), Response For a Class (RFC), Lack of Cohesion of
Methods (LCOM), Depth of Inheritance Tree (DIT), Number Of Children
(NOC) and Coupling Between Object classes (CBO). The metrics
comprised in the MOOD suite are Method Hiding Factor (MHF),
Attribute Hiding Factor (AHF), Method Inheritance Factor (MIF),
Attribute Inheritance Factor (AIF), Polymorphism Factor (PF) and
Coupling Factor (CF). The Lorenz and Kidd suite has three categories
of metrics, namely Class Size Metrics, Class Inheritance Metrics and
Class Internals Metrics. Each of these groups further contains
metrics which are merely variables for counting. The Class Size
Metrics count the Number of Public Methods (NPM), Number Of
Methods (NOM), Number of Public Variables (NPV), Number of
Variables per class (NV), Number of Class Variables (NCV) and
Number of Class Methods (NCM). The Class Inheritance Metrics count
the Number of Methods Inherited (NMI), Number of Methods
Overridden (NMO) and Number of New Methods (NNA). The Class
Internals Metrics count the Average Parameters per Method (APM)
and the Specialization IndeX (SIX). The Li suite has six metrics,
namely Number of Ancestor Classes (NAC), Number of Descendent
Classes (NDC), Number of Local Methods (NLM), Class Method
Complexity (CMC), Coupling Through Abstract data (CTA) and
Coupling Through Message passing (CTM). Both the Lorenz and Kidd
suite and the Li suite consist of metrics for counting purposes.
Based on the review of the existing software metric suites, a list of
properties is defined to accept or discard a software metric. A metric
lacking any of these properties is regarded as inapplicable for quality
evaluation.
a) Precisely defined metrics
Ambiguity in metrics definition allows many interpretations for
the same metric. A mathematical formula or a clear explanation of the
method of calculation of the metric should exist. All the CK suite
metrics except one (LCOM) have a definite mechanism to compute the
metric; LCOM has ambiguity in its computation formula [70].
b) Empirically validated software quality metrics
Metrics suites without validation are always in doubt concerning
their correctness. The MOOD metrics suite and CK metrics suite have
been validated in several studies [3] [52] [4] and [18] [37] [140]
respectively. Empirical validations for the Lorenz and Kidd metrics
suite and for Li suite are lacking.
c) Interpretation of metrics
Lord Kevin quoted "The degree to which you can express
something in numbers is the degree to which you really understand it"
is self explanatory to state that numbers do not have a meaning of
their own, till the values are interpreted to make decisions. The Li
suite and Lorenz and Kidd suite face an uphill task, number values
exist but are not able to infer. Each metrics of CK and MOOD have
interpretation for object-oriented characteristics e.g. DIT, MIF and AlF
for inheritance; CBO for coupling; AHF and MHF for data abstraction;
Polymorphism is represented by PF.
d) Relationship between metrics and quality factors
The metric values should relate to the quality factors. An explicit
relationship between an increase or decrease in a metric's value and
its implication for software quality factors should be established. The
Lorenz and Kidd suite and the Li suite are not able to relate the metrics with
quality factors. The metrics of CK and MOOD, however, have a strong
relation with them.
e) Metrics computable at any stage
Values for the metrics should be computable at any stage,
especially at the initial stage, before completion of the software.
Midway assessment of software metrics certainly helps in the
improvement of software quality.
The desirable properties of OOD quality metrics laid the basis for
the identification of the various quality metrics as follows:
a) Weighted Methods per Class (WMC) [108] [74]
WMC is the sum of the complexities of the methods of a class.
• WMC = Number of Methods (NOM) when every method's
complexity is considered unity.
• WMC is a predictor of how much time and effort is required to
develop and maintain the class.
• The larger the NOM, the greater the impact on children.
• Classes with a large NOM are likely to be more application
specific, limiting the possibility of reuse and making the effort
expended a one-shot investment.
• The value of WMC should be low for better quality of software.
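For concreteness, the unit-complexity form of WMC (where WMC reduces to NOM) can be sketched in Python by counting the methods a class itself defines. The `Account` class is a hypothetical example, not taken from any cited study.

```python
import inspect

def wmc(cls):
    """Weighted Methods per Class with unit method complexity,
    i.e. WMC reduces to the Number of Methods (NOM)."""
    # Count only methods defined directly in this class, not inherited ones.
    return sum(1 for name, member in cls.__dict__.items()
               if inspect.isfunction(member))

# Hypothetical example class.
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def balance(self): ...

print(wmc(Account))  # three methods of unit complexity -> WMC = 3
```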
b) Depth of Inheritance Tree (DIT) [162] [37]
The maximum length from the node to the root of the tree.
• A higher value of DIT results in a greater number of methods
being inherited, making it more complex to predict the class's
behaviour.
• A higher value of DIT increases the potential reuse of inherited
methods.
• Small values of DIT in most of the system's classes may be an
indicator that designers are forsaking reusability for simplicity
of understanding.
• The value of DIT should fall in a range.
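A minimal sketch of computing DIT over a class hierarchy, here walking Python's base-class links and treating `object` as the root at depth 0 (an illustrative convention of this sketch, not prescribed by the metric's definition):

```python
def dit(cls):
    """Depth of Inheritance Tree: longest path from the class to the
    root of the hierarchy; object is treated as the root at depth 0."""
    if cls is object:
        return 0
    # With multiple inheritance, DIT is the maximum over all base paths.
    return 1 + max(dit(base) for base in cls.__bases__)

# Hypothetical three-level hierarchy.
class A: ...
class B(A): ...
class C(B): ...

print(dit(C))  # A -> B -> C below object gives DIT = 3
```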
c) Number of Children (NOC) [37]
• Number of immediate subclasses subordinated to a class in the
class hierarchy.
• The greater the NOC:
• The greater the reuse.
• The greater the probability of improper abstraction of
the parent class.
• The greater the requirement for testing of the methods in
that class.
• Small values of NOC may be an indicator of a lack of
communication between different class designers.
• The value of NOC should fall in a range.
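NOC can be sketched directly with Python's runtime introspection; the shape classes below are hypothetical examples.

```python
# Hypothetical hierarchy for illustration.
class Shape: ...
class Circle(Shape): ...
class Square(Shape): ...
class Ellipse(Circle): ...   # grandchild, not an immediate child of Shape

def noc(cls):
    """Number of Children: count of immediate subclasses only."""
    return len(cls.__subclasses__())

print(noc(Shape))  # Circle and Square are immediate children -> NOC = 2
```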
d) Coupling Between Object classes (CBO) [15] [37]
It is a count of the number of other classes to which a class is
coupled.
• Small values of CBO:
• Improve modularity and promote encapsulation.
• Indicate independence of the class, making its reuse
easier.
• Make the class easier to maintain and to test.
• The value of CBO should be low.
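CBO is normally extracted by static analysis. As a rough, hypothetical sketch (the class names and the name-reference heuristic are illustrative choices of this example, not how any particular tool works), one can count the distinct other known classes referenced inside a class body:

```python
import ast

def cbo(source, class_name, known_classes):
    """Crude static approximation of Coupling Between Object classes:
    count the distinct *other* known classes referenced anywhere
    inside the given class definition."""
    coupled = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Name)
                        and inner.id in known_classes
                        and inner.id != class_name):
                    coupled.add(inner.id)
    return len(coupled)

# Hypothetical source under analysis.
code = """
class Logger: pass
class Db: pass
class Service:
    def run(self):
        self.log = Logger()
        self.db = Db()
"""
print(cbo(code, "Service", {"Logger", "Db", "Service"}))  # couples to 2 classes
```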
e) Lack of Cohesion in Methods (LCOM) [37]
It measures the dissimilarity of methods in a class via instance
variables.
• High values of LCOM:
• Increase complexity.
• Do not promote encapsulation and imply that the class
should probably be split into two or more subclasses.
• Help to identify low-quality design.
• The value of LCOM should be low.
Harrison et al. point out that calculation of Lack of Cohesion in
Methods metric requires careful consideration of the use of variables
in a class and it is possible only for components with a small number
of classes [73].
Also, Graham mentions that the LCOM metric has weaknesses in
the formula and different interpretations of the result lead to vague
outputs [70].
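Given the definitional ambiguity noted above, one common reading (the Chidamber and Kemerer P minus Q formulation, adopted here as an assumption) can be sketched as follows; the method-to-variable mapping is a hypothetical example:

```python
from itertools import combinations

def lcom(method_vars):
    """CK-style LCOM: P = method pairs sharing no instance variables,
    Q = pairs sharing at least one; LCOM = P - Q when positive, else 0."""
    p = q = 0
    for a, b in combinations(method_vars.values(), 2):
        if a & b:       # the two methods share an instance variable
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Instance variables used by each method of a hypothetical class.
usage = {
    "deposit":   {"balance"},
    "withdraw":  {"balance"},
    "set_owner": {"owner"},
}
print(lcom(usage))  # P = 2, Q = 1 -> LCOM = 1
```

Other published variants normalize or count connected components instead, which is precisely why the interpretation of LCOM values is disputed.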
f) Response For a Class (RFC) [74] [15]
• It is the number of methods of the class plus the number of
methods called by any of those methods.
• If a large number of methods is invoked from a class (RFC is
high):
• Testing and maintenance of the class become more
complex.
• The value of RFC should be low.
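Treating the response set as a set union gives a minimal sketch of RFC; the method names below are hypothetical.

```python
def rfc(own_methods, invoked_methods):
    """Response For a Class: size of the response set, i.e. the class's
    own methods plus the distinct methods they invoke."""
    return len(set(own_methods) | set(invoked_methods))

# Hypothetical class with three methods that together call four others.
own = {"open", "read", "close"}
invoked = {"os.open", "os.read", "os.close", "log.warn"}
print(rfc(own, invoked))  # 3 own + 4 invoked distinct methods -> RFC = 7
```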
g) Method Hiding Factor (MHF) [76]
MHF = 1 - MethodsVisible
where MethodsVisible = sum(MV) / ((C - 1) × number of methods)
MV = number of other classes where the method is visible
C = number of classes
If all methods are private then MHF=100%
A low MHF indicates insufficiently abstracted implementation. A
large proportion of methods are unprotected and the probability of
errors is high.
A high MHF indicates very little functionality. It may also indicate
that the design includes a high proportion of specialised methods that
are not available for reuse.
h) Attribute Hiding Factor (AHF) [76]
These metrics measure how variables and methods are
encapsulated.
Encapsulation: the degree to which methods/variables are hidden
from or accessible to other classes. Visibility is counted with respect
to other classes. Hiding decreases in the following order: Private,
Protected, Friend, Protected Friend and Public.
AHF = 1 - AttributesVisible
where AttributesVisible = sum(AV) / ((C - 1) × number of attributes)
AV = number of other classes where the attribute is visible
C = number of classes
If all attributes are private then AHF = 100%.
• Low MHF/AHF reflects that methods/attributes are unprotected
and probability of error is high.
• High MHF/AHF reflects no reuse, specialized methods and little
functionality.
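Since MHF and AHF share the same form, a single sketch of the formula serves both; the visibility counts below describe a hypothetical system, not measured data.

```python
def hiding_factor(visibilities, num_classes):
    """MHF (or AHF) per the formula above: 1 - sum(MV) / ((C-1) * n),
    where each entry of `visibilities` is the number of other classes
    that can see one method (or attribute), and C is the class count."""
    n = len(visibilities)
    visible = sum(visibilities) / ((num_classes - 1) * n)
    return 1 - visible

# Hypothetical system of 5 classes; one class has 4 methods:
# three private (visible to 0 other classes), one fully public
# (visible to all 4 other classes).
print(round(hiding_factor([0, 0, 0, 4], 5), 2))  # 1 - 4/16 = 0.75
print(hiding_factor([0, 0, 0, 0], 5))            # all private -> 1.0 (100%)
```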
i) Method Inheritance Factor (MIF) [76]
MIF = inherited methods/total methods available in classes
High MIF denotes lots of inheritance whereas low MIF (0%)
denotes independent class.
j) Attribute Inheritance Factor (AIF) [76]
AIF = inherited attributes/total attributes available in classes
High AIF denotes lots of inheritance whereas low AIF (0%) denotes
independent attribute.
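As an illustrative sketch, MIF for a single class can be approximated with Python introspection. Excluding dunder names from the count is an assumption of this sketch, and the `Base`/`Derived` classes are hypothetical; the analogous computation over attributes gives AIF.

```python
def mif(cls):
    """Method Inheritance Factor for one class: inherited methods
    divided by all methods available in the class (dunders excluded)."""
    defined = {n for n, m in cls.__dict__.items() if callable(m)}
    available = {n for n in dir(cls)
                 if callable(getattr(cls, n)) and not n.startswith("__")}
    inherited = available - defined
    return len(inherited) / len(available) if available else 0.0

class Base:
    def load(self): ...
    def save(self): ...

class Derived(Base):
    def save(self): ...       # overridden locally, so not counted as inherited
    def export(self): ...     # newly defined method

# Derived offers load, save and export; only load is inherited.
print(round(mif(Derived), 2))  # 1/3 -> 0.33
```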
k) Polymorphism Factor (PF or POF)
PF = overrides / sum over each class (new methods × descendants)
Polymorphism: the ability of a variable, function or object to take
more than one form. In the MOOD metrics, a subclass (child) can
override the methods of its parent. The more it overrides, the higher
the PF.
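The PF formula can be sketched as a ratio of actual overrides to the maximum possible; the hierarchy figures below are hypothetical.

```python
def pf(overriding_methods, classes):
    """Polymorphism Factor: actual overrides divided by the maximum
    possible, i.e. the sum over classes of (new methods x descendants)."""
    possible = sum(new * descendants for new, descendants in classes)
    return overriding_methods / possible if possible else 0.0

# Hypothetical hierarchy: one class introduces 4 new methods and has
# 3 descendants, another introduces 2 new methods with 1 descendant;
# 7 overriding methods are observed in total.
print(round(pf(7, [(4, 3), (2, 1)]), 2))  # 7 / 14 = 0.5
```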
l) Coupling Factor (CF or COF)
CF = actual couplings / maximum possible couplings
Coupling increases complexity. Coupling reduces encapsulation,
reusability, maintainability and understandability.
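Taking the maximum possible couplings as the number of directed client-supplier pairs, C × (C − 1), is one common convention and an assumption of this sketch; the system figures are hypothetical.

```python
def cf(actual_couplings, num_classes):
    """Coupling Factor: actual couplings over the maximum possible,
    taken here as C * (C - 1) directed client-supplier pairs."""
    maximum = num_classes * (num_classes - 1)
    return actual_couplings / maximum if maximum else 0.0

# Hypothetical system: 6 classes with 9 observed couplings.
print(round(cf(9, 6), 2))  # 9 / 30 = 0.3
```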
m) Number of ancestor classes (NAC)
The Number of Ancestor classes (NAC) metric was proposed, as an
alternative to the DIT metric, to measure this attribute of a class. Li
defines the NAC as the total number of ancestor classes from which a
class inherits in the class inheritance hierarchy. The theoretical basis
and viewpoints are the same as for the DIT metric. The unit for the
NAC metric is 'class'; Li justified this because the attribute that the
NAC metric captures is the number of other class environments from
which the class inherits. This unit is defined with reference to a
standard which has the class inheritance relation of object-oriented
programming.
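NAC can be sketched with Python's method resolution order; excluding `object` from the ancestor count is a convention of this sketch, and the classes are hypothetical.

```python
def nac(cls):
    """Number of Ancestor Classes: every class the given class inherits
    from, directly or indirectly (object excluded here for clarity)."""
    ancestors = set(cls.__mro__) - {cls, object}
    return len(ancestors)

# Hypothetical hierarchy.
class A: ...
class B(A): ...
class C(B): ...

print(nac(C))  # inherits from B and A -> NAC = 2
```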
n) Number of descendent classes (NDC)
The Number of Descendent Classes (NDC) metric is proposed as
an alternative to the NOC metric. It is defined as the total number of
descendent classes (subclasses) of a class. The theoretical basis and
viewpoints remain the same as for NOC. Li reported that the attribute
of a class that the NOC metric captures is the number of classes that
may potentially be influenced by the class because of inheritance
relations. Li claimed that the NDC metric captures this class attribute
better than NOC.
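Where NOC counts only immediate children, NDC recurses over the whole subtree; a minimal sketch over hypothetical classes:

```python
def ndc(cls):
    """Number of Descendent Classes: all subclasses, direct and indirect."""
    count = 0
    for child in cls.__subclasses__():
        count += 1 + ndc(child)   # the child plus its own descendants
    return count

# Hypothetical hierarchy.
class Shape: ...
class Circle(Shape): ...
class Square(Shape): ...
class Ellipse(Circle): ...

print(ndc(Shape))  # Circle, Square and Ellipse -> NDC = 3
```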
o) Number of local methods (NLM)
This is one of the metric proposed in Li suite in order to measure
the attributes of a class that WMC metric intends to capture. The
Number of Local Methods metric (NLM) is defined as the number of
the local methods defined in a class which are accessible outside the
class. The theoretical basis and viewpoints are different from the WMC
metric. The theoretical basis describes the attribute of a class that the
NLM metric captures is the local interface of a class. This attribute is
important for the usage of the class in an object-oriented design
because it indicates the size of a class's local interface through which
other classes can use the class. Li stated three viewpoints for the NLM
metric as follows:
1. The NLM metric is directly linked to a programmer's comprehension
effort when a class is reused in an OO design. The more local
methods a class has, the more effort is required to comprehend the
class's behaviour.
2. The larger the local interface of a class, the more effort is needed to
design, implement, test and maintain the class.
3. The larger the local interface of a class, the more influence the class
has on its descendent classes.
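A minimal sketch of NLM in Python, using the leading-underscore naming convention as a stand-in for visibility (an assumption of this sketch, since Python has no enforced access modifiers); the `Parser` class is hypothetical.

```python
import inspect

def nlm(cls):
    """Number of Local Methods: methods defined in the class itself
    that are accessible from outside (here: no leading underscore)."""
    return sum(1 for name, m in cls.__dict__.items()
               if inspect.isfunction(m) and not name.startswith("_"))

# Hypothetical class.
class Parser:
    def parse(self): ...
    def errors(self): ...
    def _tokenize(self): ...   # internal helper, not part of the local interface

print(nlm(Parser))  # parse and errors -> NLM = 2
```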
p) Class Method Complexity (CMC)
The Class Method Complexity (CMC) metric is defined as the
summation of the internal structural complexity of all local methods,
regardless of whether they are visible outside the class or not. This
definition is essentially the same as the first definition of the WMC
metric. However, the CMC metric's theoretical basis and viewpoints
are significantly different from WMC metric. The NLM and CMC
metrics are fundamentally different because they capture two
independent attributes of a class. However, there is some commonality
in the viewpoints of the two metrics - they affect the effort required to
design, implement, test and maintain a class.
q) Coupling Through Abstract data type (CTA)
The Coupling Through Abstract data type (CTA) is defined as the
total number of classes that are used as abstract data types in the
data-attribute declaration of a class. Two classes are coupled when
one class uses the other class as an abstract data type.
Viewpoints according to Li:
1. A software engineer needs to spend more time in understanding the
interfaces of the used classes in order to create the design for a
high CTA class than a low one.
2. For a test engineer, more effort is needed to design test cases and
perform testing for high CTA class than a low one because the
behaviors of the used classes also need to be tested.
3. For a maintenance engineer, it takes more time to understand a
high CTA class than a low one because a high CTA class uses more
classes whose behaviours may affect the class.
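Using Python class-level annotations as a stand-in for attribute type declarations (an assumption of this sketch), CTA can be approximated as follows; the `Car` example is hypothetical.

```python
def cta(cls, known_classes):
    """Coupling Through Abstract data type: distinct known classes used
    as types in the class's attribute declarations, approximated here
    by the class-level annotations."""
    types_used = set(getattr(cls, "__annotations__", {}).values())
    return len(types_used & known_classes)

# Hypothetical classes.
class Engine: ...
class Wheel: ...

class Car:
    engine: Engine
    front_left: Wheel
    front_right: Wheel

print(cta(Car, {Engine, Wheel}))  # uses Engine and Wheel as types -> CTA = 2
```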
r) Coupling Through Message passing (CTM)
The Coupling Through Message passing (CTM) metric is defined as the
number of different messages sent out from a class to other classes
excluding the messages sent to the objects created as local objects in
the local methods of the class. Two classes can be coupled because
one class sends a message to an object of another class, without
involving the two classes through inheritance or abstract data type.
Viewpoints according to Li:
1. A class designer needs to spend more effort in understanding the
services provided by other classes in a high CTM class than in a low
CTM class because the outgoing messages are directly related to the
services that other classes provide.
2. A test engineer needs to spend more effort and design more test
cases for high CTM class than for a low CTM class because a high
CTM value means more other classes' methods are involved in the
logical paths of the class.
3. For a maintenance engineer, the higher the CTM metric value, the
more specific methods in other classes the engineer needs to
understand in order to diagnose and fix problems, or to perform
other types of maintenance.
All of these metric suites are confined to the family of the object-
oriented approach and have inter-relations with one another.
The software metrics are identified on the basis of the desirable
properties of the metrics [18] [3] [63] [12] [173]. The metrics were
identified from the suites of CK and MOOD. The results of assessing
the software metrics against the software quality desirable properties
are summarized in Table 4.7.
Table 4.7 is structured as property versus software quality
metric. The first column comprises the set of software metrics. The
desirable properties are mentioned in the top row. A "Yes" denotes
that the metric satisfies the desirable property; a "No" denotes that it
does not. On the basis of the desirable properties it is clearly depicted
that the Lorenz and Kidd metrics suite is unfit for the evaluation of
software quality.
TABLE 4.7: ASSESSMENT OF METRICS AGAINST THE DESIRABLE PROPERTIES

Metrics                    Precisely   Empirical    Interpretation   Relationship between   Computable
                           Defined     Validation   of Metrics       Metrics and Factors    at any Stage
MHF                        Yes         Yes          Yes              Yes                    Yes
AHF                        Yes         Yes          Yes              Yes                    Yes
MIF                        Yes         Yes          Yes              Yes                    Yes
AIF                        Yes         Yes          Yes              Yes                    Yes
PF                         Yes         Yes          Yes              Yes                    Yes
CF                         Yes         Yes          No               No                     No
WMC                        Yes         Yes          Yes              Yes                    Yes
RFC                        Yes         Yes          Yes              Yes                    Yes
LCOM                       No          No           Yes              No                     No
DIT                        Yes         Yes          Yes              Yes                    Yes
NOC                        Yes         Yes          Yes              Yes                    Yes
CBO                        Yes         Yes          Yes              Yes                    Yes
Class Size Metrics         Yes         No           No               No                     Yes
Class Inheritance Metrics  Yes         No           No               No                     Yes
Class Internals Metrics    Yes         No           No               No                     Yes
NAC                        Yes         No           No               No                     Yes
NDC                        Yes         No           No               No                     Yes
NLM                        Yes         No           No               No                     Yes
CMC                        Yes         No           No               No                     Yes
CTA                        Yes         No           No               No                     Yes
CTM                        Yes         No           No               No                     Yes
Similarly, the metrics associated with the Li suite fail to evaluate
quality. Both the Lorenz and Kidd metrics suite and the Li metrics
suite have not been validated in existing studies and are statistical
measures for software in terms of counting the number of methods
and variables
under various categories. Chidamber et al. commented that the
metrics used in the Li and Henry suite are those of the CK suite, and
that the CK metrics explained additional variance in maintenance
effort beyond that explained by traditional size metrics, implying that
the CK metrics suite is better and best suited [37]. Similarly, Varvana
observed that the Li suite had weaknesses of the nature that 'redundant or needless
variables' were piled up in the suite. Also, the suite does not cater to
the 'full model' of software development [170]. Mark et al. have
recorded that the Li suite 'does not define evaluation functions' and
further that the Li suite is a 'subsequent modification' of the CK suite
[91]. Also, as presented, LCOM and CF failed the assessment against
the desirable properties and are unfit for the evaluation of quality.
Harrison et al. [73] criticised the Lorenz and Kidd metric suite,
stating that the suggested metrics are not useful because 'only a
limited insight into the architecture of the system under investigation
is given'. Neal [124] pointed out that the Lorenz and Kidd metric suite
only used
heuristics to validate the metrics. Harrison et al. [75] are of the
opinion that the Chidamber and Kemerer's metrics suite seems useful
to designers and developers of systems, giving them an evaluation of
the system at the class level, while the MOOD metrics could be of use
to project managers as the metrics operate at a system level. Other
object-oriented metric suites are derived from the CK suite, namely
the Lorenz and Kidd suite, the Harrison, Counsell and Nithi suite, and
the Whitmire suite.
Marco et al. [145] highlighted that the CK metrics suite is the
most referenced metrics suite. Numerous studies have stated that the
CK and MOOD metrics suites comprise the most comprehensive sets
of metrics [144] [150]. The metrics proposed in the CK suite, namely
WMC, RFC, DIT, NOC and CBO, are acceptable, class based, bounded
in scope and validate quality. Similarly, the MOOD suite metrics are
acceptable, well
defined, project level based, computable and provide evidence of
evaluating quality. The acceptable MOOD metrics are MHF, AHF, MIF,
AIF and PF. The accepted CK and MOOD metrics combine to form a
group of ten metrics capable of assessing object-oriented
characteristics.
The various identified software quality metrics are presented in
Table 4.8:
TABLE 4.8: IDENTIFIED SOFTWARE QUALITY METRICS
S. No.  Quality Metric
1.  Depth of Inheritance Tree (DIT)
2.  Weighted Methods per Class (WMC)
3.  Number of Children (NOC)
4.  Coupling Between Object Classes (CBO)
5.  Response For a Class (RFC)
6.  Method Hiding Factor (MHF)
7.  Attribute Hiding Factor (AHF)
8.  Method Inheritance Factor (MIF)
9.  Attribute Inheritance Factor (AIF)
10. Polymorphism Factor (PF)
The first five quality metrics mentioned in Table 4.8 are
associated with the CK suite and the remaining metrics pertain to the
MOOD metrics suite.
It may be concluded from the analysis that the identified ten
software quality metrics pertain to object-oriented characteristics, are
recognised in studies, are related to the identified quality factors and
are measurable.