Software Testing and Quality Attributes

Software Testing Module (0721430)

Dr. Samer Hanna

Quality Attributes

• Quality attributes are the key factors in the success of any software system.

• Quality attributes are also important to the users of a software system for evaluating how good the system is.

• However, software quality is a complex and subjective mixture of several attributes or factors, and there is no universal definition or unique metric to quantify software quality (Raghavan, 2002).

• Software quality is measured by analyzing the various attributes that are significant to a certain domain or application (Raghavan, 2002).

Main quality models

The quality attributes literature includes the following main quality models:

• Boehm (Boehm, 1976),

• McCall (McCall, 1977),

• Adrion (Adrion et al. 1982), and

• ISO 9126:2001 (ISO 9126-1, 2001).

Quality models shared understanding

• When analyzing the main quality models, it is noticeable that researchers do not agree on a fixed, general set of quality attributes, because there is no shared understanding of these attributes (or characteristics).

• It is also noticeable that the same sub-attribute can be assigned to different attributes in different models. For example, accuracy is related to the functionality attribute in ISO 9126, while it is related to the reliability attribute in Boehm’s model.

Trustworthiness

• Trustworthiness is defined as: “Assurance that the system will perform as expected”. (Avizienis et al., 2004).

• Some quality attributes have sub-attributes which are considered as requirements for the main attribute (see Fig. 1);

• Trustworthiness requires many quality attributes, such as security, reliability, safety, survivability, interoperability, availability, fault tolerance, and robustness (Zhang & Zhang, 2005).

Trustworthiness

• However, fault-tolerance and robustness are sub-attributes of reliability (ISO 9126-1, 2001) (Adrion, 1982); Fig. 1 describes the trustworthiness quality model according to these relations between the quality attributes.

• To assess the trustworthiness of any software system, researchers and practitioners must find methods to assess the trustworthiness sub-attributes such as reliability, security, and so on.

Fig. 1 Trustworthiness Quality Model: trustworthiness decomposed into security, reliability, safety, survivability, interoperability, availability, and other attributes, with fault tolerance and robustness shown as sub-attributes of reliability.

Dependability

• Avizienis (Avizienis et al., 2004) stated that dependability and trustworthiness have the same goals and they both face the same threats (faults, errors, and failures).

• Dependability is defined as “Ability to deliver a Service that can justifiably be trusted” or “ability of a system to avoid Service failures that are more frequent or more severe than is acceptable” (Avizienis, 2004).

• Dependability encompasses the following sub-attributes: availability, reliability, safety, integrity, maintainability. (Avizienis, 2004).

Reliability

• Reliability is defined as:

“Ability to tolerate various severe conditions and perform intended function” (Raghavan, 2002).

• Another definition of reliability is:

“The probability that software will not cause the failure of a system for a specified time under specified conditions”

Reliability

• The first definition implies that reliability is related to fault tolerance and robustness (tolerating severe conditions); it also relates reliability to correctness (performing the intended function).

• The second definition implies that reliability is related to robustness and fault tolerance.

• To assess how reliable a software system is, all of these requirements (or sub-attributes) of reliability must be assessed.
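As an illustrative sketch only: the second definition treats reliability as a probability of failure-free operation over a specified time. Assuming a constant failure rate (the exponential reliability model, which is a common simplification and not part of the cited definitions), that probability can be computed as below; the failure-rate value is invented for illustration.

    import math

    def reliability(failure_rate_per_hour: float, hours: float) -> float:
        """Probability of failure-free operation over `hours`,
        assuming a constant failure rate: R(t) = exp(-lambda * t)."""
        return math.exp(-failure_rate_per_hour * hours)

    # Illustrative numbers only: on average one failure per 1000 operating hours.
    lam = 1 / 1000
    print(f"R(24 h)  = {reliability(lam, 24):.3f}")   # ~0.976
    print(f"R(720 h) = {reliability(lam, 720):.3f}")  # ~0.487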

Testing and Related Terms' Definitions

• The testing literature is mired in confusing and inconsistent terminology because it has evolved over decades through the work of different writers (Jorgensen, 2002). This section will introduce a definition of testing and the related terms.

The testing literature has the following main definitions of testing:

• Hetzel (Hetzel, 1973): “The process of establishing confidence that a program or system does what it is supposed to”.

Testing Definition

• Myers (Myers, 1979): “The process of executing a program or system with the intent of finding errors”.

• Beizer (Beizer, 1990): “A process that is part of quality assurance and aims to show that a program has bugs (faults)”.

• IEEE (IEEE, 1990): “An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component”.

Testing Definition

• Voas (Voas and McGraw, 1998)

“the process of determining whether software meets its defined requirements”.

• Harrold (Harrold, 2000)

“One of the oldest forms of verification that is performed to support quality assurance”.

Testing Definition

• Sommerville (Sommerville, 2004) defines testing as:

“Software testing involves running an implementation of the software with test data. You examine the outputs of the software and its operational behavior to check that it is performing as required. Testing is a dynamic technique of verification and validation”

Testing related terms

• The previous definitions introduce testing related terms such as quality assurance, fault, error, verification and validation.

• The goal of quality assurance is to improve software quality and to determine the degree to which the actual behavior of the software is consistent with the intended behavior or quality of this software.

• Testing-related terms are defined to enable a better understanding of the testing definitions:

Faults, Errors, and Failures

• As mentioned before, faults, errors, and failures are considered threats to dependability; they are defined as follows:

• Fault is defined as: “A defect in the system that may lead to an error” (Osterweil, 1996); other names for a fault are bug and defect.

• For a given quality attribute there exist faults that affect that attribute. Examples of faults that may affect the robustness quality attribute include wrong input accepted and correct input rejected (IEEE, 1995). Some faults can affect more than one quality attribute; for example, the wrong-input-accepted fault affects robustness, fault tolerance, and security.
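A minimal, hypothetical sketch of these two fault kinds (the function and its contract are invented for illustration, not taken from the cited sources):

    def parse_age(text: str) -> int:
        """Hypothetical validator containing both robustness faults named above."""
        if not text.isdigit():        # fault: correct input rejected --
            raise ValueError(text)    #   " 42 " with stray whitespace is refused
        return int(text)              # fault: wrong input accepted --
                                      #   "999" is returned as a plausible age

    def parse_age_fixed(text: str) -> int:
        """Same contract with both faults removed."""
        value = int(text.strip())     # tolerate surrounding whitespace
        if not 0 <= value <= 130:     # reject out-of-range values
            raise ValueError(f"age out of range: {value}")
        return value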

Faults, Errors, and Failures

• Error is defined as: “The part of the total state of the system that may lead to a failure” (Avizienis et al., 2004).

• Failure is defined as: “The deviation of the execution of a program from its intended behavior” (Osterweil, 1996). Another definition of failure is: “An event that occurs when the delivered service deviates from correct service” (Avizienis et al., 2004).
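A minimal sketch of how these three terms relate, using an invented function (illustrative only, not from the cited sources): the fault is the defect in the code, the error is the wrong internal state that arises when the faulty code runs, and the failure is the wrong result that is finally delivered.

    def average(values):
        """Hypothetical function whose fault is the wrong initial value of `total`."""
        total = 1                     # FAULT: a defect in the code (bug)
        for v in values:
            total += v                # ERROR: `total` becomes part of a wrong
                                      #        internal state during execution
        return total / len(values)   # FAILURE: the delivered result deviates
                                      #          from the intended behavior

    print(average([2, 4, 6]))         # prints 4.333..., intended result is 4.0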

Verification and Validation

• Verification and validation (V & V) is the process of checking that a program meets its specification and delivers the functionality expected by the people paying for the software (Sommerville, 2004). Verification and validation are defined as follows:

V & V

• Verification is defined as: “Checking that the software conforms to its specification and meets its specified functional and non-functional requirements” (Sommerville, 2004).

• Validation is defined as: “Ensure that the software system meets the customer’s expectations” (Sommerville, 2004).

• Another definition of validation is: “Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements” (Adrion et al., 1982).

Roles of software testing

• It is noticed that different researchers view software testing differently; however, the following roles or goals of software testing can be inferred from the definitions (a minimal unit-test sketch follows this list):

– Testing involves running or executing the system under test with test data.

– Testing is performed to support quality assurance by assessing the quality attributes.

Roles of software testing

– Testing is performed to find faults before they cause an error and consequently a failure.

– Testing is a form of verification.

– Testing is a form of validation.
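The sketch below illustrates the first role: executing a (hypothetical) unit under test with test data and comparing the observed output against the intended behavior. It uses Python's standard unittest module; the discount function is invented for illustration.

    import unittest

    def discount(price: float, percent: float) -> float:
        """Hypothetical system under test."""
        return price - price * percent / 100

    class DiscountTest(unittest.TestCase):
        def test_ten_percent(self):
            # Execute the unit under test with test data and compare the
            # observed output against the expected (intended) result.
            self.assertAlmostEqual(discount(200.0, 10.0), 180.0)

        def test_zero_percent(self):
            self.assertAlmostEqual(discount(200.0, 0.0), 200.0)

    if __name__ == "__main__":
        unittest.main()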

Relation of roles of software testing definitions

• Table 1 analyzes the roles in each definition of software testing in order to reach a definition that contains all the testing roles. The table indicates whether we can infer a role (column) based on a particular definition (row).

The symbols shown in the table are:

• The full circle (●) indicates that the definition explicitly states the role.

• The symbol (≈) indicates that the definition does not explicitly express that specific role, but the context of the definition suggests it.

• The empty circle (○) indicates that the role is not included in a specific definition.

Relation of roles of software testing definitions

General Definition

• It is noticed from Table 1 that Sommerville’s definition (Sommerville, 2004) contains more of the software testing roles than the other definitions.

• After analyzing all the definitions, this module will use the following definition of software testing that includes all the roles mentioned in Table 1:

• Software testing is a quality assurance process that is part of the verification and validation processes, and involves executing the system under test with test data for the purpose of detecting faults and assessing the quality attributes of that system or software component.

References

• Adrion, W., Branstad, M. & Cherniavsky, J. (1982) Validation, Verification, and Testing of Computer Software. ACM Computing Surveys, (Vol. 14, No. 2), pp. 159-192.

• Avizienis, A., Laprie, J.-C., Randell, B. & Landwehr, C. (2004) Basic Concepts and Taxonomy of Dependable and Secure Computing. IEEE Transactions on Dependable and Secure Computing, (Vol. 1, No. 1), pp. 11-33.

• Boehm, B. W., Brown, J. R. & Lipow, M. (1976) Quantitative Evaluation of Software Quality. Proceedings of the 2nd International Conference on Software Engineering (ICSE), California, USA, 1976, pp. 592-605.

• ISO 9126-1: 2001 (2001) Software Engineering – Product quality – Part 1: Quality Model, International Organization for Standardization, Geneva, Switzerland.

References

• McCall, J. A., Richards, P. K. & Walters, G. F. (1977) Factors in Software Quality. Three volumes, US Department of Commerce, National Technical Information Service (NTIS).

• Neumann, P. (2004). Principled assuredly trustworthy composable architecture. Emerging draft of the final report for DARPA’s composable high assurance trustworthy systems (CHATS) program. [Retrieved from http://www.csl.sri.com/users/neumann/chats4.pdf].

• Raghavan, G. (2002). Improving Software Quality in Product Families through Systematic Reengineering. Proceedings of the Software Quality ESSQ Conference, Berlin, Germany, Springer-Verlag, pp. 90-99.

• Zhang, J. & Zhang, L.-J. (2005) Web Services Quality Testing. International Journal for Web Services Research, (Vol. 2, No. 2), pp. 1-4.