SE Module Summary (FULL) - OUM Students from Maldives
www.oumstudents.tk Page 1
1) Software Engineering a) Software: Programs, documentation and configuration data which is needed to make programs operate correctly.
i) Two types of Software Products:
(1) Generic Products: Eg: word processors, databases, drawing packages and project management tools etc.
(2) Customised Products: Eg: online registration system, banking system, real-time systems etc.
b) Software Engineering: An engineering discipline which is concerned with all aspects of software production, from the early stages of system specification through to maintaining the system after it has gone into use.
i) Engineering Discipline: Software engineers apply theories, methods and tools to solve problems concerning the software development process.
ii) All Aspects of Software Production: It also includes project management activities and using tools, methods
and theories to support software production.
c) Software Process: A set of activities and results, which produce a software product.
i) Four fundamental Software Process Activities:
(1) Specification: Defines the requirements of the software and also the constraints involved.
(2) Development: Involves the production of the software.
(3) Validation: The software must be validated to ensure that it does what the customer wants.
(4) Evolution: The software must evolve to meet any requirement changes by the users.
d) Software Types: Eg: real-time software, web-based software, game software, multimedia software, embedded software and many more.
e) Software Process Model: A simplified description of a software process. It consists of activities that are part of the software process, software products and
the roles of people who are involved in software engineering.
f) Software Engineering Method: A structured approach to software development, intended to facilitate the production of high-quality software at reasonable cost.
i) Components in Software Engineering Methods
(1) System Model Description: Descriptions of the system models which should be developed, and the notation used to define these models. Eg: Object models, data-flow models, state machine models, etc.
(2) Rules: Constraints that always apply to system models. Eg: Every entity must have a unique name.
(3) Recommendations: Heuristics which characterise good design practice in this method. Eg: No object should have more than seven sub-objects associated with it.
(4) Process Guidance: Descriptions of the activities which may be followed to develop the system models and the organisation of these activities. Eg: Object attributes should be documented before defining the operations associated with an object.
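Rules and recommendations like these can be checked mechanically. The sketch below (a hypothetical model representation and function names, not any specific CASE tool) applies the two example rules above:

```python
# Sketch of the kind of rule checking a CASE tool might perform on a
# system model. The list-of-dicts model format is invented for the example.
from collections import Counter

def check_unique_names(entities):
    """Rule: every entity must have a unique name."""
    counts = Counter(e["name"] for e in entities)
    return [name for name, n in counts.items() if n > 1]

def check_max_subobjects(entities, limit=7):
    """Recommendation: no object should have more than `limit` sub-objects."""
    return [e["name"] for e in entities if len(e.get("children", [])) > limit]

model = [
    {"name": "Customer", "children": []},
    {"name": "Order", "children": ["line1", "line2"]},
    {"name": "Customer", "children": []},   # duplicate name -> rule violation
]
print(check_unique_names(model))    # ['Customer']
print(check_max_subobjects(model))  # []
```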
g) Essential Characteristics of a Well-designed Software System
i) Maintainability: Software should be written in such a way that it may evolve to meet the changing needs of customers, since software change is an inevitable consequence of a changing business environment.
ii) Dependability: Software dependability has a range of characteristics, including reliability, security and safety.
iii) Efficiency: Software should not make wasteful use of system resources such as memory and processor cycles. Efficiency therefore includes responsiveness, processing time, memory utilisation, etc.
iv) Usability: Software must be usable, without undue effort, by the type of user for whom it is designed. This means that it should have an appropriate user interface and adequate documentation.
2) Software Process
a) Software Process Model (Software Development Models): A structured set of activities required to develop a software system.
i) The Waterfall Model: This model separates the process into distinct phases of specification and development.
(1) Phases are: (a) Requirement Analysis and Definition (b) System and Software Design (c) Implementation and Unit Testing (d) Integration and System Testing (e) Operation and Maintenance
(2) Three Problems with the Waterfall Model: (a) Inflexible partitioning of the project into distinct stages; (b) It is difficult to respond to changing customer requirements; and (c) It is only appropriate when the requirements are well understood.
ii) Evolutionary Development: In this model, specification and development are interleaved. For example, an initial system is rapidly developed from abstract specifications and then refined with customer input to produce a system which satisfies the customer's needs.
(1) Three Problems with Evolutionary Development: (a) The Process is Not Visible; (b) Systems are Often Poorly Structured; (c) Special Tools and Techniques May Be Required.
(2) This approach is suitable for small or medium-sized systems. But, for large, long-lifetime systems, a mixed process that incorporates the waterfall model and the evolutionary development model is more suitable.
iii) Formal Systems Development: The development process is based on the formal mathematical transformation of a system specification into an executable program.
(1) Main differences between formal systems development and the waterfall model: (a) The software requirements specification is refined into a detailed formal specification which is expressed in a mathematical notation; (b) The development processes of design, implementation and unit testing are replaced by a transformational development process.
(2) Advantages of the transformational approach: (a) It is easier to show that a program meets its specification, since the distance between each transformation step is less than the distance between a specification and a program; (b) A transformational approach made up of a sequence of smaller steps is easier to check.
(3) Disadvantages of the transformational approach: (a) Processes based on formal transformations are not widely used because they require specialised expertise; (b) Much system interaction is not amenable to formal specification, yet it takes up a very large part of the development effort for most software systems.
(4) This approach is particularly suited to: Development of systems that have tough safety, reliability or security requirements.
iv) Reuse-based Development: The system is assembled from existing, reusable components.
(1) Four main stages:
(a) Component Analysis: Components that match the given requirements specification are searched for.
(b) Requirements Modification: The requirements are modified to reflect the available components.
(c) System Design with Reuse: During this phase, the framework of the system is designed or an existing framework is reused; sometimes some new software may have to be designed if reusable components are not available.
(d) Development and Integration: Lastly, the software which cannot be bought in is developed, and the components and Commercial-Off-The-Shelf (COTS) systems are integrated to create the system.
(2) Advantages of the reuse-oriented model: It reduces the amount of software to be developed, reduces cost and risk, and allows faster delivery of the software.
(3) Disadvantages of the reuse-oriented model: Requirements compromises are inevitable, so the system may not meet the real needs of the users.
b) Process Iteration: Can be applied to any of the generic models. There are two types:
i) Incremental Development: In this approach, the
software specification, design and implementation are broken down into a series of increments. Introduced to reduce rework in the development
process.
(1) Advantages: (a) Customer value can be delivered with each increment, so system functionality is available earlier.
(b) Early increments act as a prototype to help validate requirements for later increments. (c) Lower risk of overall project failure. (d) The highest-priority system services tend to receive the most testing.
(2) Disadvantages: (a) Increments should be relatively small. (b) It may be difficult to map the customer's requirements onto increments of the right size.
(c) Most systems require a set of basic facilities which are used by different parts of the system, and these can be hard to identify because requirements are not defined in detail until an increment is implemented.
ii) Spiral Development: In this approach, the development of the system spirals outwards from an initial outline through to the final developed system. The process is represented as a spiral with backtracking, and each loop in the spiral represents a phase in the process.
(1) Each loop divided into 4 sectors: (a) Objective Setting: Specific objectives for the phase are
identified. (b) Risk Assessment and Reduction: Risks are assessed and
activities put in place to reduce the key risks.
(c) Development and Validation: A development model for the system is chosen which can be any of the generic models.
(d) Planning: The project is reviewed and the next phase of the spiral is planned.
(2) There are no fixed phases such as specification or design in the spiral model.
c) Software Specification (aka Requirements Engineering): The process of establishing what services are required and the constraints on the system's operation and development. It leads to the production of a requirements document (specification) for the system.
i) Four main phases:
(1) Feasibility Study: To assess whether the identified user needs can be satisfied using current software and hardware technologies in the new system.
(2) Requirements Analysis: To derive the system requirements through observation of existing systems and discussions with potential users.
(3) Requirements Specification: To translate the information gathered during the analysis activity into a document that defines a set of requirements. Two types: user requirements and system requirements.
(4) Requirements Validation: To check the requirements for realism, consistency and completeness; errors in the requirements document are inevitably discovered during this activity.
d) Software Design and Implementation: The process of converting the system specification into an executable system.
i) Divided into two parts:
(1) Software Design: Design a software structure that realises the specification.
(a) Design Methods: Structured methods have been applied successfully in many large projects, and CASE tools have been developed to support them. They may support some or all of the following models of a system:
(i) Data-flow model: The system is modelled using the data transformations which happen as data is processed;
(ii) Entity-relation model: Used to describe the basic entities and the relations between them in the design stage. Normally entity-relation models are used to describe database structures;
(iii) Structural model: Indicates the system components and their documented interactions;
(iv) Object-oriented methods: Include an inheritance model of the system, models of the static and dynamic linkages between objects, and a model of the interactions between objects when the system is executing.
(2) Implementation: Translate the design structure into an executable program.
(a) Programming and Debugging:
(i) Programming: The process of translating a design into a program and removing errors from it.
(ii) Debugging: The process of identifying and removing localised implementation errors or bugs. 1. Testing establishes the existence of defects. 2. Debugging is concerned with locating and correcting these defects.
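The testing/debugging distinction can be seen in a minimal unit test; the `average` function and the checks on it are invented for illustration:

```python
# Testing establishes the existence of defects; debugging locates and
# corrects them. A minimal unit test for an illustrative function.
def average(values):
    if not values:
        raise ValueError("average of an empty list is undefined")
    return sum(values) / len(values)

def test_average():
    # A failing assertion here would establish that a defect exists;
    # finding *where* the defect lives would be the debugging step.
    assert average([2, 4, 6]) == 4
    try:
        average([])
    except ValueError:
        pass  # expected: empty input is rejected
    else:
        raise AssertionError("empty input should be rejected")

test_average()
print("all unit tests passed")
```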
ii) Design descriptions produced from different stages:
(1) Architectural Design: The subsystems that make up the system and their relationships
are identified and documented.
(2) Abstract Specification: For each subsystem, an abstract specification of its services and
the operation constraints is produced.
(3) Interface Design: For each subsystem involved, its interface with other subsystems is designed and documented.
(4) Component Design: Services allocated to different components and components interfaces are designed.
(5) Data Structure Design: The data structures which are used in the system implementation are designed in
detail and specified.
(6) Algorithm Design: The algorithms used to provide services are designed in detail and specified.
e) Software Verification and Validation: Intended to show that a system conforms to its specification and meets the requirements of the system customer. It involves checking and review processes and system testing.
i) Testing Process Stages:
(1) Unit Testing: Individual components are tested.
(2) Module Testing: Related collections of dependent components are tested.
(3) Subsystem Testing: Modules are integrated into
subsystems and tested. The focus here should be on interface testing.
(4) System Testing: Testing of the system as a whole and testing of emergent properties.
(5) Acceptance Testing: Testing with customer data to check that it is acceptable.
ii) Two types of Testing:
(1) Alpha Testing (Acceptance testing): Continues until the system developer and client agree that the delivered system is an acceptable implementation of the system requirements.
(2) Beta Testing: Delivering a system to a number of potential customers who agree to use the system. Often used when system is to be marketed as a software product.
f) Software Evolution: Concerned with modifying existing software systems to meet new requirements. Software is inherently flexible and can change.
i) Software Maintenance: Process of changing that system once it has gone into use.
3) Software Prototyping
a) Software Prototype:
i) Supports two requirements engineering process activities:
(1) Requirements Elicitation
(2) Requirements Validation
ii) Can be used as a risk analysis and reduction technique.
iii) The overall development costs may be lower
iv) Usually leads to improvements in system specification. Once a prototype is available, it can be used for other purposes such as user training and system testing.
v) Usually increases cost in the
early stages of the software process but reduces later costs.
vi) Other benefits: (1) Misunderstandings between software developers & users identified as system functions are demonstrated.
(2) Software development staff may find incomplete or inconsistent requirements as the prototype is developed.
(3) A working, even though limited, system is available quickly to demonstrate the feasibility and usefulness of the application to management.
(4) The prototype may be used as a basis for writing the specification for a production-quality system.
vii) Things that can be left out of a prototype to reduce cost and accelerate the delivery schedule: (1) Some additional functionality;
(2) Non-functional requirements such as response time and memory utilisation;
(3) Error handling and management; and
(4) Standards of reliability and program quality.
b) Prototyping Approaches:
i) Evolutionary Prototyping: Starts by developing an initial implementation, gets comments from the user and modifies it continuously through many stages until an adequate system has been developed.
(1) Objectives: (a) To deliver a working system to end users.
(b) Should be developed to the same organisational quality standards as any other software.
(c) Starts with a simple system which implements the most important user requirements. Then, it is changed and improved as new requirements are discovered. In the end, it will become the required system.
(d) It is now the normal technique used for website development and e-commerce applications.
(2) Advantages: (a) Accelerated Delivery of the System
(b) User Engagement with the System
(3) Problems: (a) Management Problems: Prototypes evolve so fast that it is not cost-effective to produce a lot of system documentation.
(b) Maintenance Problems: Continual change tends to break the structure of the prototype system.
(c) Contractual Problems: When a specification is not available, it can be difficult to design a contract for the system development.
(4) Incremental Development: System components are incrementally developed and delivered within an agreed overall framework. Once these parts have been validated and delivered, neither the framework nor the components are changed unless errors are discovered.
(a) More manageable than evolutionary prototyping because the normal software process standards are followed. Plans and documentation must be produced for each system increment.
ii) Throw-away Prototyping: After evaluation, the prototype is discarded and a production-quality system is built.
(1) Objectives: (a) To validate or derive the system requirements (b) Has a very short lifetime. It can be changed during
development but long-‐term maintainability is not required
(c) Intended to help refine and clarify the system specification (d) The prototype evaluation informs the development of the
detailed system specification which is included in the system requirements document
(2) Usually used for hardware systems: The prototype is used to check the design before expensive commitments to manufacturing the system have been made.
iii) Rapid Prototyping Techniques: Development techniques that emphasise delivery speed rather than other system characteristics such as performance, maintainability or reliability.
(1) Dynamic high-level language development: (a) Dynamic high-level languages (HLL) are programming languages that include powerful run-time data management facilities.
(b) Things to consider when choosing a prototyping language: (i) The Application Domain of the Problem (ii) The User Interaction Required (iii) The Support Environment Provided with the Language
(2) Database Programming: Uses a specialised language that embeds knowledge of the database and includes operations related to database manipulation.
(a) The tools that are included in a 4GL environment: (i) SQL is normally used as the database query language. This may be input directly or generated automatically from the forms filled in by an end-user.
(ii) Forms for data input and display are created by using an interface editor.
(iii) A spreadsheet is used for the analysis and manipulation of numeric information. (iv) A report generator is used to design reports from information in the database.
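As a small taste of database programming, the sketch below uses Python's built-in sqlite3 module; the student table and the query are invented for the example:

```python
# A parameterised SQL query, as might be generated automatically from a
# form filled in by an end-user. Table and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")
conn.executemany("INSERT INTO student (name, gpa) VALUES (?, ?)",
                 [("Aishath", 3.7), ("Hassan", 3.1), ("Mariyam", 3.9)])

rows = conn.execute(
    "SELECT name, gpa FROM student WHERE gpa >= ? ORDER BY gpa DESC", (3.5,)
).fetchall()
print(rows)  # [('Mariyam', 3.9), ('Aishath', 3.7)]
```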
(3) Component Reuse (Component and application assembly): Prototyping with reusable components involves developing a system specification by considering what reusable components are available. The functionality of the available components may not match the user requirements precisely.
(a) Advantage: A lot of prototype functionality can be implemented quickly at a very low cost.
(b) Disadvantage: Users may find the reused applications difficult to learn, and may be confused by application functionality which is not necessary for their work.
iv) User Interface (UI) Prototyping: Prototyping is also essential for parts of the system such as the user interface
which cannot be effectively pre-‐specified.
(1) User-Centered Design: Where the user takes part in the interface design process. Evolutionary prototyping with end-user involvement is very important when developing graphical user interfaces for software systems.
4) Software Project Management
a) Project means a temporary endeavour undertaken to accomplish a unique purpose.
b) Project management is the application of knowledge, skills, tools and techniques to project activities in order to
meet or exceed stakeholder needs and expectations from a project.
c) Project manager is the person responsible for managing and leading the project team towards the completion of a project.
d) Tracking progress: The project manager needs to monitor and control the project development as well as the project team’s performance. Thus, he prepares a: i) Project Schedule: Describes the software development cycle for a particular project by identifying the phases
or stages of a project and breaking them into discrete tasks or activities to be done. The schedule also portrays the interactions among these activities and estimates the time each task or activity will take.
e) Deliverables that need to be prepared to keep track of the progress of the project:
i) Documents; ii) Demonstrations of function; iii) Demonstrations of subsystems;
iv) Demonstrations of accuracy; and v) Demonstrations of reliability, security or performance.
f) Then, the project activities and milestones need to be identified.
i) A Milestone: The completion of an activity; the end of a specially designated activity.
ii) An Activity: A part of the project that takes place over a period of time, and has a beginning and an end.
g) Work Breakdown Structure (WBS) is a method that divides the project into a set of discrete pieces of work. It can be used to show activities and milestones for developers and customers, but it gives no indication of the interdependence of the work units.
i) Usage of WBS:
(1) Thought Process Tool (2) Architectural Design Tool (3) Planning Tool (4) Project Status Reporting Tool
h) Work packages are smaller components and more manageable units of work. i) Provides a logical basis for defining the project activities and
assigning resources to those activities in order to identify all project works.
ii) Makes it possible to develop a project plan, schedule and budget
and later monitor the project’s progress.
i) Estimating Completion:
i) Analogous Approach (Top-down estimate): The analogous approach uses the actual costs and durations of previous, similar projects as the basis for estimating the current project.
ii) Parametric Modelling: Uses mathematical parameters to predict project costs.
iii) Bottom-‐up Estimate: Estimates the cost and duration of the individual work packages from the bottom row of activities of the work breakdown structure, and then totals the amounts up each row until reaching an estimate for the total project.
iv) Simulation: A computer calculates multiple costs or durations with different sets of assumptions.
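The bottom-up and simulation approaches can be sketched together over a hypothetical work breakdown structure (all work packages and figures below are invented):

```python
# Bottom-up estimate: total the individual work-package estimates.
# Simulation: compute many possible total durations under different
# sampled assumptions. All figures are illustrative.
import random

# Work packages: (name, cost estimate, (optimistic, likely, pessimistic) days)
wbs = [
    ("Requirements", 4000, (5, 8, 14)),
    ("Design",       6000, (8, 12, 20)),
    ("Coding",       9000, (10, 15, 30)),
    ("Testing",      5000, (6, 10, 18)),
]

total_cost = sum(cost for _, cost, _ in wbs)
print("Bottom-up cost estimate:", total_cost)  # 24000

# Sample each duration from a triangular distribution and total them,
# many times, to see the spread of possible project durations.
random.seed(1)
totals = sorted(
    sum(random.triangular(lo, hi, mode) for _, _, (lo, mode, hi) in wbs)
    for _ in range(10_000)
)
print("Median duration:", round(totals[5000], 1), "days")
print("90th percentile:", round(totals[9000], 1), "days")
```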
j) Project Tracking Tools: Normally, there are two main tools used in software project development:
i) Gantt chart is a bar-chart depiction of a project. The activities are shown in parallel, with the degree of completion indicated by a colour or icon.
ii) Network Diagram provides a visual layout of the sequence in which
project work flows. It includes detailed information and serves as an analytical tool for project scheduling and resource management problems as they arise during the life of the project.
(1) Dependency is simply a relationship that exists between pairs of activities.
(a) Finish-to-Start (FS): task A must be completed before task B can begin.
(b) Finish-to-Finish (FF): task B cannot finish sooner than task A.
(c) Start-to-Start (SS): task B may begin once task A has begun.
(d) Start-to-Finish (SF): a little more complex than the FS and SS dependencies; task B cannot finish until task A has started.
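Finish-to-start dependencies are what a network diagram encodes; below is a sketch of the earliest-start computation behind such a diagram, over a hypothetical four-task project:

```python
# Earliest-start scheduling over finish-to-start (FS) dependencies.
# Tasks, durations and dependencies are invented for illustration.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
fs_deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}  # task: its predecessors

earliest_start, earliest_finish = {}, {}
for task in ("A", "B", "C", "D"):  # already in topological order
    preds = fs_deps.get(task, [])
    # A task may start only when all its FS predecessors have finished.
    earliest_start[task] = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[task] = earliest_start[task] + durations[task]

print(earliest_finish)  # {'A': 3, 'B': 8, 'C': 5, 'D': 12}
```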
k) Project Personnel: The assignment of staff to tasks depends on project size, staff expertise and staff experience. There are clear advantages in assigning different responsibilities to different sets of people, because this can help identify faults early in the development process.
l) Project Organizing: team members are organised to enhance the fast completion of quality products.
i) The choice of an appropriate structure for software project depends on:
(1) The backgrounds and work styles of team members.
(2) The number of people on the team.
(3) The management styles of the customers and developers.
ii) Egoless approach holds everyone equally responsible for the project. The process is separated from the individuals; the judgment is made on the product or the result.
m) Risk Management:
i) Risks are uncertain events or conditions that, if they occur, have a positive or negative effect on the project
objectives. The purpose of risk management is to maximise the results of positive events and minimize the results of undesirable events.
ii) Identifying risk: Major categories of risks:
(1) Technical: New breakthroughs, design errors or omissions.
(2) Administrative: Culture of the organisation, change in management or priorities, office politics.
(3) Environmental: Weather, natural disasters and other external physical conditions.
(4) Financial: Budget cuts, cash flow problems, corporate unprofitability, unchecked expenditures, changing economic conditions.
(5) Resource Availability: Specialised skills or critical equipment not available.
(6) Human: Human error, poor worker performance, personality conflicts, communication breakdown.
(7) Logistical: Inability to deliver materials to the work face.
(8) Governmental: Government regulations.
(9) Market: Product fails in the marketplace, consumer expectations change, new competitor products.
iii) Assessing Risk: Mainly focused on prioritising risks so that an effective risk strategy can be formulated.
(1) Qualitative Approaches: Qualitative risk analysis focuses on a subjective analysis of risks based upon a project stakeholder’s experience or judgement. Techniques are:
(a) Expected Value: An average or mean that takes into account both the probability and impact of various events or outcomes.
(b) Decision Trees: Provide a visual or graphical view of various decisions and outcomes.
(c) Risk Impact Table: Used to analyse and prioritise various IT project risks.
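The expected-value technique can be sketched in a few lines; the risks, probabilities and impacts below are invented for illustration:

```python
# Expected value of a risk = probability x impact, used to build a
# simple risk impact table sorted by priority. Figures are illustrative.
risks = [
    ("Key developer leaves",     0.3, 40000),
    ("Requirements change late", 0.5, 25000),
    ("Hardware delivery slips",  0.1, 60000),
]

table = sorted(((name, p * impact) for name, p, impact in risks),
               key=lambda row: row[1], reverse=True)
for name, ev in table:
    print(f"{name:28s} expected value = {ev:,.0f}")
```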
(2) Quantitative Approaches: Quantitative approaches to project analysis include mathematical or statistical techniques that allow the team to model a particular risk situation. Examples are:
(a) Discrete Probability Distribution: Uses only integer or whole numbers; fractional values are not allowed.
(b) Continuous Probability Distribution: Useful for developing risk analysis models when an event has an infinite number of possible values within a stated range.
iv) Responding to Risk: The purpose of risk response is to minimise the probability and consequences of negative events and maximise the probability and consequences of positive events. They are:
(1) Planning Responses: A response plan should be developed before the risk event occurs. Then, if the event occurs, simply execute the plan that has already been developed.
(2) Possible Responses: In developing a response plan, consider ways to avoid the risk, transfer it to someone
else, mitigate it, or simply accept it.
(3) Avoiding: It may be possible to eliminate the cause, and therefore, prevent the risk from happening. This
may involve an alternative strategy for completing the project.
(4) Transferring: It may also be possible to transfer some risk to a third party, usually for the payment of a risk premium.
(5) Mitigating: Mitigation plans are steps taken to lower the probability of the risk event happening or reduce the impact that will occur.
(6) Accepting: When there is a low likelihood of a risk event, or when the potential impact on the project is
low, or when the cost of mitigation is high, a satisfactory response may be to accept the risk.
(7) Response Plan Outcomes: Contingency plans describe the actions to be taken if a risk event occurs. Reserves are provisions in the project plan to mitigate the impact of risk events.
5) Software Requirements a) Requirements are the descriptions of the services and constraints for the system.
b) Requirements engineering is the process of finding out, analysing, documenting and checking the services and constraints.
c) Requirement types: i) User Requirements: This requirement consists of statements of the
services that the system should provide and the constraints of the
system. Usually, it is stated using natural language and diagrams.
(1) Potential Problems when Requirements are Written in Natural Language:
(a) Lack of Clarity (b) Requirements Confusion (c) Requirements Combination
(2) Simple guidelines should be followed when writing requirements to minimise misunderstandings: (a) Invent a standard format and ensure that all requirement definitions follow that format;
(b) Use language consistently, especially when differentiating between mandatory and desirable requirements; (c) Use text highlighting such as bold and italic to pick out the key points of the requirement; and (d) Try to avoid computer jargon. Sometimes technical terms still have to be used in the user requirements, but they should be explained in a simpler way.
ii) System Requirements (Functional specifications): This requirement will describe the system services in detail.
Later, the documents will be used by software engineers as the starting point for the system design. It also
includes different models of the system, eg an object model or a data-‐flow model. It should state what the system should do and not how it should be implemented.
(1) Reasons why most of the design information is included in the system requirements document:
(a) An initial architecture of the system may be described to help in the construction of the requirements specification.
(b) Normally, system must interoperate with other existing systems.
(c) The use of a specific design may be an external system requirement.
(2) Problems with natural language can arise when used for more detailed specification such as:
(a) Natural language will work well only if readers and writers use the same words for the same concept. (b) Natural language requirements specification has excess flexibility. (c) It is difficult to revise or amend natural language requirements and find all related requirements.
(3) Alternatives that are used to add structure to the specification and to reduce ambiguity: (a) Structured Natural Language Structured natural language depends on defining standard forms or
templates to express the requirements specification. (b) Design Description Language Design description language uses a language like a programming
language but with more abstract features to specify the requirements by defining an operational model
of the system. (i) Program description language (PDL) is a language created from a programming language like Java
or Ada. 1. They are used in two situations:
a. When an operation is specified as a sequence of simpler actions and when the order of
execution is important. This is because descriptions of such sequences in natural language are sometimes confusing.
b. When hardware and software interfaces have to be specified. Usually, the interfaces
between subsystems are defined in the system requirements specification. 2. Disadvantages of using this approach to requirements specification:
a. The language used to write the specification may not be expressive enough to describe the system functionality.
b. The notation can only be understood by people who have some programming background.
c. The requirement may be treated as a design specification rather than a model to help the user understand the system.
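A minimal PDL-style sketch, written here as executable Python (the ATM operation and its stubbed helper functions are hypothetical), shows how a programming-language notation pins down an order of actions that a natural language description might leave ambiguous:

```python
# PDL-style specification of an operation as a sequence of simpler
# actions. The helpers are stubs that record the order of execution.
log = []

def validate_card(acct):      log.append("validate")
def available_balance(acct):  return acct["balance"]
def debit(acct, amount):      acct["balance"] -= amount; log.append("debit")
def dispense_cash(amount):    log.append("dispense")
def print_receipt(acct, amt): log.append("receipt")

def withdraw(account, amount):
    """The ordering of the steps below IS the specification."""
    validate_card(account)
    if amount > available_balance(account):
        log.append("reject: insufficient funds")
        return
    debit(account, amount)
    dispense_cash(amount)
    print_receipt(account, amount)

acct = {"balance": 100}
withdraw(acct, 40)
print(log)  # ['validate', 'debit', 'dispense', 'receipt']
```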
(c) Graphical Notations A graphical language, supplemented by text annotations, is used to define the
functional requirements for the system. An earlier example of such a graphical language was SADT. More recently, use-‐case descriptions have been used.
(d) Mathematical Specifications: These are notations based on mathematical concepts such as finite-state machines or sets. These unambiguous specifications reduce the arguments between customer and contractor about system functionality. However, most customers do not understand formal specifications and are reluctant to accept them as a system contract.
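A finite-state machine specification can be made executable almost directly; the door-controller states and events below are invented for illustration:

```python
# A transition table fully and unambiguously defines the behaviour of a
# hypothetical door controller: (current state, event) -> next state.
TRANSITIONS = {
    ("closed", "open_cmd"):      "opening",
    ("opening", "fully_open"):   "open",
    ("open", "close_cmd"):       "closing",
    ("closing", "fully_closed"): "closed",
    ("closing", "obstacle"):     "opening",  # safety: reopen on obstacle
}

def run(state, events):
    for e in events:
        state = TRANSITIONS.get((state, e), state)  # ignore invalid events
    return state

print(run("closed", ["open_cmd", "fully_open", "close_cmd", "obstacle"]))
# -> opening
```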
(4) Interface Specification: When new software systems need to be installed and used, most of the time they must operate with other systems which have been implemented and installed earlier. If the new system and the existing systems are required to work together, the interfaces of the existing systems must be correctly specified.
(a) Procedural Interfaces: Existing subsystems offer a range of services which are accessed by calling
interface procedures.
(b) Data Structures: Data structures are passed from one sub-system to another. A Java-based PDL may be used for this, with the data structure being described using a class definition whose attributes represent the fields of the structure.
(c) Representations of Data: Representations of data (such as the ordering of bits) that have been established for an existing subsystem.
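In Python, a dataclass can play the role such a class definition plays in a Java-based PDL; the record and its fields below are invented for illustration:

```python
# Specifying a data structure passed between subsystems as a class
# definition whose attributes represent the fields of the structure.
from dataclasses import dataclass

@dataclass
class WeatherRecord:
    station_id: str
    timestamp: int      # seconds since epoch
    temperature: float  # degrees Celsius
    wind_speed: float   # metres per second

record = WeatherRecord("MLE-01", 1700000000, 29.5, 4.2)
print(record.station_id, record.temperature)
```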
iii) Software Design: This document consists of an abstract description of the software design, which will later be used for more detailed design and implementation. It is an enhancement of the system requirements specification.
d) Software system requirements are classified as:
(1) Functional requirements: statements of the services the system should provide, how the system should react to particular inputs, and how it should behave in particular situations; sometimes they also state what the system should not do.
(2) Non-functional requirements: constraints on the system services or functions. They include timing constraints, constraints on the development process, standards, and others.
(3) Domain requirements: come from the application domain of the system and reflect the characteristics of that domain. They may be either functional or non-functional requirements.
e) The software requirements document (Software Requirements Specification, SRS) is the official statement of what is required of the system developers.
f) The Structure of a Requirements Document:
6) Requirements Engineering Process
a) Requirements Engineering: Includes all of the activities needed to create and maintain a system requirements document.
b) 4 Requirements Engineering Process Activities:
i) Feasibility Studies: A short and focused study.
(1) Input: An outline description of the system and how it will be used within an organisation.
(2) Outcome: A feasibility report, which should recommend whether or not to carry on with the requirements engineering and development process.
(3) Questions to be answered:
(a) Does the system contribute to the overall objectives of the organisation?
(b) Can the system be implemented using current technology and within the given cost and schedule constraints?
(c) Can the system be integrated with other systems which are already in place?
(4) Activities: Information Assessment, Information Collection and Report Writing.
(5) Sources of Information:
(a) Managers of the departments where the system will be used;
(b) Software engineers who are familiar with the type of system proposed;
(c) Technology experts; and
(d) End-users of the system.
ii) Requirements Elicitation and Analysis: An iterative process which involves domain understanding, requirements collection, classification, structuring, prioritisation and validation.
(1) Stakeholders: Everyone who has some direct or indirect influence on the system requirements
(2) It is a Difficult Process Because:
(a) Stakeholders often do not really know what they want from the computer system, and they may make unrealistic demands.
(b) Stakeholders normally express requirements in their own terms and with implicit knowledge of their own work.
(c) Stakeholders with different job descriptions normally have different requirements, which may be expressed in several different ways.
(d) Political factors may influence the requirements of the system.
(e) The economic and business environment in which the analysis takes place keeps changing.
(3) Techniques of Requirement Elicitation and Analysis
(a) Scenarios: Descriptions of how a system is used in practice. People can relate to these more readily than to abstract statements of what they require from a system.
(i) A scenario may include:
1. A description of the system state at the start of the scenario.
2. A description of the normal flow of events in the scenario.
3. A description of what can go wrong and how it is handled.
4. Information about other activities that may be going on at the same time.
5. A description of the system state at the end of the scenario.
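The five parts of a scenario listed above can be captured in a simple record structure. This is only an illustrative sketch; the field names and the sample scenario are invented, not a standard notation:

```python
from dataclasses import dataclass
from typing import List

# Illustrative structure mirroring the five parts of a scenario.
@dataclass
class Scenario:
    initial_state: str                # 1. system state at the start
    normal_flow: List[str]            # 2. normal flow of events
    exceptions: List[str]             # 3. what can go wrong / handling
    concurrent_activities: List[str]  # 4. other concurrent activities
    final_state: str                  # 5. system state at the end

withdraw_cash = Scenario(
    initial_state="user authenticated at ATM",
    normal_flow=["select amount", "debit account", "dispense cash"],
    exceptions=["insufficient funds: show message and abort"],
    concurrent_activities=["audit log records each step"],
    final_state="card returned, session closed",
)
print(len(withdraw_cash.normal_flow))   # 3
```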
(ii) Event scenarios are used to document the system behaviour when it is presented with specific events. They include a description of the data flows and the actions of the system, and they document exceptions.
(iii) Use-case: A scenario-based technique for requirements elicitation which identifies the actors involved in an interaction and names the type of interaction.
(b) Ethnography: A technique of observation that can be used to understand social and organisational requirements.
(i) The system analyst immerses himself or herself in the working environment where the system will be used.
(ii) It helps discover hidden system requirements which reflect the actual process.
(iii) Two types of requirements that ethnography is usually useful at discovering:
1. Requirements derived from the way people actually work, rather than the way process definitions say they should work.
2. Requirements derived from cooperation with and awareness of other people's activities.
iii) Requirements Specification:
iv) Requirements Validation: Demonstrating that the requirements match what the users asked for. It is concerned with finding problems with the requirements.
(1) Validation is important because errors in a requirements document can lead to extensive modification
costs when they are later discovered during development or after the system is in service.
(2) Different Types of Checks:
(a) Validity Checks: The requirements should reflect the real needs of system users.
(b) Consistency Checks: Requirements in the document should not conflict with each other.
(c) Completeness Checks: The requirements document should include requirements that define all functions and constraints requested by the system user.
(d) Realism Checks: All requirements should be checked to make sure that they can be implemented with the existing technology, budget and schedule for the system development.
(e) Verifiability: To minimise the potential for disagreement between customer and contractor, system requirements should always be written so that they are verifiable.
(3) Requirement Validation Techniques:
(a) Requirements Reviews: Systematic manual analysis of the requirements.
(b) Prototyping: Using an executable model of the system to check requirements.
(c) Test-case Generation: Developing tests for requirements to check testability.
(d) Automated Consistency Analysis: Checking the consistency of a structured requirements description.
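Automated consistency analysis depends on the requirements being held in a structured form. As a much-simplified sketch of the idea, assume each requirement sets one named attribute to one value; two requirements then conflict if they give the same attribute different values:

```python
# Minimal consistency check over structured requirements: each
# requirement is an (id, attribute, value) triple, and two
# requirements conflict if they assign different values to the
# same attribute.
def find_conflicts(requirements):
    seen = {}        # attribute -> (req_id, value) first seen
    conflicts = []
    for req_id, attribute, value in requirements:
        if attribute in seen and seen[attribute][1] != value:
            conflicts.append((seen[attribute][0], req_id, attribute))
        else:
            seen.setdefault(attribute, (req_id, value))
    return conflicts

reqs = [
    ("R1", "max_response_time_ms", 100),
    ("R2", "max_users", 500),
    ("R3", "max_response_time_ms", 250),   # conflicts with R1
]
print(find_conflicts(reqs))   # [('R1', 'R3', 'max_response_time_ms')]
```

Real tools work over far richer requirement models, but the principle is the same: a structured description makes conflicts mechanically detectable, which free-text requirements do not.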
(4) A requirements review is a manual process of checking the requirements document for anomalies and omissions, involving both client and contractor staff.
(a) In an Informal Requirements Review, developers simply discuss the requirements with as many system users as possible.
(b) In a Formal Requirements Review, the development team walks the client through the system requirements, explaining the implications of each requirement.
c) Requirements Management: The process of understanding and controlling changes to system requirements. Business, organisational and technical changes inevitably lead to changes in the requirements for a software system.
i) New requirements appear because:
(1) Different users have different requirements and priorities.
(2) The people who pay for a system and the users of that system are usually not the same people.
(3) The business and technical environment of the system changes frequently and these changes must be reflected in the system itself.
ii) Three principal stages of a change management process:
(1) Problem Analysis and Change Specification
(2) Change Analysis and Costing
(3) Change Implementation
7) User Interface Design a) Graphical User Interface (GUI):
i) Advantages:
(1) Easy to learn and use.
(2) Several windows can be opened for system interaction.
(3) Fast, full-screen interaction is possible, with immediate access to anywhere on the screen.
ii) Characteristics:
(1) Windows: Multiple windows allow display of different information simultaneously.
(2) Icons: Represents different types of information, easy and fast to understand.
(3) Menus: Commands are selected from a menu rather than typed in command language.
(4) Pointing: Selecting choices are fast with a pointing device such as a mouse.
(5) Graphics: Graphical elements can be mixed with text on the same display.
iii) User Interface Design Process:
(1) An exploratory, prototyping approach is considered the most effective for interface design.
(2) This prototyping process can begin with simple paper-based interface designs before moving on to screen-based designs that simulate user interaction.
(3) A user-centred approach should be used, where the end-users of the system play an active part in the design process.
iv) Techniques used to understand the user's needs: task analysis, ethnographic studies, user interviews and observations, or a mixture of all of these techniques.
b) User Interface Design Principles:
i) User Familiarity Principle: states that users should not be forced to adapt to an interface just because it is convenient to implement.
(1) The interface should use terms and concepts which are drawn from the experience of the people who will make most use of the system
ii) Consistency Principle: System commands and menus should have the same format, command punctuation
should be the same and parameters should be passed to all commands in the same way.
(1) The interface should be consistent in that, wherever possible, comparable operations should be activated in the same way.
iii) Minimal Surprise Principle: Users can get irritated when a system behaves unexpectedly.
(1) Users should never be surprised by the behaviour of a system.
iv) Recoverability Principle: Users cannot avoid making mistakes when using a system.
(1) The interface should include mechanisms to allow users to recover from errors.
v) User Guidance (Assistance) Principle: states that interfaces should have built-in user help facilities.
(1) The interface should provide meaningful feedback when errors occur and provide context-sensitive user help facilities.
vi) User Diversity Principle: states that many interactive systems have different types of users.
(1) The interface should provide appropriate interaction facilities for different types of system user.
(2) Two types of users:
(a) Casual Users: users who interact with the system occasionally.
(b) Power Users: users who use the system for several hours each day.
vii) The principle of acknowledging user diversity can conflict with the other interface design principles, because some types of user may prefer very rapid interaction over user interface consistency.
c) User Interaction: Giving commands and data to the computer system.
i) Direct Manipulation: For example, to delete a file, the user can drag it to a trashcan on the screen.
ii) Menu Selection: The user selects a command from a list of choices.
iii) Form Fill-in: The user fills in the fields of a form.
iv) Command Language: The user issues a special command and associated parameters to instruct the system what to do.
v) Natural Language: For example, to delete a file, the user could type 'delete the file named xxx'.
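The command-language style of interaction (iv above) can be illustrated with a toy interpreter. The command names, behaviour and file set are invented for illustration only:

```python
# Toy command-language interpreter: the user types a command word
# followed by parameters, and the system dispatches on the command.
files = {"notes.txt", "report.doc"}

def execute(command_line):
    parts = command_line.split()
    command, args = parts[0], parts[1:]
    if command == "delete" and args:
        files.discard(args[0])          # remove the named file
        return f"deleted {args[0]}"
    if command == "list":
        return ", ".join(sorted(files))
    return f"unknown command: {command}"

print(execute("delete notes.txt"))   # deleted notes.txt
print(execute("list"))               # report.doc
```

Compared with menu selection, this style is faster for power users but offers no guidance to casual users, which is why many systems provide both.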
d) Information Presentation: By separating the presentation system from the data, the representation on the user's screen can be changed without changing the underlying computational system.
i) Model-View-Controller (MVC): First used in Smalltalk. It is a useful way to support multiple presentations of the same data, letting users interact with each presentation in a style suited to it. The data to be displayed is encapsulated in a model object. Each model object can have several separate view objects associated with it, where each view is a different display representation of the model.
ii) Factors to consider when deciding how to present information:
(1) The user's interest in specific information or in the relationships between different data values.
(2) How quickly the information values change.
(3) Whether the user must respond to information changes.
(4) Whether the user needs to interact with the displayed information.
(5) The type of information to be displayed, for example textual or numeric.
iii) Guidelines for Effective Colour Use in User Interfaces:
(1) Limit the number of colours used and be conservative in how they are used.
(2) Use colour change to show a change in system status.
(3) Use colour coding to support the task the user is trying to perform.
(4) Use colour coding in a thoughtful and consistent way.
(5) Be careful about colour pairings.
iv) Two most frequent Errors made by designers when using colour in a user interface:
(1) Using too many colours in a display,
(2) Associating meanings with particular colours. Colour should not be used as the sole carrier of meaning because:
(a) About 10% of men are colour-blind.
(b) Human colour perceptions differ, and different professions attach different interpretations to particular colours.
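Returning to the Model-View-Controller idea in d) i) above: a minimal sketch of the separation, where one model object notifies several attached view objects, each giving a different display representation of the same data. The class and method names are my own, not Smalltalk's:

```python
# Minimal MVC sketch: the model holds the data and notifies its views;
# each view renders the same data differently.
class Model:
    def __init__(self, values):
        self.values = values
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def update(self, values):       # a controller calls this on user input
        self.values = values
        for view in self.views:     # every attached view re-renders
            view.render(self.values)

class TableView:                    # one representation: raw values
    def render(self, values):
        print("table:", values)

class SummaryView:                  # another representation: a total
    def render(self, values):
        print("total:", sum(values))

model = Model([1, 2, 3])
model.attach(TableView())
model.attach(SummaryView())
model.update([4, 5, 6])   # both views re-render from the same model
```

Changing how a view renders (or adding a new view) requires no change to the model, which is exactly the presentation/data separation described above.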
e) User Support: Help systems are one part of user interface design, used for user guidance.
i) Three areas covered by user support and help systems:
(1) The messages produced by the system in response to user actions.
(2) The online help system.
(3) The documentation supplied with the system.
ii) Factors that should be considered when designing help text or error messages:
(1) Context: The system should be aware of what the user is doing and should adjust its messages to the current context.
(2) Experience: Should provide both long meaningful messages (for beginners) and short messages (for more experienced users) and allow the user to control message conciseness.
(3) Skill Level: Messages should be tailored to the users’ skills as well as their experience (terminologies etc).
(4) Style: Messages should be positive rather than negative. They should use the active rather than the passive mode of address. They should never be insulting or try to be funny.
(5) Culture: Wherever possible, the designer of messages should be familiar with the culture of the country where the system is sold. A suitable message for one culture might be unacceptable in another.
iii) Error Messages: Very important, as they can form a user's first impression of a system. Error messages should always be polite, concise, consistent and constructive. A good error message should suggest how the error can be corrected and provide a link to a help system.
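The experience factor above (long, guiding messages for beginners; terse ones for experts) can be sketched as a simple message selector. The error code, the messages and the level names are invented for illustration:

```python
# Illustrative selection of an error message by user experience level.
MESSAGES = {
    "E17": {
        "novice": ("The patient you entered is not registered. "
                   "Click Patients to register the patient, then retry. "
                   "Click Help for more information."),
        "expert": "Error 17: patient not registered.",
    }
}

def error_message(code, experience):
    # Look up the wording appropriate to this user's experience.
    return MESSAGES[code][experience]

print(error_message("E17", "expert"))   # Error 17: patient not registered.
```

A real system would also let the user switch levels, matching the guideline that the user should control message conciseness.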
iv) Help System Design: The structure of the help frame network is usually hierarchical with cross-links. General information is held at the top of the hierarchy, while detailed information is located at the bottom.
f) User Documentation (System Manuals): Important for guiding users in how to use a particular system.
i) Functional Description: Describes, very briefly, the services which the system provides
ii) Installation Document: Contains details of how to install the system
iii) Introductory Manual: Presents an informal introduction to the system, describing its normal usage, how to get started, and how end-users might use the common system facilities.
iv) Reference Manual: Explains the system facilities and their usage, gives a list of error messages and their possible causes, and explains how to recover from detected errors.
v) Administrator's Manual: Explains the messages generated when the system interacts with other systems and how to respond to these messages.
g) Interface Evaluation: The process of testing the usability of an interface and whether it meets user requirements.
i) Usability Attributes:
(1) Learnability: How long does it take a new user to become productive with the system?
(2) Speed of Operation: How well does the system response match the user's work practice?
(3) Robustness: How tolerant is the system of user error?
(4) Recoverability: How good is the system at recovering from user errors?
(5) Adaptability: How closely is the system tied to a single model of work?
ii) Simpler, Less Expensive Techniques of User Interface Evaluation:
(1) Questionnaires that are used to collect information about user’s opinion of the interface.
(2) Observation and Interview of users working with the system.
(3) Video recording of typical system use.
(4) Including code in the software that collects information about the most-used facilities and the most common errors.
8) Design with Re-Use
a) Design with Re-Use: Involves designing the software based on existing examples of good design, and using existing software components where they are available and suitable.
b) Two Types of Re-Use:
i) Opportunistic Re-Use: Components are reused during programming whenever they happen to suit a requirement.
ii) Systematic Re-Use: Requires a design process that considers how existing designs can be reused and that incorporates the designs available in software components.
c) Reuse-based Software Engineering: An approach to software development which tries to maximise the reuse of existing software.
d) Three main requirements for component re-use:
i) The reusable components need to be kept for future use.
ii) The people who reuse the components must have confidence that the components are reliable and functional.
iii) The reusable components must have associated documentation to help the people who want to reuse them understand them and adapt them to a new application.
e) Advantages of Software Re-Use:
i) Reduced Overall Development Costs: Fewer software components need to be specified, designed, implemented and validated.
ii) Increased Reliability: Reused components that have been implemented in working systems should be more reliable than new components.
iii) Reduced Process Risk: The uncertainty in the cost of reusing an existing software component is less than that of developing a new component.
iv) Effective Use of Specialists: Application specialists can develop reusable components that capture their knowledge, instead of repeating the same work on different projects.
v) Standard Compliance: The use of standard user interfaces enhances reliability as users make fewer mistakes when given a familiar interface.
vi) Accelerated Development: Speed of system production is increased because time for both development and
validation should be reduced.
f) Disadvantages of Software Re-Use:
i) Increased Maintenance Costs: Reused elements of the system may become incompatible with system changes, especially if their source code is not available.
ii) Lack of Tool Support: It is difficult to find tools that support component reuse; for example, many CASE tools do not support it.
iii) Not-Invented-Here Syndrome: Writing new software is seen as more challenging than reusing other people's software.
iv) Finding and Adapting Reusable Components: Software components have to be found in a library or archive, understood, and adapted to work in a new environment.
g) Generator-Based Re-Use: Re-usable knowledge is embedded in a program generator system that can be programmed in a domain-oriented language.
i) Cost-effective.
ii) Depends on the identification of conventional domain abstractions.
iii) Used in business data processing, language processing, and command and control systems.
iv) Makes it easier for end-users to develop programs.
h) Component-Based Development: Emerged because object-oriented development did not lead to as much reuse as originally hoped.
i) Stand-alone Providers: When a system needs some service, it calls on a component to provide that service, without any concern about where that component is executing or the programming language used to develop it.
ii) Two Characteristics of a Reusable Component:
(1) The component is treated as an independent executable entity; its source code is not available.
(2) Components publish their interfaces, and all interactions take place through those interfaces.
iii) Component Interfaces:
(1) Requires Interface: Indicates what services must be provided by the system that is using the component.
(2) Provides Interface: Describes the services provided by the component.
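The requires/provides distinction can be sketched with abstract base classes. This is a minimal illustration; the component, its services and all names are hypothetical:

```python
from abc import ABC, abstractmethod

# 'Requires' interface: a service the host system must supply
# to the component.
class Logger(ABC):
    @abstractmethod
    def log(self, message): ...

# 'Provides' interface: the service the component offers to its users.
class Spellchecker(ABC):
    @abstractmethod
    def check(self, text): ...

# The component implements its provides interface and is handed an
# implementation of its requires interface; all interaction goes
# through these interfaces, never through the component's internals.
class SimpleSpellchecker(Spellchecker):
    def __init__(self, logger: Logger, dictionary):
        self._logger = logger
        self._words = set(dictionary)

    def check(self, text):
        unknown = [w for w in text.split() if w not in self._words]
        self._logger.log(f"checked {len(text.split())} words")
        return unknown

class PrintLogger(Logger):
    def log(self, message):
        print(message)

sc = SimpleSpellchecker(PrintLogger(), ["the", "cat", "sat"])
print(sc.check("the cat zat"))   # ['zat']
```

Because the component only names the interfaces it requires, any host that can supply a `Logger` can use it, which is the independence property (2) above.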
iv) Five Different Levels of Abstraction:
(1) Functional Abstraction: The component implements a single function, such as a mathematical function.
(2) Casual Groupings: The component is a collection of loosely related entities, which could include data declarations, functions and more.
(3) Data Abstractions: The component represents a data abstraction or class in an object-oriented language.
(4) Cluster Abstractions: The component is a group of interrelated classes that work together. These groups of classes are sometimes known as frameworks.
(5) System Abstractions: The component is an entire self-contained system. Reusing system-level abstractions is sometimes called COTS (Commercial-Off-The-Shelf) product reuse.
v) Users of object-oriented development suggested that objects were the most appropriate abstraction for reuse.
vi) Frameworks (or Application Frameworks): A subsystem design made up of a collection of abstract and concrete classes and the interfaces between them.
(1) Three Classes of Frameworks:
(a) System Infrastructure Frameworks: Support the development of system infrastructure such as communications, user interfaces and compilers.
(b) Middleware Integration Frameworks: Consist of a set of standards and associated object classes that support component communication and information exchange. Eg: CORBA, DCOM, Java Beans.
(c) Enterprise Application Frameworks: Related to specific application domains. Eg: telecom systems.
(2) The primary problem with frameworks is their inherent complexity and the time it takes to learn to use them, which makes their cost of introduction high.
vii) Commercial-Off-The-Shelf (COTS): Systems offered by a third-party vendor. COTS reuse is concerned with the reuse of large-scale, off-the-shelf systems. These provide a lot of functionality, and their reuse can greatly reduce costs and development time.
(1) Advantage of COTS: COTS products offer more functionality than specialised components.
(2) Four Problems with COTS System Integration:
(a) Lack of control over functionality and performance.
(b) Problems with COTS system interoperability.
(c) No control over system evolution.
(d) Uncertain support from COTS vendors.
viii) Component Development for Reuse:
(1) Reusable components may be constructed by generalising existing components that have been successfully reused.
(2) Component characteristics that lead to reusability:
(a) Stable domain abstractions: the main concepts in the application domain, which change slowly.
(b) The component should hide the way its state is represented and should provide operations that allow the state to be accessed and updated.
(c) The component should be as independent as possible (stand-alone).
(d) All exceptions should be part of the component interface.
i) Design Patterns: In software design, design patterns have mostly been associated with object-oriented design. Published patterns often rely on object characteristics such as inheritance, but the general principle is equally applicable to all approaches to software design.
i) Four Essential Elements of a Design Pattern:
(1) A Name
(2) A Description of the Problem Area
(3) A Solution Description
(4) A Statement of the Consequences: the results and trade-offs of applying the pattern.
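As an illustration of these four elements, here is a sketch of one well-known pattern, Observer (chosen here as an example; the module text does not single out a particular pattern). Name: Observer. Problem: several objects must stay consistent with one changing object without being tightly coupled to it. Solution: the subject keeps a list of observers and notifies each one on every state change. Consequences: observers can be added or removed at run time, at the cost of indirect, harder-to-trace updates.

```python
# Observer pattern sketch: a subject notifies its registered
# observers whenever its state changes.
class Subject:
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for observer in self._observers:   # the notification step
            observer.notify(state)

class HistoryObserver:
    def __init__(self):
        self.history = []                  # records every state seen

    def notify(self, state):
        self.history.append(state)

subject = Subject()
watcher = HistoryObserver()
subject.attach(watcher)
subject.set_state("open")
subject.set_state("closed")
print(watcher.history)   # ['open', 'closed']
```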
9) Verification & Validation
a) Verification and Validation: The process of looking for defects in a software system, conducted throughout the whole project lifecycle. It starts with requirements reviews and continues through design reviews and code inspections to product testing.
i) Verification: The process of checking that the software conforms to its specification.
ii) Validation: A more general process, which ensures that the software meets the expectations of the customer.
iii) Main goal of verification and validation: to make sure that the software is good enough for its intended purpose.
iv) It should include activities such as:
(1) Drawing up standards and procedures for software inspections and testing.
(2) Establishing checklists to drive program inspections, and defining the software test plan.
b) Debugging: The process of locating and correcting defects.
c) Regression Testing: Re-inspecting the program or repeating previous test runs, to check that new changes to a program do not introduce new errors into the system.
d) The required level of confidence in the software depends on:
i) Software Function: The level of confidence depends on how critical the software is to an organisation.
ii) User Expectations: Many users have low expectations of their software.
e) System checking and analysis techniques that can be used in the verification and validation process:
i) Software Testing: Executing an implementation of the software with test data and examining the outputs and operational behaviour, to find out whether it performs as required.
(1) Two distinct types of testing that may be used at different stages in the software process:
(a) Defect testing aims to find inconsistencies between a program and its specification. These inconsistencies are normally caused by program faults or defects.
(b) Statistical testing is done to assess the program's performance and reliability and to test how it works under operational conditions. Tests are designed to reflect actual user inputs and their frequency.
(2) Testing is very important for: reliability assessment, performance analysis, user interface validation, and checking whether the software requirements match the user specifications.
(3) Major Components of a Test Plan:
(a) Testing Process: A description of the major phases of the testing process.
(b) Requirements Traceability: Testing should be planned so that all requirements are individually tested.
(c) Tested Items: The software process products that will be tested should be identified.
(d) Testing Schedule: The overall testing schedule, including resource allocation.
(e) Test Recording Procedures: Test results should be systematically recorded, for ease of auditing.
(f) Hardware & Software Requirements: The software tools and estimated hardware required.
(g) Constraints: Constraints that affect the testing process, such as staff shortages.
ii) Software Inspections: Aim to ensure that the software being developed does not contain errors. Inspections involve analysing and checking system representations such as the requirements document, design diagrams and code, and can be applied at all stages of the process.
(1) Inspection techniques include:
(a) Program inspections,
(b) Automated source code analysis, and
(c) Formal verification.
(2) Two reasons why inspections are more effective than testing for discovering defects:
(a) During a single inspection session, many different defects may be detected. With testing, one defect may hide others, so defects are often found only one per test.
(b) Reviews and inspections reuse domain and programming language knowledge.
(3) Program Inspection: Widely used to detect program defects. The inspection process should be guided by a defect checklist, and the program code is thoroughly checked by a small team.
(a) Roles of Team Members in the Inspection Process:
(i) Author or Owner: The programmer or designer responsible for producing the program or document.
(ii) Inspector: Responsible for finding errors, omissions and inconsistencies in programs and documents.
(iii) Reader: Responsible for paraphrasing the code or document at an inspection meeting.
(iv) Scribe: Responsible for recording the results of the inspection meeting.
(v) Chairman or Moderator: Responsible for managing the process and facilitating the inspection.
(vi) Chief Moderator: Responsible for inspection process improvements, checklist updating, standards development, and so on.
(b) Among the responsibilities of the moderator are:
(i) Selecting an inspection team;
(ii) Organising a meeting room; (iii) Making sure that the material to be inspected and its specifications are complete.
(c) Before a program inspection starts, it is important that:
(i) The inspection team has an accurate specification of the code to be inspected.
(ii) The members of the inspection team know the organisational standards well.
(iii) The latest, syntactically correct version of the code is available.
(d) Inspection Process:
(i) The program is given to the inspection team during the overview stage, where the author describes what the program is supposed to do.
(ii) This is followed by a period of individual preparation, where each member tries to find defects, anomalies and non-compliance with standards in the code, without suggesting how to correct them.
(iii) After the inspection meeting, the programmer corrects the identified problems.
(iv) In the follow-up stage, the moderator decides whether re-inspection of the code is needed.
(v) Lastly, the document is approved by the moderator for release.
(e) Inspection Checks (also used in Automated Static Analysis):
(i) Data Faults: Are variables initialised before their values are used? Etc.
(ii) Control Faults: Are condition statements correct? Is each loop certain to terminate? Etc.
(iii) Input/Output Faults: Are all input variables used? Are output variables assigned a value? Etc.
(iv) Interface Faults: Do functions have the correct number of parameters? Do parameter types match? Etc.
(v) Storage Management Faults: Have all links been correctly reassigned after modification? Etc.
(vi) Exception Management Faults: Have all possible error conditions been taken into account?
(4) Automated Static Analysis:
(a) Static Program Analysers: Software tools that scan the source text of a program and detect possible faults and anomalies.
(b) 5 Stages of Static Analysis:
(i) Control Flow Analysis: Looks for loops with multiple exit or entry points and for unreachable code.
(ii) Data Use Analysis: Emphasises how variables in the program are used.
(iii) Interface Analysis: Inspects the consistency of routine and procedure declarations and their use.
(iv) Information Flow Analysis: Identifies the dependencies between input variables and output variables.
(v) Path Analysis: This semantic phase identifies all possible paths through the program and sets out the statements executed in each path.
f) Cleanroom Software Development: A software development philosophy that uses a rigorous inspection process to avoid software defects. Its goal is to produce zero-defect software.
i) 5 Main Characteristics of Cleanroom SD
(1) Formal Specification: The software that will be developed is formally specified.
(2) Incremental Development: The software is divided into increments that are developed and validated separately.
(3) Structured Programming: Only a limited number of control and data abstraction constructs are used.
(4) Static Verification: The software being developed is statically verified using rigorous software inspections.
(5) Statistical Testing of the System: To determine the reliability of each integrated software increment, it is tested statistically using an operational profile developed in line with the system specification.
ii) Cleanroom Process Teams:
(1) Specification Team: Responsible for developing and maintaining system specifications.
(2) Development Team: Responsible for Developing and Verifying the software.
(3) Certification Team: Responsible for developing statistical tests to exercise the software as it is developed.
10) Software Testing:
a) Defect Testing: Discovers hidden defects in the software system before it is delivered to the customer.
(1) A successful defect test is a test which causes the system to perform improperly and so reveals a defect.
(2) Test Cases: Specifications of the test inputs and expected system outputs, together with a statement of what is being tested.
(3) Guidelines for Testing:
(a) All system functions that are accessed through menus should be tested.
(b) Combinations of functions should be tested, e.g. text formatting functions accessed through the same menu.
(c) Where user input is required, all functions should be tested with both correct and incorrect input.
ii) Exhaustive Testing: Testing every program execution sequence (this is impractical).
iii) Defect Testing Techniques:
(1) Black-‐Box (Functional) Testing: Tests are designed based on the program or
component specification. The tester gives input to the component or the system and examines the corresponding outputs. If the outputs are not as expected, then a problem has been detected. It does not require access to source code
(2) Equivalence Partitioning: A way of deriving test cases that depends on finding partitions in the input and output data sets and running the program with values from these partitions. Tests are successful if the program handles values from every partition, including the invalid ones, correctly.
(a) Input Equivalence Partitions: Sets of data where all set members are processed in an equivalent way.
(b) Output Equivalence Partitions: Program outputs which share common characteristics, so they can be considered as a separate class.
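A minimal sketch of equivalence partitioning (the validation function and the partition boundaries are illustrative assumptions, not from the notes): a routine accepting a month number 1..12 has one valid input partition and two invalid ones, and one representative value is tested from each.

```python
def is_valid_month(month: int) -> bool:
    """Return True when month is in the valid range 1..12."""
    return 1 <= month <= 12

# One representative test value per input equivalence partition:
#   invalid partition below the range: month < 1
#   valid partition:                   1 <= month <= 12
#   invalid partition above the range: month > 12
assert is_valid_month(0) is False   # below-range partition
assert is_valid_month(6) is True    # valid partition
assert is_valid_month(13) is False  # above-range partition
```

Because all members of a partition are processed in an equivalent way, one value per partition is assumed to stand for the whole set.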
(3) Structural (White-Box, Glass-Box, Clear-Box) Testing: Tests are derived from knowledge of the software's structure and implementation. The purpose of structural testing is to make sure each independent program path is executed at least once.
(a) Applied to small program units: a subroutine or the operations associated with an object.
(b) Code Analysis: Used to find out how many test cases are needed to make sure that all of the statements in a program or component are executed at least once.
(4) Path Testing: A structural testing strategy whose objective is to execute every independent execution path through a component or program.
(a) If every independent path is executed, then all statements in the component must have been executed at least once.
(b) Used at the unit testing and module testing stages of the testing process.
(c) Skeletal Model: The starting point for path testing; a program flow graph which consists of all paths through the program.
(d) The number of independent paths in a program can be found by computing the Cyclomatic Complexity (CC) of its connected flow graph (G).
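The standard formula for cyclomatic complexity, where E is the number of edges and N the number of nodes in the connected flow graph G, is CC(G) = E − N + 2; equivalently, the number of simple decisions plus one. As a minimal sketch (the function and its tests are illustrative assumptions, not from the notes), a function with a single if/else has CC = 2, so two test cases are enough to execute both independent paths:

```python
def absolute(x: float) -> float:
    """Flow graph has one decision node -> CC = 2, two independent paths."""
    if x < 0:        # path 1: the branch is taken
        return -x
    return x         # path 2: the branch is not taken

# One test case per independent path:
assert absolute(-3.0) == 3.0  # exercises path 1
assert absolute(4.0) == 4.0   # exercises path 2
```

With more decisions, CC grows, and so does the minimum number of test cases needed to cover every independent path.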
b) Integration Testing: The process of building the system and testing the complete system for problems that are caused by component interactions. Integration tests should be derived from the system specification.
(1) Main problem: Localising the errors found during the process.
ii) Incremental Approach: Used to make it easier to locate errors. A minimal system configuration is integrated and tested first; components are then added to this minimal configuration and the system is tested after each increment.
iii) Top-Down and Bottom-Up Testing:
(1) Top-Down Testing: An essential part of a top-down development process, where testing begins with the high-level components and works down the component hierarchy.
(2) Bottom-Up Testing: Involves integrating and testing the modules at the lower levels of the hierarchy, then working up the hierarchy of modules until the final module is tested.
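Top-down testing can be sketched as follows (all names here are illustrative assumptions): the high-level component is exercised first, with the lower-level component it depends on replaced by a stub that returns a fixed value.

```python
def load_rate_stub(currency: str) -> float:
    """Stub standing in for a lower-level lookup component not yet written."""
    return 1.0  # fixed placeholder value so the top level can be tested

def convert(amount: float, currency: str, load_rate=load_rate_stub) -> float:
    """High-level component under test; the rate lookup is injected."""
    return amount * load_rate(currency)

# Top-down test: exercises convert() before the real lookup exists.
assert convert(250.0, "USD") == 250.0
```

Bottom-up testing inverts this: the real low-level components are tested first, with test drivers standing in for their callers.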
iv) Interface Testing: To detect errors that could be introduced into the system because of interface errors or invalid input to the interfaces. Very important for Object-Oriented development.
(1) Types of Interfaces:
(a) Parameter Interfaces: Data is passed from one procedure to another.
(b) Shared Memory Interfaces: A block of memory is shared between procedures.
(c) Procedural Interfaces: A subsystem encapsulates a set of procedures to be called by other subsystems.
(d) Message Passing Interfaces: Subsystems request services from other subsystems.
(2) Classes (Three Categories) of Interface Errors:
(a) Interface Misuse: A calling component calls another component and makes an error in its use of the interface.
(b) Interface Misunderstanding: A calling component misunderstands the interface specification of the called component and makes wrong assumptions about its behaviour.
(c) Timing Errors: Occur in real-time systems that use a shared memory or message passing interface.
v) Stress Testing: Planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.
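Interface misuse, the first class of interface error listed above, can be sketched with a hypothetical parameter interface (the function is an illustrative assumption): two arguments of the same type are passed in the wrong order, so the call succeeds yet behaves wrongly.

```python
def transfer(from_account: str, to_account: str, amount: float) -> dict:
    """Parameter interface: the order of the two account arguments matters."""
    return {"from": from_account, "to": to_account, "amount": amount}

# Correct use of the parameter interface:
ok = transfer("A-001", "B-002", 50.0)
assert ok["from"] == "A-001"

# Interface misuse: same types, wrong argument order -- the call succeeds,
# but the money flows the wrong way; only interface testing will reveal it.
bad = transfer("B-002", "A-001", 50.0)
assert bad["from"] == "B-002"  # defect: the intended sender was A-001
```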
c) Testing Workbench Tools:
i) Test Manager: Used to manage the execution of program tests.
ii) Test Data Generator: Used to generate test data for the program to be tested.
iii) Oracle: Used to generate predictions of expected test results.
iv) File Comparator: Used to compare the results of program tests with previous test results and report the differences between them.
v) Report Generator: Used to provide report definition and generation facilities for test results.
vi) Dynamic Analyser: Used to add code to a program to count how many times each statement has been executed.
vii) Simulator:
(1) Target simulators simulate the machine on which the program will be executed.
(2) User interface simulators are script-driven programs that simulate multiple simultaneous user interactions.
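A dynamic analyser of the kind described above can be sketched with CPython's tracing hook (a simplified illustration, not a production tool): it counts how many times each line of a traced function is executed.

```python
import sys
from collections import Counter

line_counts = Counter()

def tracer(frame, event, arg):
    """Trace hook: count every executed line by (filename, line number)."""
    if event == "line":
        line_counts[(frame.f_code.co_filename, frame.f_lineno)] += 1
    return tracer

def summed(n: int) -> int:
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)         # start dynamic analysis
result = summed(3)
sys.settrace(None)           # stop dynamic analysis

assert result == 3
# The loop body line executed once per iteration:
assert max(line_counts.values()) >= 3
```

Real dynamic analysers usually instrument the program (insert counting code) rather than trace it, but the measurement they produce is the same per-statement execution count.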
11) Process Improvement and Software Quality Assurance:
a) Process Improvement: Includes process analysis, standardisation, measurement and change.
b) Process Characteristics:
i) Understandability: To what extent is the process explicitly defined, and how easy is it to understand the definition?
ii) Visibility: Do the process activities culminate in clear results, so that the progress of the process is externally visible?
iii) Supportability: To what extent can the process activities be supported by CASE tools?
iv) Acceptability: Is the defined process acceptable to and usable by the engineers producing the software?
v) Reliability: Is the process designed so that process errors are avoided or trapped before they result in product errors?
vi) Robustness: Can the process continue in spite of unexpected problems?
vii) Maintainability: Can the process evolve to reflect changing organisational requirements?
viii) Rapidity: How fast can the process of delivering a system from a given specification be completed?
c) Process Improvement Procedure:
i) Process Analysis: Includes the activities of examining existing processes and producing a process model in order to document and understand the process.
ii) Improvement Identification: Using the process analysis results to identify quality, schedule or cost problems where process factors might influence product quality.
iii) Process Change Introduction: Introducing new procedures, methods and tools, and integrating them with other process activities.
iv) Process Change Training: It is impossible to get the full benefits of process changes without training.
v) Change Tuning: When minor problems are discovered, modifications to the process are proposed and introduced. Tuning should continue for several months until the software engineers are satisfied with the new process.
d) Purpose of Process Improvement: To reduce the number of product defects.
e) Large Projects: The main factor determining product quality is the software process.
i) Biggest problems with large projects: integration, project management and communications.
f) Small Projects: The quality of the development team is more important than the development process used.
g) Process Analysis & Modelling: Requires studying existing processes and developing an abstract model of those processes that identifies their main characteristics.
i) Process Analysis: The study of existing processes to understand the relationships between the different parts of the process.
ii) Process Analysis Techniques:
(1) Questionnaires & Interviews: Software engineers on the project are asked about what happens in the project; the answers are then used during personal interviews with those involved in the process.
(2) Ethnographic Studies: Used to understand the nature of software development as a human activity.
iii) Process Model Elements.
iv) Process Exceptions: Examples of exception types that a project manager must handle are:
(1) Several important people become ill at the same time, just before an important project review.
(2) A communications line or network failure that means electronic mail cannot be used for several days.
(3) An organisational reorganisation that causes managers to spend most of their time working on organisational matters rather than on project management.
(4) An unexpected request for new project proposals. Instead of concentrating on the project, the team has to work on the proposal.
h) Process Measurement: Consists of collecting quantitative data about the software process.
i) Process Metrics: Can be used to assess whether or not the efficiency of a process has been improved.
(1) Three Classes of Process Metrics:
(a) Time Taken for a particular process to be completed. Eg: total time dedicated to the process.
(b) Resources Required for a particular process. Eg: total effort in person-days.
(c) Number of Occurrences of a particular event. Eg: number of errors found during code inspection.
ii) Goal-Question-Metric (GQM) Paradigm: Used to help developers decide what measurements should be taken and how they should be used.
(1) GQM Measurement:
(a) Goals: What is the organisation trying to achieve? The objective of process improvement is to satisfy these goals.
(b) Questions: Questions about areas of uncertainty related to the goals. You need process knowledge to derive these.
(c) Metrics: Measurements to be collected to answer the questions.
(2) Advantages of GQM:
(a) It separates organisational concerns (goals) from specific process concerns (questions).
(b) It focuses data collection and proposes that collected data should be analysed in different ways depending on the question it is meant to answer.
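The goal-question-metric chain can be sketched as a simple data structure (the goal, questions and metrics below are illustrative assumptions, not part of GQM itself):

```python
# One illustrative goal -> questions -> metrics chain.
gqm = {
    "goal": "Reduce the number of defects delivered to customers",
    "questions": [
        {
            "question": "Where are defects introduced in the process?",
            "metrics": ["defects found per inspection",
                        "defects found per test phase"],
        },
        {
            "question": "How long do defects survive before detection?",
            "metrics": ["average time from injection to detection"],
        },
    ],
}

# Every metric is collected only because it answers a question,
# and every question exists only because it clarifies the goal.
all_metrics = [m for q in gqm["questions"] for m in q["metrics"]]
assert len(all_metrics) == 3
```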
iii) SEI Software Capability Maturity Model (CMM):
(1) Categorises software processes as:
(a) Initial: Essentially uncontrolled.
(b) Repeatable: Project management procedures defined and used.
(c) Defined: Process management procedures and strategies defined and used.
(d) Managed: Quality management strategies defined and used.
(e) Optimising: Process improvement strategies defined and used.
(2) Three Problems with the CMM:
(a) The model focuses entirely on project management rather than product development.
(b) It does not include risk analysis and resolution as a key process area.
(c) The area of applicability of the model is not defined.
(3) Capability Assessment: Relies on a standard questionnaire that is intended to identify the main processes in the organisation.
(4) Six Sigma: A rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating defects in manufacturing and service-related processes.
(a) 3 Core Steps:
(i) Define customer requirements, deliverables and project goals via well-defined methods of customer communication.
(ii) Measure the existing process and its output to determine current quality performance.
(iii) Analyse defect metrics and determine causes.
(b) 2 Additional Steps for existing software processes (DMAIC):
(i) Improve the process by eliminating the root causes of defects.
(ii) Control the process to ensure that future work does not reintroduce the causes of defects.
(c) 2 Additional Steps if an organisation is developing a new software process (DMADV):
(i) Design the process to:
1. Avoid the root causes of defects.
2. Meet customer requirements.
(ii) Verify that the process model will avoid defects and meet customer requirements.
(5) ISO 9000 Quality Standard: Describes a general quality assurance system that can be applied to any business, regardless of the products or services offered.
(a) ISO 9001:2000: The quality assurance standard that applies to software engineering. The standard contains 20 requirements that must be present for an effective quality assurance system.
(b) ISO 9000-3: Developed to help interpret the standard for use in the software process, because the ISO 9001:2000 standard is applicable to all engineering disciplines.
i) Process Classification: Different types of processes:
i) Informal Processes: Processes where no strictly defined and clear process model is required.
ii) Managed Processes: Processes where a defined process model has been prepared and is used to guide the development process.
iii) Methodical Processes: Processes where some defined development method or methods (such as systematic methods for object-oriented design) are used.
iv) Improving Processes: Processes that have inherent improvement objectives; there is a specific budget for process improvements and procedures ready for introducing such improvements.
12) Software Change:
a) Three Different Strategies for Software Change:
i) Software Maintenance: Changes to the software are made in response to changing requirements, but the basic structure of the software remains stable. This is the most common approach to system change.
ii) Architectural Transformation: A much more drastic approach than maintenance, because it involves making major changes to the architecture of the software system. For example, a system may change from a centralised, data-centric architecture to a client-server architecture.
iii) Software Reengineering: The system is modified to make it easier to understand and change.
b) Configuration Management: The management of changing software products.
c) Program Evolution Dynamics: The study of system change.
d) Lehman's Laws:
i) Continuing Change: A program that is used in a real-world environment must necessarily change, or become progressively less useful in that environment.
ii) Increasing Complexity: As an evolving program changes, its structure tends to become more complex.
iii) Large Program Evolution: Program evolution is a self-regulating process. System attributes such as size, time between releases and the number of reported errors are approximately invariant for each system release.
iv) Organisational Stability: Over a program's lifetime, its rate of development is approximately constant and independent of the resources devoted to system development.
v) Conservation of Familiarity: Over the lifetime of the system, the incremental change in each release is approximately constant.
e) Software Maintenance: The normal process of changing a system after it has been delivered.
i) Types of Maintenance:
(1) Maintenance to Repair Software Faults (Bug Fixing): Repairing coding errors is normally inexpensive; design errors are more costly, and requirements errors are the most expensive to repair.
(2) Maintenance to Adapt the Software to a Different Operating Environment: Needed when some aspect of the system's environment, such as the hardware, the platform operating system or other support software, changes.
(3) Maintenance to Add to or Modify the System's Functionality: Essential when the system requirements change in line with organisational or business change.
ii) Overall lifetime costs can be reduced if more effort is devoted during system development to producing a maintainable system.
iii) One main reason why maintenance costs are high is that it is more expensive to add functionality after a system is already in operation than to implement the same functionality during development.
iv) The Main Factors that Differentiate Development and Maintenance and Lead to Higher Maintenance Costs:
(1) Team Stability: After a system has been released to users, the development team is usually broken up and its members move on to new projects.
(2) Contractual Responsibility: The contract to maintain a system is normally separate from the system development contract.
(3) Staff Skills: Maintenance staff are often inexperienced and unfamiliar with the application area.
(4) Program Age and Structure: As programs grow old, their structure is likely to be degraded by change, and they become harder to understand and change.
v) Maintenance Process: Differs from one organisation to another, depending on the software being maintained, the development processes used in the organisation, and the people involved in the process.
(1) Change Requests: Requests for system changes from users, customers or management.
(2) 3 Reasons why some change requests must be implemented urgently:
(a) Fault repair. (b) Changes to the system's environment. (c) Urgently required business changes.
vi) Maintenance Prediction: Concerned with assessing which parts of the system may cause problems and have high maintenance costs.
(1) Predicting the number of changes requires an understanding of the relationships between a system and its environment.
(2) Tightly coupled systems require changes whenever the environment is changed.
(3) Factors influencing this relationship are:
(a) The number and complexity of system interfaces;
(b) The number of inherently volatile system requirements;
(c) The business processes in which the system is used.
(4) Process measurements that may be used to assess maintainability:
(a) Number of requests for corrective maintenance;
(b) Average time required for impact analysis;
(c) Average time taken to implement a change request;
(d) Number of outstanding change requests.
(5) If any or all of these measures are increasing, it may indicate a decline in maintainability.
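The trend check in the last point can be sketched as follows (the metric values and the strictly-increasing criterion are illustrative assumptions): each process measurement is a time series, and a consistently rising series is flagged as a possible maintainability decline.

```python
def is_rising(series: list) -> bool:
    """True when every successive measurement increases."""
    return all(b > a for a, b in zip(series, series[1:]))

# Illustrative quarterly measurements of the four process metrics:
metrics = {
    "corrective maintenance requests": [12, 15, 19, 24],
    "avg impact-analysis days":        [2.0, 2.1, 2.0, 2.2],
    "avg change-implementation days":  [5, 6, 8, 11],
    "outstanding change requests":     [30, 28, 33, 31],
}

# Metrics that rise every period may indicate declining maintainability.
declining = [name for name, series in metrics.items() if is_rising(series)]
assert declining == ["corrective maintenance requests",
                     "avg change-implementation days"]
```

In practice a noisier criterion (e.g. a fitted trend line) would be used, but the principle is the same: rising process metrics are a warning sign.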