The Journal of Systems and Software 68 (2003) 233–241
www.elsevier.com/locate/jss
Application of an evaluation framework for analyzing the architecture tradeoff analysis methodSM
Marta Lopez
Fraunhofer Institute for Experimental Software Engineering, Sauerwiesen 6, D-67661 Kaiserslautern, Germany
Received 23 December 2002; accepted 27 December 2002
Abstract
Evaluation is a critical analytical process in all disciplines and fields and therefore also in software engineering. For developing
and analyzing an evaluation method a framework of six basic components (target, evaluation criteria, yardstick, data-gathering
techniques, synthesis techniques, and evaluation process) can be applied. This framework was developed based on the analysis of
theoretical and methodological evaluation concepts applied in software and non-software disciplines. In particular, in this paper we
present the application of the framework for analyzing the architecture tradeoff analysis methodSM (ATAMSM), developed by the
Software Engineering Institute (SEI). The results of the matching of the framework with the ATAM definition facilitate the
identification of each evaluation component and stress some key aspects, such as the relevant role of stakeholders and the significance of attribute-based architectural styles in an ATAM evaluation.
© 2003 Elsevier Inc. All rights reserved.
Keywords: Software architecture; Software architecture evaluation; Software architecture styles
1. Introduction
Evaluation can be described as a ubiquitous process: we can find it everywhere. Usually, in the software engineering (SE) field, evaluations are developed and performed without taking into account the efforts and lessons learned in other software and non-software disciplines. However, the study of developed evaluation theories and methods (Scriven, 1991, 2001; Shadish et al., 1993; Worthen et al., 1997) could help to elaborate more detailed, complete, and systematic evaluation methods for application in the diverse SE areas. With this purpose, an analysis of different evaluation approaches was carried out, and the result was the identification of a set of basic elements common to any type of evaluation method, whether implicitly or explicitly identified, formally or informally described.
This framework can already be used for different purposes. For example, in (Ares et al., 2000) it was used to develop a software process evaluation method. In this paper we present another application of the evaluation framework; in particular, its use to analyze another evaluation method in SE: the architecture tradeoff analysis method (ATAM) (Kazman et al., 2000). Applying the evaluation framework to the ATAM leads to a more detailed, complete, and systematic way of evaluating software architecture.
This work was carried out during a sabbatical year at the Software Engineering Institute (Carnegie Mellon University, PA).
E-mail address: [email protected] (M. Lopez).
0164-1212/$ - see front matter © 2003 Elsevier Inc. All rights reserved.
doi:10.1016/S0164-1212(03)00065-7
ATAM is an architecture evaluation method applied to assess the consequences of architectural decision alternatives in light of quality attribute requirements. Because the architecture is the earliest life-cycle artifact that embodies significant design decisions (as choices and tradeoffs), an analysis of this key asset could help to determine whether the defined quality goals are achievable by the architecture as it has been developed, before enormous organizational resources have been committed to it (Kazman et al., 2000). Based on this idea, the SEI developed an evaluation method to evaluate specific architecture quality attributes and the engineering tradeoffs to be made among possibly conflicting quality goals (Jones and Kazman, 1999).
The final purpose of the study presented in this paper is the analysis of the complete definition of the ATAM and the suggestion of improvements to the method description.
With this in mind, the evaluation components are briefly described in Section 2. Section 3 gives a short description of the ATAM as it is defined by the SEI; Section 4 presents the matching of the evaluation framework with the ATAM description; and finally, Section 5 presents the conclusions of this matching.
2. Evaluation components
The framework against which the ATAM will be analyzed was derived from the analysis of theoretical and methodological concepts from evaluations applied in many different disciplines (Scriven, 1991, 2001; Shadish et al., 1993; Worthen et al., 1997). The basic components of an evaluation, shown in Fig. 1, are described as follows:
• Target, or the object under evaluation.
• Evaluation criteria, or characteristics of the target
that are to be evaluated.
• Yardstick, or the ideal target against which the real target is to be compared.
Fig. 1. Interrelations among evaluation components. The six interrelated activities shown are: target delimitation, evaluation criteria definition, yardstick development, data-gathering techniques development, synthesis techniques development, and evaluation process development.
Fig. 2. Subprocesses and activities of the evaluation process: Planning (establish evaluation goals; design the evaluation; analyze target; plan evaluation), Examination (apply the data-gathering techniques and obtain the data; check the data gathered for completeness; apply the synthesis techniques), and Decision Making (prepare a final report; present and submit report; complete evaluation documentation).
• Data-gathering techniques, needed to obtain data to
analyze each criterion.
• Synthesis techniques, used to judge each criterion and, in general, to judge the target, thereby obtaining the results of the evaluation.
• Evaluation process, or the series of activities and tasks by means of which an evaluation is performed.
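The six components can be written down as a simple data model. The sketch below is purely illustrative; the field names and types are our own, not part of the framework's formal definition:

```python
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    """The six basic evaluation components (names are illustrative)."""
    target: str                 # the object under evaluation
    criteria: list              # characteristics of the target to evaluate
    yardstick: dict             # ideal value expected for each criterion
    data_gathering: dict = field(default_factory=dict)   # technique per criterion
    synthesis: object = None    # procedure judging data against the yardstick
    process: list = field(default_factory=list)          # ordered activities and tasks

ev = Evaluation(
    target="software system architecture",
    criteria=["performance", "modifiability"],
    yardstick={"performance": "latency below 5 s", "modifiability": "layered design"},
)
```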
As shown in Fig. 1, all these components are closely interrelated. The evaluation can be customized by means of the target, because this is one of the parameters used to select the evaluation method. Once the target is known and has been delimited, its characteristics must be identified for evaluation (criteria). All the characteristics and their ideal values, which indicate what the target should be like under ideal conditions (or simply, in certain circumstances), make up what is known as the yardstick. Data about the real target should be obtained using certain data-gathering techniques: a value (numerical, data, information set, etc.) will be gathered for and assigned to each criterion. Once all the data have been collected, they are organized in an appropriate structure and compared against the yardstick by applying synthesis techniques. This comparison will output the results of the evaluation. Finally, all the above are linked by the evaluation process, which indicates when to define the scope and extent of the evaluation and when to develop, or adapt when necessary, the criteria, yardstick, and techniques. All this is defined by a set of activities and tasks to be performed. Fig. 2 shows the general subprocesses and activities usually carried out. A detailed description and example of these evaluation components can be found in (Lopez, 2000).
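The flow just described (gather a value per criterion, then judge the organized data against the yardstick) can be sketched in a few lines; the gathering and synthesis functions below are placeholders of our own, not something the framework prescribes:

```python
def evaluate(criteria, gather, yardstick, synthesize):
    """Generic evaluation loop: gather one value per criterion, then
    judge each gathered value against its ideal value in the yardstick."""
    data = {c: gather(c) for c in criteria}                          # data-gathering step
    return {c: synthesize(data[c], yardstick[c]) for c in criteria}  # synthesis step

# Toy usage: numeric measurements judged against maximum allowed values.
measurements = {"latency_ms": 4200, "open_defects": 3}
results = evaluate(
    criteria=["latency_ms", "open_defects"],
    gather=measurements.get,
    yardstick={"latency_ms": 5000, "open_defects": 5},
    synthesize=lambda value, ideal: value <= ideal,
)
# results == {"latency_ms": True, "open_defects": True}
```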
3. Brief description of ATAM
The key input to the ATAM is a software system
architecture and the main outputs are the following:
• Risks, or architectural alternatives that might create
future problems in some quality attribute.
• Non-risks, or good decisions relying on implicit assumptions.
• Sensitivity points, or alternatives for which a slight change makes a significant difference in some quality attribute.
• Tradeoff points, or decisions affecting more than one quality attribute.
To obtain these results the SEI has developed an
evaluation process (Kazman et al., 2000) with diverse
phases and steps, shown in Table 1. These steps represent the evaluation process described for the stakeholders. There is another, more complete evaluation process oriented to the evaluators, with more phases to perform before and after those shown in Table 1. There is a phase 1 where steps 1–6 are carried out with a reduced set of stakeholders (usually the architect, client, and project manager). A phase 2 follows, in which steps 1–6 are recapped and summarized in the presence of the larger set of stakeholders.

Table 1
Main steps of the ATAM evaluation method

Presentation
1. Present the ATAM. The evaluation team presents an overview of the ATAM to the assembled stakeholders (customer representatives, architecture team, managers, testers, maintainers, etc.). The evaluation steps, techniques to apply, and main outputs to obtain are identified in this presentation.
2. Present business drivers. The project manager describes what business goals are motivating the development effort, as well as the high-level functional requirements and high-level quality requirements.
3. Present architecture. The architect describes the proposed architecture, including how the architectural approaches/styles used address the quality attribute requirements.

Investigation and analysis
4. Identify architectural approaches. The evaluation team starts to identify places in the architecture that are key for realizing quality attribute goals. The main architectural approaches are identified but not analyzed.
5. Generate quality attribute utility tree. The quality attributes that represent the system utility are elicited, specified down to the level of scenarios, annotated with stimuli and responses, and prioritized.
6. Analyze architectural approaches. The evaluation team probes architectural approaches from the point of view of specific quality attributes to identify risks, sensitivity points, and tradeoff points.

Testing
7. Brainstorm and prioritize scenarios. Elicitation of a larger set of scenarios from the entire group of stakeholders. These scenarios are prioritized via a defined voting process.
8. Analyze architectural approaches. This step reiterates step 6 but uses only the highly ranked scenarios from step 7. These scenarios are considered to be test cases for the analysis of the architectural approaches identified previously. The final outputs are the architectural approaches, risks, non-risks, sensitivity points, and tradeoff points.

Reporting
9. Present results. The ATAM team presents the findings to the assembled stakeholders using the information gathered during the evaluation: architectural approaches/styles, scenarios, risks, non-risks, sensitivity points, and tradeoff points.

Besides the evaluation process, the SEI describes other underlying concepts necessary to understand and apply the ATAM (Kazman et al., 2000), such as quality attribute characterizations, scenarios, and attribute-based architectural styles (ABASs) (Klein and Kazman, 1999). To evaluate an architectural design against quality attribute requirements, a precise characterization of the quality attributes is necessary. These characterizations are described with a structure which permits the analysis of each quality attribute considered (i.e., performance, modifiability, and so on). Fig. 3 shows the elements of a quality attribute characterization and an example for the performance quality attribute. From this structure evaluators can derive attribute-specific questions, for example: how are processes allocated to hardware? and how is queuing and prioritization done in the network?
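Deriving attribute-specific questions from such a characterization can be mechanized. The sketch below is a hypothetical illustration: the leaf names are a small, simplified sample of the performance tree in Fig. 3, and the question template is our own:

```python
# Heavily simplified performance characterization: element -> sample leaves.
characterization = {
    "stimuli": ["event source", "event frequency"],
    "architectural decisions": ["process-to-hardware allocation", "queuing policy"],
    "responses": ["latency", "throughput"],
}

def derive_questions(tree):
    """Produce one elicitation question per leaf of the characterization."""
    return [f"How is {leaf} handled ({element})?"
            for element, leaves in tree.items()
            for leaf in leaves]

questions = derive_questions(characterization)
# e.g. "How is queuing policy handled (architectural decisions)?"
```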
Another concept needed to perform an ATAM is scenarios (Kazman et al., 2000). To elicit the specific quality goals against which the architecture will be judged, evaluators can use scenarios, or short statements describing an interaction of one of the stakeholders with the system. Each stakeholder describes the scenarios that represent his or her main interests and interactions with the system. ATAM uses three types of scenarios: use case scenarios (typical uses of the existing system), growth scenarios (anticipated changes to the system), and exploratory scenarios (to cover extreme changes that are expected to stress the system). Independently of this classification, each scenario is described with the same structure: stimuli, environment, and responses. An example of a use case scenario is the
Fig. 3. Quality attribute characterization elements and example of performance characterization. Element descriptions: Stimuli, the events that cause the architecture to respond or change; Architectural decisions, the aspects of an architecture (components, connectors, and their properties) that have a direct impact on achieving attribute responses; Responses, measurable/observable quantities. In the performance example, stimuli decompose into source, frequency, and mode; architectural decisions into resources and resource arbitration (queuing policy, preemption, precedence); and responses into latency and throughput measures.
following: "remote user requests a database report via the Web during peak period and receives it within 5 s". To elicit scenarios, two types of techniques are applied: the utility tree and brainstorming. The utility tree provides a top–down mechanism for translating the main quality attributes into concrete quality attribute scenarios. This technique is used to understand how the architect perceived and handled the quality attribute architectural drivers. Brainstorming, on the other hand, is applied to obtain a larger set of scenarios by working with the whole group of stakeholders. All the scenarios are prioritized to select only a subset which will be analyzed during the evaluation.

Finally, to analyze the design decisions, the architectural approaches used need to be identified. According to the SEI (Kazman et al., 2000), an architectural style can be defined as a template for a coordinated set of architectural decisions aimed at satisfying some quality attribute requirements. Each ABAS is an architectural style in which the constraints focus on component types and patterns of interaction that are particularly relevant to quality attributes (Kazman et al., 2000). Nowadays, evaluators can use some ABASs to analyze an architecture, such as the performance concurrent pipelines ABAS or the modifiability layering ABAS. Each ABAS comprises the following four parts. Fig. 4 shows some of these parts for the performance concurrent pipeline ABAS (Klein and Kazman, 1999).
• Problem description and criteria, or characteristics of the problem solved.
• Stimuli/responses, or the ABAS's quality-attribute-specific stimuli and the measures of the responses.
• Architectural style, or components, connectors, parameters, topology, and constraints.
• Analysis, which includes the relation of the quality attribute models to the style, as well as heuristics for reasoning about the style.
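The common scenario structure (stimuli, environment, responses) and the voting-based prioritization used in steps 5 and 7 could be modeled as follows. The data model and the voting rule are our own illustration, not something the ATAM prescribes:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """ATAM scenario structure: stimuli, environment, responses."""
    kind: str          # "use case", "growth", or "exploratory"
    stimuli: str
    environment: str
    responses: str

report = Scenario("use case",
                  stimuli="remote user requests a database report via the Web",
                  environment="peak period",
                  responses="report received within 5 s")

def prioritize(scenarios, votes, top_n):
    """Keep the top_n scenarios by stakeholder vote count (hypothetical rule)."""
    ranked = sorted(scenarios, key=lambda s: votes.get(s.stimuli, 0), reverse=True)
    return ranked[:top_n]

shortlist = prioritize([report], votes={report.stimuli: 7}, top_n=1)
```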
The results of an ATAM will be derived from the
analysis of the selected scenarios as well as the archi-
tectural approaches/styles identified in the architecture.
All these elements of an ATAM evaluation were ana-
lyzed using the six components derived from the evalu-
ation theory. The results of the matching are presented
in the following section.
4. Analysis of ATAM from an evaluation theory viewpoint
The analysis of the ATAM was carried out based on the six evaluation components described in Section 2 and the description of the method provided by the SEI in (Kazman et al., 2000) and in working papers which had not yet been published, such as the ATAM Reference Guide. The results of the matching are presented below.
Fig. 4. Performance concurrent pipeline ABAS components.
4.1. Target
The target of the ATAM is an architecture. Nevertheless, it is not clear whether this architecture is the software architecture, the system architecture, or the software system architecture; nor is it mentioned whether the words system and software are considered synonymous. Since "there is no one structure that is the software architecture" (Bass et al., 1998), a list of the most common and useful software structures is provided which together describe the architecture. During the evaluation, the architect will have to describe the following views of the architecture: functional, module/layer/subsystem, process/thread, and hardware (Kazman et al., 2000). These views are the main input to the evaluation. However, there is no further description of the relationship between these views and the quality attributes (such as performance, modifiability, and availability), so it is not known which attributes are related to which views.

In short, the target of the ATAM has been identified. Nevertheless, a more specific description of the main inputs of the method could be useful for potential customers to know when they would be ready to request an evaluation.
4.2. Evaluation criteria
The quality attribute characterization is the ATAM concept that can be associated with the evaluation criteria. However, not all the elements shown in Fig. 3 can be catalogued as evaluation criteria. The architectural decisions and responses are used to determine risks, non-risks, sensitivity points, and tradeoff points. The stimuli only indicate the input variables that activate a response of the architecture related to certain architectural decisions. Since the criteria are the characteristics of the target to be analyzed, the stimuli are not criteria but elements required to identify a given variable (and its value) which will be the trigger for analyzing the architectural decisions and responses in a given setting. Therefore, the triplet [stimuli–architectural decisions–responses] contains the specific criteria, and so the identification of the criteria can be said to be explicit. Based on this structure, a set of scenarios for analyzing the architecture can be derived easily.
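That derivation can be illustrated mechanically. The triplet values below are invented examples in the spirit of the performance characterization, and the sentence template is our own:

```python
# Illustrative only: turning [stimuli-architectural decision-response]
# triplets into scenario statements for analysis.
triplets = [
    ("a periodic external event arrives", "fixed-priority queuing", "worst-case latency"),
    ("the system enters overload mode", "preemption with locking", "throughput"),
]

scenarios = [
    f"When {stimulus}, analyze how {decision} affects {response}."
    for stimulus, decision, response in triplets
]
# scenarios[0] mentions fixed-priority queuing and worst-case latency.
```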
With regard to the elicitation of the evaluation criteria, the ATAM description provides an initial set of criteria elicited by experts. However, owing to the fact that the target is not delimited specifically in the ATAM, to perform an evaluation it is necessary to elicit a set of criteria and a yardstick for each particular evaluation. Hence the need to elicit the criteria by using the utility tree and the identification of the architectural approaches. The utility tree contains the specific criteria to be analyzed, which are derived from the main quality attributes selected. Each criterion will be associated with a scenario for its detailed analysis. The identification of the architectural approaches, on the other hand, provides more specific criteria, because each approach has an associated set of stimuli, architectural decisions, and responses that will be analyzed.

In short, the evaluation criteria are identified, although for each particular evaluation there is a need to elicit the specific characteristics to be considered. In general, it would be better if architects and the related evaluation stakeholders shared the same knowledge and interpretation of the quality attributes to be evaluated, to avoid misunderstandings about their meaning and the specific characteristics associated with each attribute.
4.3. Yardstick
The evaluation criteria are the basis for developing the set of scenarios, which can be associated with the concept of the yardstick. The set of scenarios that have been elicited, either implicitly or explicitly, in a particular evaluation is the reference point against which the architecture will be judged. As the ATAM is a scenario-based evaluation method, the structure of its yardstick is also based (albeit implicitly) on scenarios. These components can be classed into two groups depending on the sources used to generate them: architectural design-dependent scenarios, which are derived from the architectural approaches (ABASs) identified in the architecture; and stakeholder-dependent scenarios, derived from the utility tree and brainstorming techniques. ABASs describe, sometimes implicitly, scenarios against which a certain quality attribute will be analyzed. An ABAS is determined by the general evaluation criterion (quality attribute) and the associated architectural style, and also includes the stimuli to be applied and the responses to be obtained. Therefore, it provides a subset of the quality attribute characterizations that are to be analyzed and a set of scenarios to be applied when using the ABAS. This set of scenarios is increased in number with stakeholder-dependent scenarios, which are necessary to represent the perspectives of the diverse participants in the evaluation.
With regard to the elicitation of the yardstick, only the architectural design-dependent scenarios can be elaborated during the development of the ATAM method. Only when the evaluation is performed will evaluators be able to identify the specific ABASs to be applied and select the particular stakeholder-dependent scenarios from among all of those elicited by the stakeholders.

In short, the analysis of the yardstick highlights the problem of the identification and selection of the ABASs applicable to a given architecture and their combination. As yardsticks, scenarios are not combined but rather used independently. In general, these shortcomings are a consequence of the state of the art in the software architecture discipline: the current knowledge does not permit the detailed description of a yardstick independently of the use of stakeholder-dependent scenarios. As more ABASs are developed, evaluators will be able to perform the evaluation based on the ABASs selected and their combination, and to push the stakeholder-dependent scenarios into the background.
4.4. Data-gathering techniques
Taking into account the definition of a data-gathering technique (obtaining data about the target with which to judge it), the data-gathering techniques applied in the ATAM are the following: the set of questions developed and applied in the step "analyze architectural approaches" (shown in Table 1) to obtain information about the target and apply the synthesis techniques; scenario mapping onto the evaluated architecture, with which evaluators can gather data about architectural components and connectors; and the mathematical algorithms described in some ABASs to obtain a numerical value for some criteria (such as the one in Fig. 4). In an ATAM the application of data-gathering techniques and synthesis techniques is very closely interrelated because, as information is obtained, the architecture is progressively judged, and risks, non-risks, sensitivity points, and tradeoff points are obtained.
With regard to the development of the data-gathering techniques, there are diverse sources from which to generate the questions to be applied, such as the quality attribute characterizations and the ABASs. Barbacci et al. (2000) identify three groups of questions: screening questions, applied to delimit the target and therefore not included in the data-gathering techniques; elicitation questions, used to obtain information to be analyzed later, based on the stimulus/response branches of the quality attribute characterizations; and analysis questions, used to conduct analysis using attribute models and the information collected by elicitation questions. Kazman et al. (2000) identify the relationship between these three types of questions and the sources used by the evaluators to generate the questions, such as the quality attribute characterizations. Taking into account the yardstick/data-gathering techniques relationship, and considering that a part of the yardstick is variable (the stakeholder-dependent scenarios), it is not possible to generate a priori a standard list with all the questions to be applied. However, it could be possible to generate the set of questions related to the architectural design-dependent scenarios. The ATAM evaluation team has expert questioners in the quality attributes selected, although a detailed description of this key role is not included. The relevant role of the stakeholders also has to be considered, because they can ask questions as well. However, the concrete participation of the stakeholders during the step "analyze architectural approaches" is not described in detail. Due to the intensive participation of the stakeholders, it is prudent that at
least one evaluator verifies that there are sufficient data to judge the evaluated architecture. As for the scenario mapping, the architect performs the mapping and answers the questions posed by evaluators and stakeholders. This technique is not described in detail from the point of view of the data-gathering techniques; the example proposed in (Kazman et al., 2000) is focused more on the use of the synthesis techniques to obtain the results of the ATAM.

In short, the independent identification and analysis of the data-gathering and synthesis techniques should facilitate the explicit and rigorous definition of these techniques. Also, an important improvement of the method would be a detailed description of the scenario mapping as a data-gathering technique.
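The sufficiency check suggested above could be as simple as the following sketch; the ATAM itself defines no such procedure, and the scenario names are invented:

```python
def scenarios_without_data(selected_scenarios, gathered):
    """Return the selected scenarios for which no data has been recorded,
    so an evaluator can confirm there is enough material for synthesis."""
    return [s for s in selected_scenarios if not gathered.get(s)]

gathered = {
    "report delivered within 5 s": ["latency estimate from the architect"],
    "add a new data type in 1 week": [],   # nothing gathered yet
}
incomplete = scenarios_without_data(list(gathered), gathered)
# incomplete flags the scenario with no recorded data.
```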
4.5. Synthesis techniques
Synthesis techniques are closely interrelated with the data-gathering techniques in the ATAM. The synthesis techniques correspond to the analysis performed to judge the architectural decisions and obtain the outputs of the ATAM. Both evaluators and stakeholders can participate in this analysis. Nevertheless, these types of techniques are described in a general way and it is difficult to analyze them. There is only a template used for capturing an architectural approach. This template contains the scenario analyzed, the reasoning applied, the architectural decisions considered, and, for each decision, the risks, sensitivity points, and tradeoff points derived, as appropriate.
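A plausible rendering of that template as a data structure is sketched below; the field names are our own, since the SEI template is described only informally:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionAnalysis:
    """Findings recorded for one architectural decision."""
    decision: str
    risks: list = field(default_factory=list)
    sensitivity_points: list = field(default_factory=list)
    tradeoff_points: list = field(default_factory=list)

@dataclass
class ApproachRecord:
    """One filled-in template: scenario, reasoning, and per-decision findings."""
    scenario: str
    reasoning: str
    decisions: list = field(default_factory=list)

record = ApproachRecord(
    scenario="report delivered within 5 s during peak period",
    reasoning="queuing delay dominates end-to-end latency",
    decisions=[DecisionAnalysis(
        decision="FIFO queuing on the report server",
        risks=["latency goal missed under overload"],
        sensitivity_points=["queue length"],
    )],
)
```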
Taking into account that each ABAS includes an analysis section in which the reasoning to apply for each architectural style is described, along with a sample combination of similar or different attribute-type ABASs, it can be deduced that a part of the synthesis techniques is developed. However, it would be useful to have a more detailed description, or an example, of the reasoning to apply in those ABASs whose analysis and reasoning are based only on scenarios. Also, guidelines for making the judgments that lead to the final results would be an aid. Finally, it is necessary to consider that the questions posed by the stakeholders, as well as the later analysis, will depend completely on the set of selected scenarios, the stakeholders, and their agreement. Based on this, it would be necessary to recognize the importance of selecting the most appropriate stakeholders to be present in an ATAM.

In short, the performance of an ATAM would be more repeatable if the method description included a detailed description of the stakeholders' participation in obtaining the final results and of the application and combination of the diverse architectural approaches identified. Nevertheless, this situation is normal, since the elaboration of synthesis techniques is the most difficult task during the development of an evaluation method, independently of the area or discipline considered.
4.6. Evaluation process
The ATAM evaluation process was analyzed against the subprocesses and activities shown in Fig. 2. For this analysis the ATAM evaluation process oriented to the evaluators was considered. This process describes four phases: phase 0, or partnership and preparation; phase 1, or initial evaluation, which corresponds to steps 1–6 of Table 1; phase 2, or complete evaluation, which corresponds to steps 1–9 of Table 1 but includes a preliminary step for preparing the phase (and hence renumbers the steps); and phase 3, or follow-up.

The result, shown in Table 2, stresses that, generally, the ATAM includes the three subprocesses: planning, examination, and decision making. Table 2 reflects all the phases and steps of the ATAM. Some steps have to be repeated because there is a priori no specific delimitation of the target, no predetermined set of criteria for data gathering, and no yardstick as a reference point. As an evaluation cannot be run without these components, a set of processes has been planned for their development.
Also, Table 2 reflects that some cells are blank. Thismeans that there is no an ATAM step which can be
related with the framework activity. For example,
ATAM does not identify explicitly the synthesis tech-
niques because they will depend on the expert knowl-
edge of evaluators, and there are no tasks in which the
ATAM evaluation process should be modified. Also, it
is assumed that evaluators will use directly some or all
the templates developed by the SEI and so there is nostep related with the development of the infrastructure
needed to perform the evaluation. Finally, ATAM does
not describe task for checking the data gathered for
completeness. However, due to the active participation
of the stakeholders, it would be convenient to analyze
the information gathered to determine whether there is
sufficient data about the scenarios to judge the evaluated
architecture.

To sum up, ATAM provides an evaluation process
adapted to the specific characteristics of this type of
evaluation, which are based on an intensive participa-
tion of the stakeholders and the need to develop some
evaluation components during the performance of an
ATAM. A more detailed analysis of each framework activity can be found in Lopez (2000).
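The framework-ATAM correspondence of Table 2 can also be treated as data. The following sketch is our own illustration (with abbreviated activity names of our choosing): it encodes a few rows of Table 2 as a mapping in which blank cells become empty lists, so the framework activities without a corresponding ATAM step can be listed mechanically.

```python
# Illustrative encoding of part of Table 2 (activity names abbreviated
# by us, not taken verbatim from the framework). Blank cells become
# empty lists, so uncovered framework activities can be listed
# mechanically.

CORRESPONDENCE = {
    "A.2.4 Identify the data-gathering techniques": ["1-6", "2-7", "2-9"],
    "A.2.5 Identify the synthesis techniques": [],        # blank in Table 2
    "A.2.6 Evaluation process": [],                       # blank in Table 2
    "A.3.5 Develop/adapt the infrastructure": [],         # blank in Table 2
    "B.1 Apply the data-gathering techniques": ["1-6", "2-7", "2-9"],
    "B.2 Check the data gathered for completeness": [],   # blank in Table 2
    "C.1 Apply the synthesis techniques": ["1-6", "2-7", "2-9"],
}

# Framework activities with no ATAM counterpart (the blank cells).
uncovered = sorted(a for a, steps in CORRESPONDENCE.items() if not steps)
```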
5. Conclusions
This paper presents an analysis of ATAM taking into
account the framework of six evaluation components.
As a result, a set of characteristics and improvements
were identified. The main characteristics of ATAM are:
an evaluation team that includes experts in each quality
attribute to be analyzed in a particular evaluation; there
is no predefined set of criteria or yardstick to be
Table 2
Framework-ATAM correspondence
Evaluation framework subprocesses and activities ATAM Phase-step ATAM step name
A. Planning
A.1. Establish the evaluation goals
A.1.1. Get to know and analyze the DO target [0–2] Description of the candidate system
[0–5] Forming the core evaluation team
A.1.2. Negotiate with the representatives of the DO [0–1] Present the ATAM
[1–1/2–2] Present the ATAM
A.1.3. Define the goals of the evaluation [0–3] Make a go/no-go decision
[0–4] Negotiate the statement of work
A.2. Design the evaluation
A.2.1. Target [1–2/2–3] Present business drivers
[1–3/2–4] Present architecture
A.2.2. Criteria [1–2/2–3] Present business drivers
[1–3/2–4] Present architecture
[1–5/2–6] Generate quality attribute utility tree
[2–8] Brainstorm and prioritize scenarios
A.2.3. Yardstick [1–3/2–4] Present architecture
[1–4/2–5] Identify architectural approaches
[1–5/2–6] Generate quality attribute utility tree
[1–6/2–7] Analyze architectural approaches
[2–8] Brainstorm and prioritize scenarios
A.2.4. Identify the data-gathering techniques [1–6/2–7/2–9] Analyze architectural approaches
A.2.5. Identify the synthesis techniques
A.2.6. Evaluation process
A.3. Analyze the target
A.3.1. Request and analyze the general DO documentation [0–7] Prepare for phase 1
[2–1] Prepare for phase 2
A.3.2. Set up the full evaluation team [0–6] Hold evaluation team kick-off meeting
[2–1] Prepare for phase 2
A.3.3. Identify the professionals involved [1–2] Present business drivers
A.3.4. Develop the data-gathering and synthesis techniques [1–6/2–7] Analyze architectural approaches
A.3.5. Develop/adapt the infrastructure
A.4. Plan the evaluation
[0–7] Prepare for phase 1
[1–1/2–2] Present the ATAM
[2–1] Prepare for phase 2
B. Examination
B.1. Apply the data-gathering techniques and obtain the data needed [1–6/2–7/2–9] Analyze architectural approaches
B.2. Check the data gathered for completeness
C. Decision making
C.1. Apply the synthesis techniques [1–6/2–7/2–9] Analyze architectural approaches
C.2. Prepare the final report [2–10] Present results
[3–1] Produce the final report
C.3. Present and submit the final report [2–10] Present results
[3–1] Produce the final report
C.4. Complete the evaluation documentation [3–2] Hold post-mortem meeting
[3–3] Build portfolio and update artifact repositories
240 M. Lopez / The Journal of Systems and Software 68 (2003) 233–241
applied in any ATAM evaluation; the yardstick is
composed of a set of scenarios; and a strong collabo-
ration of the stakeholders in the evaluation process is
needed.

With regard to the proposals to improve the method,
one key improvement is the control of the stakeholders
who will participate in the evaluation. Other sugges-
tions, described in detail in Lopez (2000), are the fol-
lowing: the explicit description of the target; the explicit
development of the data-gathering techniques, to have a
common pool of questions for analyzing each quality
attribute, although in a specific evaluation more ques-
tions need to be posed; development of the synthesis
techniques, to provide a more detailed analysis to be car-
ried out to obtain the final results of the evaluation; a
more detailed definition of the evaluation process; de-
scription of the relationships among evaluation com-
ponents, in order to interrelate these components and
identify how a change in one component may affect
others; development of attribute-based architectural styles
(ABASs), to be used as a yardstick; a more detailed de-
scription of the evaluation team roles; the optimum number
of evaluators in each team; and, among others, the
requirements needed to be an evaluator.

As a final conclusion, this work highlights the possi-
ble application of the framework to analyze a developed
evaluation method to find out possible enhancements,
and not only to develop a new evaluation method. The
explicit identification and definition of the basic com-
ponents of an evaluation would help us to develop more
rigorous evaluation methods taking into account the
efforts and lessons learned about evaluations in software
and non-software disciplines.
References
Ares, J., Garcia, R., Juristo, N., Lopez, M., Moreno, A., 2000. A more
rigorous and comprehensive approach to software process assess-
ment. Software Process: Improvement and Practice 5 (1), 3–30.
Barbacci, M., Ellison, R., Weinstock, C., Wood, W., 2000. Quality
Attribute Workshop Participants Handbook. Special Report
CMU/SEI-2000-SR-001, Software Engineering Institute, Carnegie
Mellon University.
Bass, L., Clements, P., Kazman, R., 1998. Software Architecture in
Practice. Addison-Wesley.
Jones, L., Kazman, R., 1999. Software Architecture Evaluation in the
DoD Systems Acquisition Context. News@SEI Interactive vol. 2,
issue 4, December.
Kazman, R., Klein, M., Clements, P., 2000. ATAMSM: Method
for Architecture Evaluation. Technical Report CMU/SEI-2000-
TR-004, Software Engineering Institute, Carnegie Mellon Univer-
sity.
Klein, M., Kazman, R., 1999. Attribute-Based Architectural Styles.
Technical Report CMU/SEI-99-TR-022, Software Engineering
Institute, Carnegie Mellon University.
Lopez, M., 2000. An Evaluation Theory Perspective of the Architec-
ture Tradeoff Analysis MethodSM (ATAMSM). Technical Report
CMU/SEI-2000-TR-012, Software Engineering Institute, Carnegie
Mellon University.
Scriven, M., 2001. Evaluation Core Courses Notes, Claremont
Graduate University, <URL: http://eval.cgu.edu/lectures/lecturen.htm>.
Scriven, M., 1991. Evaluation Thesaurus. Sage.
Shadish, W., Cook, T., Leviton, L., 1993. Foundations of Program
Evaluation. Sage.
Worthen, B., Sanders, J., Fitzpatrick, J., 1997. Program Evaluation.
Alternative Approaches and Practical Guidelines. Addison-Wesley
Longman.