Introducing Replaceability into Web Service Composition

Hussein Al-Helal and Rose Gamble, Member, IEEE

Abstract—By discovering and reusing relevant web services, an organization can select and compose those services that most closely meet its business and Quality of Service (QoS) needs. As the number of available web services increases, selecting the best fit services for a given task becomes more challenging. QoS attributes play a significant role in the selection process by directing service composition constraints to a workflow plan that has the best QoS values. Two major problems arise at runtime when undesirable events necessitate the need to reselect services and replan the service bindings. First, if the reselection process consumes additional time, it can impact a temporal QoS constraint. Second, the newly generated composition might not comply with other QoS constraints imposed on the plan. This paper proposes an approach to composing web services that both performs reselection and avoids the violation of QoS constraints after replanning by defining and evaluating a replaceability property. Replaceability factors directly into the algorithm's original service selection process considering all QoS constraints.

Index Terms—Web service composition, replaceability, genetic algorithms, replanning


1 INTRODUCTION

Since web services can be developed and deployed by many vendors, there are often multiple web services that can perform similar tasks with varying Quality of Service (QoS) attributes. When composing services into a workflow or plan to jointly accomplish processing toward a final result, the services must be compatible with respect to their input, output, and functionality for a temporally ordered interaction that can successfully complete the required task or query [1]. The QoS of the plan depends on individual service selections for these designated interactions.

A major problem with such web service composition is that QoS values can also change at execution time from original estimations. The service may become unavailable, unreliable, or no longer provide the best solution fit. Other services must be dynamically evaluated to complete the plan. These services are chosen from the same abstract type, a group of services with functionalities that can substitute or replace any service in their type [1]. Changing QoS values can disrupt the expected compliance of the plan to maintain certain thresholds, such as costs and response time. The impact is even more dramatic if the service lies within a loop with a large number of iterations in the composition. These non-periodic changes require a dynamic planning environment in which certain events force reselection from the physical services of the same abstract type(s) in which the service change(s) occurred to form a new, yet compliant plan.

QoS attributes are increasing-dimension or decreasing-dimension. Availability and reliability are increasing-dimension attributes because the resulting plan should incorporate the highest values associated with them. Cost and response time are decreasing-dimension attributes because the plan should incorporate the lowest values associated with them. Techniques for web services composition based on QoS optimization aim to maximize increasing-dimension attributes and minimize decreasing-dimension attributes, while at the same time maintaining any quality constraints imposed on the plan itself. These characteristics make the composition fall into the domain of multi-objective optimization. However, none of the techniques explored in this domain takes into account redundancy as an inherent property of composition.

In this paper, we propose a proactive approach that searches for an optimal plan with a lower risk of violating the constraints in the event that re-composition is needed. Our approach introduces replaceability as a metric applied to plan composition. We define replaceability as the degree to which a plan or a service is exchangeable with one that accomplishes the same goal or processing, respectively. By including a replaceability metric in the selection process, we significantly reduce the potential violation of constraints during plan execution that can result from QoS changes requiring service reselection. A major challenge is the impact of service reselection on the plan, since substituting a service for one whose QoS values have caused the plan to violate at least one constraint consumes time. Pre-knowledge of alternatives based on replaceability values counteracts this added factor and reduces the time spent in the process.

Our strategy performs reselection only when needed. To aid composition and reselection, we filter services based on dominance, since only those services likely to produce optimal solutions are involved in the plan processes. The following definition assumes a minimization problem.


A solution a is said to dominate solution b if a is not worse than b for all objectives and a is strictly better for at least one objective [2]. The concept of dominance is widely used in multi-objective optimization to find the Pareto-optimal front. The approach is applicable to areas that require optimal QoS compositions that, as strictly as possible, adhere to the stated plan constraints even in the case of service failures and QoS changes.
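As an illustration of the dominance test used for filtering, the following is a minimal Python sketch (not code from the paper); solutions are simply tuples of objective values under the minimization convention above.

# Illustrative sketch: Pareto dominance and nondominated filtering for a
# minimization problem, where each solution is a tuple of objective values.
def dominates(a, b):
    """True if a dominates b: no worse on every objective, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Keep only the solutions that no other solution dominates (Pareto front)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Example: objectives are (cost, response_time); lower is better for both.
front = nondominated([(4, 2), (3, 5), (5, 5), (2, 6)])
# (5, 5) is dominated by (4, 2) and (3, 5), so it is filtered out.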

Section 2 describes replaceability concepts and terminology. Section 3 defines replaceability as a metric. Section 4 discusses the datasets used as input to the fitness function and our comprehensive test results. Section 5 addresses replanning and its associated challenges. Section 6 overviews related research. Section 7 concludes the paper.

2 EXEMPLIFYING THE CONCEPT

To motivate the need for a replaceability metric, we introduce Fig. 1, which consists of 3 abstract service types and 4 concrete service instances for each abstract type. Using definitions established in [1], the three abstract service types are horizontally compatible in that they can interact within a plan. The services within each abstract type are vertically compatible in that they can replace a service of the same type within the plan. Each concrete service has an ordered pair of QoS values for (availability, cost). The objective is to find the best plan that maximizes availability, minimizes cost, and satisfies the organizational constraint that cost ≤ 10. For this example, we use availability/cost as a simple fitness function.

Assume a traditional selection algorithm chooses physical Plan 1 as optimal, with a 72.9 fitness value and satisfying the plan cost constraint. However, while executing, this plan fails for some reason at the last service instance. A general re-planning algorithm will find that all of the alternatives in Abstract Service Type 3 violate the plan cost constraint, because each concrete service has a higher cost. Though the plan cost is violated, the best service selection is 3.4 (8, 3), which makes the new plan have a total availability of 648 and a cost of 11.

In contrast, physical Plan 2 has a fitness value of 72, only slightly lower than Plan 1. Plan 2 similarly maintains the plan cost constraint. In the same failure occurrence, the re-planning algorithm can replace the last service with 3.4 (8, 3) and still satisfy the plan cost constraint. This occurs because Plan 2 started with more feasible siblings than Plan 1, while satisfying constraints and without sacrificing a large degree of fitness, which is calculated only on the starting plan. This example illustrates that an optimal plan can have infeasible siblings, resulting in a higher risk of violating the constraints when one of its services fails for any reason. This evidence suggests that selecting certain services with slightly less optimal QoS values but with feasible siblings can improve plan fault tolerance when composing web services.
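The arithmetic behind this example can be sketched as follows. The per-service (availability, cost) pairs below are hypothetical stand-ins for the Fig. 1 values, chosen only to reproduce the fitness and cost figures quoted above; availability aggregates multiplicatively and cost additively.

# Sketch of the example's arithmetic: fitness = aggregated availability / cost.
from math import prod

COST_LIMIT = 10  # organizational constraint: cost <= 10

def fitness(plan):
    availability = prod(a for a, _ in plan)
    cost = sum(c for _, c in plan)
    return availability / cost, cost

plan1 = [(9, 4), (9, 4), (9, 2)]   # hypothetical values: fitness 72.9, cost 10
plan2 = [(9, 3), (9, 4), (8, 2)]   # hypothetical values: fitness 72.0, cost 9

replacement = (8, 3)               # service 3.4 from the example
for plan in (plan1, plan2):
    repaired = plan[:-1] + [replacement]
    _, cost = fitness(repaired)
    print(cost <= COST_LIMIT)      # Plan 1 -> False (cost 11), Plan 2 -> True (cost 10)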

2.1 Terminology and Definitions

To understand staged service composition and QoS aggregation, Agarwal et al. [3] define web service composition as follows. Given as input Types = {t1, ..., tn}, a set of n web service types; Instances = {i1, ..., im}, a set of m advertised web service instances; and Executing = {x1, ..., xq}, a set of q deployed and executing services. The output of web service composition is an invocation of web service instances placed in an execution sequence Plan = <p1, ..., pτ>, where τ is the length of the invocation series and pi refers to an invocation of a deployed service instance.

Because a web service abstract type represents a group of semantically and functionally substitutable web service instances, the type defines the interface and the operations supported, such as may be given by the Web Service Definition Language (WSDL) specification of each service. Each web service instance can be characterized by its QoS values. A plan as formally defined above consists of a workflow or composition of services. For instance, within the plan, services can play the role of producer/consumer and their interaction follows a standard protocol, such as that specified using the Business Process Execution Language (BPEL). The actual service interface and the interaction protocol specifications are outside the scope of the paper.

Web service composition approaches include interleaved, monolithic, staged, and template-based [3]. Our approach focuses on staged composition because it provides loose coupling between the services and better fits the typical service-oriented architecture. Fig. 2 shows the phases of staged composition. The first phase generates an abstract workflow based on web service types, called a logical composition [3]. AWFlows = {af1, ..., afK} is a set of K abstract workflows selected after logical composition. The function RankAW ranks the abstract workflows. Given a specific feature or function needed from an abstract service type, several concrete service instances implementing the feature may be available [4]. Thus, the second phase selects the appropriate web service instances to form the physical composition, where WF = {W1, ..., WL} is a set of L executable workflows [3]. The final runtime phase executes the selected concrete plan. We target the physical composition, in which replanning is performed.

Fig. 1. Motivating example.

The ranking of physical compositions is normally based on QoS attributes, since they are the distinguishing factor for service instances. Table 1 shows the aggregation function for each QoS attribute, given four workflow patterns [4], [5], [6], [7], in which loop size is determined by annotating the loops with the number of iterations (k). N, M, and P are the number of sequential, probabilistic, and parallel invocations, respectively. Thus, each aggregation function calculates the QoS attribute value for the workflow depending on the pattern of execution of its services.
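Since Table 1 itself is not reproduced in this transcript, the following Python sketch illustrates pattern-based aggregation using the rules commonly cited in [4], [5], [6], [7]; it is an illustration under those assumptions, not the paper's implementation.

# Sketch of pattern-based QoS aggregation: sequences sum cost and response
# time and multiply availability; parallel blocks take the slowest branch's
# time; switches weight each branch by its invocation probability; loops
# repeat the body's QoS k times.
from math import prod

def seq(qos_list):
    return {"availability": prod(q["availability"] for q in qos_list),
            "cost": sum(q["cost"] for q in qos_list),
            "time": sum(q["time"] for q in qos_list)}

def parallel(qos_list):
    return {"availability": prod(q["availability"] for q in qos_list),
            "cost": sum(q["cost"] for q in qos_list),
            "time": max(q["time"] for q in qos_list)}

def switch(branches):  # branches: list of (probability, qos) pairs
    return {key: sum(p * q[key] for p, q in branches)
            for key in ("availability", "cost", "time")}

def loop(qos, k):  # loop annotated with k iterations
    return {"availability": qos["availability"] ** k,
            "cost": qos["cost"] * k,
            "time": qos["time"] * k}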

2.2 Problem Definition

We formulate the physical selection process as follows. Given the following input:

• LComp: a logical composition plan composed of τ service types; LComp = (t1, ..., tτ), where t1, ..., tτ ∈ Types.

• Y = {Y1, ..., Yτ}, where Yi is the set of deployed service instances associated with each service type ti. Note that Yi ⊆ Executing, the deployed service instances.

• F = {f1, ..., fε}, where fi is an aggregation function for a QoS attribute to be optimized as in Table 1. fi(Wj) computes the ith QoS value for physical composition Wj ∈ WF.

• CF = {cf1, ..., cfκ} ⊆ F, where cfi is the aggregated Constraint Function for a QoS attribute, with Li ≤ cfi ≤ Ui, where Ui and Li are upper and lower bound constants, respectively.

Given the motivation and terminology, we formally define the research problem. The expected output from our approach is Wj = {x1, ..., xτ}, where xi ∈ Yi and Wj ∈ WF, such that Wj:

• Maximizes fi if fi is increasing dimension or minimizes fi if fi is decreasing dimension.

• Satisfies the constraints associated with cfi.

• If xi causes a plan failure, there exists a physical plan W′ji, where W′ji satisfies the constraints and shares the same front slice (the portion of the plan that has been executed) with Wj (services from x1 to xi-1). Note also that W′ji ∈ WF.

Creating the physical composition is NP-hard, since it is analogous to the Knapsack Problem and the Resource Constrained Scheduling Problem [8]. Genetic algorithms provide a solution for working with this kind of problem domain, which involves a large number of concrete services [4].
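A typical genetic-algorithm encoding for this kind of problem is sketched below under our own assumptions; the paper does not prescribe this exact representation. One gene per abstract type holds an index into that type's candidate list, so crossover and mutation always yield valid plans.

# Hypothetical chromosome encoding for the genetic search (illustrative).
import random

def random_chromosome(candidates_per_type):
    """candidates_per_type: list of lists of concrete services, one per type."""
    return [random.randrange(len(c)) for c in candidates_per_type]

def decode(chromosome, candidates_per_type):
    """Map gene indices back to the concrete services forming a physical plan."""
    return [candidates_per_type[i][g] for i, g in enumerate(chromosome)]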

3 INTRODUCING REPLACEABILITY

Traditionally, genetic algorithms work with static environments where fitness values do not change and the search space is fixed. Our problem is a special case of the dynamic environment, since we do not re-optimize the whole solution in case of a change. We dynamically re-optimize only the portion of the plan that has not been executed (the slice). Thus, the resulting plan is a new optimization, but over a reduced slice.

As defined earlier, replaceability is the degree to which a plan or a service instance is exchangeable. It is a function of the plan, the desired QoS constraints, and the contributing dataset, and applies to both the plan and the service instances. Thus, the same web service instance can have different replaceability values in different plans. The same plan, when executed on a different dataset of service instances, can also obtain a different replaceability value. Similarly, the same plan on the same dataset but with different QoS constraints can have different replaceability values. Replaceability is the only property that changes by introducing or removing a web service instance from the set. Two plans are equally replaceable when their capability of replacing slices during a service failure or unavailability is the same. That is, when the corresponding service instance within the plan fails, both plans can replace their failed slices to the same degree, as expressed by the replaceability value associated with the plan. High replaceability yields more opportunities to replace a failing service, enabling the continued operation of a plan without violating constraints. Our process attempts to avoid constraint failures by considering redundancy, while maximizing increasing-dimension properties, minimizing decreasing-dimension properties, and maintaining the upper and lower constraints, so that the plan is maximally replaceable under the given constraints and with the given dataset.

TABLE 1. Aggregation Functions for Basic QoS Attributes

Fig. 2. Staged web service composition and execution [1].

3.1 Measuring Replaceability

A simplistic measure of replaceability might hold that ‘‘the greater the distance from the constraint, the more replaceable the plan.’’ Though this often applies, it is not always true. Even when the service is far from the constraint, we might not find a replacement in the chosen plan. A better measure of replaceability is to compute the number of feasible substitutes for the service, given a chosen plan. A feasible substitute will not violate any constraint. To calculate the replaceability of a service instance in the plan, we count the services belonging to the same abstract type such that, when one replaces the original service instance in the plan, it is a feasible substitute. In sequential and parallel invocations, the replaceability of a plan is the sum of the replaceability of the individual services. We must evaluate replaceability every time we evaluate the fitness function. Iterating over the services to calculate the plan's replaceability is computationally reasonable only when the number of services per abstract type is small. As the number increases, this process becomes computationally expensive. We overcome this problem by filtering the services that serve as substitutes, as described in Section 3.3.
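The counting procedure can be sketched as follows. The aggregate and constraints arguments stand in for the Table 1 aggregation functions and the plan's bounds, and the sketch covers only the sequential/parallel case described above.

# Sketch of counting direct substitutes: a substitute is feasible if swapping
# it into the plan keeps every aggregated constraint within bounds; plan
# replaceability is the sum over all service positions.
def direct_substitutes(plan, position, candidates, aggregate, constraints):
    count = 0
    for candidate in candidates:
        if candidate is plan[position]:
            continue
        trial = plan[:position] + [candidate] + plan[position + 1:]
        qos = aggregate(trial)
        if all(low <= qos[name] <= high for name, (low, high) in constraints.items()):
            count += 1
    return count

def plan_replaceability(plan, candidates_by_type, aggregate, constraints):
    return sum(direct_substitutes(plan, i, candidates_by_type[i], aggregate, constraints)
               for i in range(len(plan)))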

The replaceability of a probabilistic invocation is equal to the replaceability values assigned to the branches multiplied by the probability of invoking each branch. This evaluation ensures that branches with higher probability have a higher impact on replaceability. A branch with high probability and high replaceability has higher precedence than a branch with low probability and high replaceability. To determine when a substitute satisfies the constraint, we consider the QoS effect of its branch independent of the other branches on the switch. The evaluation must be performed as if the branch is the only one being executed and the others are removed from consideration with respect to the QoS values. Table 2 summarizes the replaceability aggregation functions for the workflow patterns in Table 1.
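The switch rule reduces to a probability-weighted sum, as in this small sketch; the branch replaceability values are assumed to have already been computed as if each branch executes alone.

# Sketch of the probabilistic (switch) replaceability rule.
def switch_replaceability(branches):
    """branches: list of (probability, branch_replaceability) pairs."""
    return sum(p * r for p, r in branches)

# Example: a high-probability branch contributes more to the total.
total = switch_replaceability([(0.8, 3), (0.2, 5)])  # -> 3.4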

Revisiting the example in Fig. 1 and without considering replaceability, we repeat Plan 1 in Table 3. We compute the set of substitutes and obtain the number of alternatives shown in the third row of Table 3. Service 3 has zero substitutes for Plan 1. If it fails, it is not replaceable. Our approach finds that the services in Plan 2 have substitutes, such that failure of any service leads to a direct replacement. Thus, Plan 2, with a replaceability value of 7, is more replaceable than Plan 1, with a replaceability value of 4, as shown in the last row of Table 3.

3.2 Direct and Indirect Substitutes

In the previous section, the substitutes computed are direct substitutes. Direct substitutes are feasible because they can replace the failing service without further adjustment to the plan. Logically, it is very easy to compute the number of direct substitutes. However, direct substitutes are not the only indication of replaceability. Indirect substitutes are feasible service instances that can replace the failing service after adjustments with other service replacements are made to the resulting plan. Though important to replaceability, finding the number of indirect substitutes is an NP-hard problem. Since the physical service composition is by itself NP-hard, we cannot have something computationally expensive (such as finding the number of indirect substitutes) as part of the fitness evaluation. Therefore, our approach searches for only direct substitutes. This limitation alters the definition of W′ji in the selection process to be a physical plan that shares all the services x except xi. The search now is for the direct substitute for xi, which is x′i.

A fully replaceable plan has at least one x′i for every xi in some Wj. A partially replaceable plan has a majority of fully replaceable web service instances. Risky points are service instances with zero substitutes (i.e., x′i does not exist). Risky points can lead to failures, thereby contributing to making a plan non-replaceable. Zero substitutes do not necessarily indicate that the plan is non-replaceable, since the plan could recover with indirect substitutes.

TABLE 2. Replaceability (P) Aggregation Functions

TABLE 3. Substitutes Computed for Fig. 1 Plans

Our algorithm seeks a fully replaceable plan with zero risky points, meaning the plan is a more fault-tolerant choice without concern for violating the constraints if any service within the plan becomes unavailable, fails, or alters its QoS values. Where no fully replaceable plan exists, the algorithm generates the most replaceable plans with the fewest risky points that satisfy the constraints. Because risky points are a significant challenge, we investigated them further to denote the different placements of risky points that allow safety to be better understood. For example, a risky point earlier in the plan leaves the plan with a greater opportunity to adjust and propose a new plan that continues to satisfy the constraints. Note that a risky point at the last plan position is the most risky, since the plan cannot recover if that service fails.

3.3 Filtering Web Services and Substitutes Search

Al-Helal and Gamble [10] use nondominated sets to filter the web services. The dominance referred to is in terms of all QoS objectives. The selected service in the optimal plan must belong to the nondominated set of the services of its abstract type. This form of filtering does not discard any relevant service for the search process. By applying this filtering approach, we do not need to search all services to identify substitutes and calculate the replaceability of the plan. We evaluate dominance in terms of the Constrained Function objectives (CF). Note that a deployed service (say yik ∈ Y) that belongs to the nondominated set in terms of all the objectives (F) does not have to also belong to the nondominated set in terms of CF.

For a deployed service (yik) that is dominated in terms of the constrained objectives, the dominating deployed services (⊆ Yi) can be direct substitutes. If the dominated service satisfies the constraints, then the dominating services will also, since they are equal or better in terms of all constrained objectives. For the case where yik is nondominated in terms of both the objectives (F) and the constrained objectives (CF), we start with one constraint, then two constraints, and then move to more than two constraints.

In the case of one constraint associated with cf1, it is sufficient to check the maximum (or minimum) service in terms of cf1 in order to determine whether yik has a direct substitute. However, if yik itself has that maximum (or minimum) cf1 value, then the algorithm examines the next service in Yi with the following maximum (or minimum) cf1 value. Therefore, in the case of one constraint with the selected service belonging to the nondominated set in terms of constrained objectives, only one service is checked in Yi to determine the replaceability of yik. To determine the replaceability of a plan Wj, we check τ services, where τ is the number of abstract types in the plan. That is, one service for each ti (abstract type) in Wj.
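A sketch of this one-constraint shortcut follows; the helper names are illustrative and assume a decreasing-dimension constraint such as cost.

# Sketch of the one-constraint check: only the best-valued sibling (other than
# the selected service) needs to be tested to decide whether a direct
# substitute exists.
def has_direct_substitute_one_constraint(selected, siblings, value, feasible):
    """value(s) returns the constrained QoS value (lower is better here);
    feasible(s) checks the aggregated plan constraint with s substituted."""
    others = sorted((s for s in siblings if s is not selected), key=value)
    return bool(others) and feasible(others[0])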

We consider the services in Fig. 3. Constraints cf1 and cf2 decrease in dimension. Assume that the selected service yik is the service in the circle in Fig. 3. If yik fails or becomes unavailable, then the nondominated set (Pareto front) for Yi will change and will include services from the next domination set. The question is whether we need to search all the services in the nondominated set and the level 1 domination set for direct substitutes. The answer is no. Remember that yik is a nondominated service in Yi. Therefore, no other service will be better than yik in terms of both cf1 and cf2. We can test a service that is better than yik in terms of cf1 and worse in terms of cf2, if such a service exists. There may be many services satisfying this criterion, so we choose the closest in terms of cf2 (the leftmost black service on the edge of the circle). Also, we need to test for a service that is better than yik in terms of cf2, if such a service exists. We choose the one closest in terms of cf1 (the service directly to the right and outside of the circle). In the case of two constraints with the selected service belonging to the nondominated set in terms of constrained objectives, only two services are checked in Yi in the worst case to determine the replaceability of yik. To determine the replaceability of a plan Wj, we must check 2τ services in the worst case. That is, two services for each ti (abstract type) in Wj.

For both cases of one constraint and two constraints, we determine the most likely substitutes for each service at the initialization phase. During the replaceability evaluation, we test only these services. This makes the complexity of the replaceability evaluation O(τ).

With more than two constraints, we cannot reduce the substitutes search in the same way as for one or two constraints. Any filtering is approximate and risks discarding some relevant services that could be useful for the search process. An example of approximate filtering is the nearest neighbors method. Instead of searching all services for a direct substitute, we search only the closest neighbors. In Fig. 3, the nearest neighbor search using the circle would return three services and would discard the service to the right and outside of the circle, which is more important than the services returned by the nearest neighbor search. However, with an increased number of services and an improved distribution, this technique will return better candidate services for composition.

The brute force [11] search technique stores all the Yi services in a sequential list sorted according to the tightest constraint cfj. The search scans the services sequentially until a direct substitute is found, and it stops scanning once a service violates the cfj constraint.
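A minimal sketch of this scan, assuming the candidate list is already sorted by the tightest constraint and that feasibility predicates are supplied:

# Sketch of the sorted brute-force substitute scan.
def brute_force_substitute(sorted_candidates, violates_tightest, is_direct_substitute):
    for candidate in sorted_candidates:
        if violates_tightest(candidate):
            return None            # sorted order: all later candidates are worse
        if is_direct_substitute(candidate):
            return candidate
    return None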

3.4 Position-Based Penalty

Risky points often differentiate between fully replaceable and partially replaceable plans. Since they are unfavorable, we penalize plans with risky points. We prefer a position-based penalty function over fixed penalties. A position-based penalty function is based on two observations: 1) risky points occurring at the end of the plan are always failure points and should accumulate additional penalties, and 2) the length of the plan and the number of substitutes that affect the plan's ability to recover influence the penalty.

Fig. 3. Services and domination levels.

The procedure in Fig. 4 shows how we calculate replaceability using a dynamic penalty. Note that the penalty changes with the replaceability of what comes after the service. The substitutesSumAfter procedure calculates the sum of the substitutes of the services executed after the current service. Loops and switches complicate the procedure. The substitutes search described is a brute force search, but the candidates can be sorted based on the tightest constraint to enhance its performance. When we encounter a substitute violating that constraint, we exit the loop (innerLoop). We terminate the search when we find a direct substitute.
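Because Fig. 4 is not reproduced here, the following sketch is only one interpretation of the position-based penalty it describes: a risky point is penalized more heavily when little replaceability remains after it, with the last position penalized most.

# Illustrative sketch (not the paper's exact procedure) of replaceability with
# a position-based penalty; substitute_counts[i] is the number of direct
# substitutes of service i.
def replaceability_with_penalty(substitute_counts, risk_weight=1.0):
    total = sum(substitute_counts)
    penalty = 0.0
    for i, count in enumerate(substitute_counts):
        if count == 0:                                   # risky point
            remaining = sum(substitute_counts[i + 1:])   # substitutesSumAfter
            # less replaceability remaining after the risky point -> larger penalty;
            # a risky point in the last position gets the maximum penalty
            penalty += risk_weight * (1.0 + 1.0 / (1.0 + remaining))
    return total - penalty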

4 APPLYING THE WEIGHTED SUM METHOD FOR PHYSICAL COMPOSITION

The output of our planning algorithm is the physical plan (chromosome) and the list of the direct substitutes for each service in the physical plan. To solve the physical composition problem, we use genetic algorithms with a fitness function constructed using the weighted sum method, which transforms a multi-objective problem into a mono-objective problem. Weights help distinguish among plans with conflicting objectives. The weights are selected based on user preferences and requirements. Weights not selected properly can produce undesirable results.

4.1 Datasets

We test two logical plans. Dataset A is the first logical plan, composed of sequential services. Dataset B is the second logical plan, described by Fig. 5, with a loop and a probabilistic switch. Each service instance is associated with four QoS attributes: availability, reliability, cost, and response time. Two types of datasets are used for testing. For the first type, we randomly created 14 datasets, numbered 1 through 14, with the number of services ranging from 35 to 100,000 and varying QoS values (see http://www.seat.utulsa.edu/wp-content/uploads/2012/11/Hussein-2012-WSdataset.zip). The range of QoS values is 1-9 for Datasets 1 through 5; this range is used for simpler computation. The range of QoS values for Datasets 7-14 is 1-99, which describes the values more realistically. The second dataset is the QWS dataset, a set of 2,507 actual services whose QoS measurements were collected by a web crawler [12], [13]. Each service in the QWS dataset is associated with nine QoS attributes, but cost is not among them. We note that the average service reliability value in this dataset was 70 percent, adding to the premise that services do fail. The services are not categorized semantically, so we set every 250 services to one abstract type.

Fig. 4. Calculating replaceability with dynamic penalty.

4.2 Fitness Function

The fitness function is given by Equation (1):

Fitness(Wj) = [w1 · Availability(Wj) + w2 · Reliability(Wj) + w3 · Replaceability(Wj)] / [w4 · Cost(Wj) + w5 · ResponseTime(Wj)] - Penalty(Wj)    (1)

Wj is the physical plan (solution); w1, w2, w3, w4, and w5 are user-provided weights. Replaceability is treated as a property with a weight that can be muted if not needed. Penalty() is the constraint violation penalty function. Aggregation, when combined with user constraints, dictates which plans are more replaceable. The weight w3 can be set to a dynamic value related to the probability of success of the physical plan or to a constant value. In our testing, we used a constant value, since we would like to avoid violating constraints not only in the case of service failure but also when a QoS value changes.

To calculate the penalty, we use the approach in [10], in which the penalty increases as the maximum/minimum QoS value's distance from the constraint increases. Special consideration is given to switches, where the extreme value of the switch is used to detect violations of the constraint caused by route selection in the switch. Infeasible solutions are penalized for violating any constraint.
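Equation (1) translates directly into a weighted-sum fitness routine, sketched here with the aggregated QoS values, replaceability, and penalty supplied as precomputed inputs; this is an illustration, not the authors' implementation.

# Sketch of the Equation (1) fitness: weighted benefits over weighted costs,
# minus the constraint-violation penalty.
def plan_fitness(qos, replaceability, penalty, weights):
    w1, w2, w3, w4, w5 = weights
    numerator = (w1 * qos["availability"] + w2 * qos["reliability"]
                 + w3 * replaceability)
    denominator = w4 * qos["cost"] + w5 * qos["response_time"]
    return numerator / denominator - penalty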

Regarding genetic algorithm parameters, we applied Binary Tournament Selection with elitism, where the best two solutions were copied to the next generation; Uniform Crossover with rate 0.9; Single Inversion Mutation with rate 0.1; and a population size of 100.

4.3 Testing the Use of Replaceability

The testing goal is to show that, by using replaceability, the resulting plans can better avoid violating the original plan constraints in cases of failures, introducing fault tolerance toward service changes. The results of these tests, shown in Table 7, are detailed later in the section. We test the position-based penalty to show that it produces a more replaceable plan than the fixed penalty. We test the use of nearest neighbor search for substitutes and show that it reduces the search convergence time. Thus, our research questions are: Does replaceability help in recovering from service failures? Can the position-based penalty help in finding partially replaceable plans? Will using nearest neighbor search reduce the search space needed to find direct substitutes?

For the QWS dataset, we assumed the user supplies the following constraints: 1) Availability ≥ 0.913 and Response Time ≤ 0.0864; 2) Availability ≥ 0.9 and Response Time ≤ 3590. In addition, the user supplies the following weights: w1 = 0.2, w2 = 0.2, w3 = 0.2, w4 = 0.2, and w5 = 0.2. Table 4 shows the number of substitutes for each service in the plan. We make three main observations. In the first test, the plan without replaceability failed and could not recover at the last two services, while the plan with replaceability did not have any risky points and never failed. In the second test, the plan without replaceability was able to recover at the sixth service although it was a risky point. This recovery is because of the presence of indirect substitutes. The final observation is also on the second test, where the plan's availability did not improve after introducing replaceability although it was constrained. Thus, having a constraint on an objective does not imply that its value will improve by considering replaceability.

For the random datasets, tests with different combinations of constraints were conducted. Every test was executed 10 times with two types of runs: 1) one that considered replaceability and 2) one without replaceability. We conducted tests to compare the position-based penalty function and the fixed penalty function.

The test is conducted on random Dataset 4-A, as shown in Table 5. Two physical plans alternated to become the optimal solution during the run. The plans have identical availability, cost, reliability, and response time. The plan selected using the fixed penalty function has its risky point in the ninth position (gene), whereas the plan selected with the position-based penalty has its risky point in the fourth position (gene). After forcing a failure at the risky points, the latter plan was able to recover while the former plan did not. Based on the position-based penalty, we force the algorithm to select the latter plan.

Next, we examine the rate of violations (failures to recover) for both algorithms. The rate of violations for ‘‘No Replaceability’’ is higher than for the algorithm with replaceability. Fig. 6 shows how the algorithm behaves with two constraints (cost and response time) if replaceability is not considered. Notice that the number of failures starts to rise at a low cost constraint. The cost constraint at which failures occur starts to decrease with the decrease in the response time constraint. The surface stays close to the zero level only over a small area compared to the next graph in Fig. 7, where replaceability is considered.

Fig. 5. Logical plan B.

We conducted tests on the direct substitutes search to compare nearest neighbor search with brute force search. In the brute force search, we searched the whole list of nondominated and level 1 domination substitutes, which are sorted according to the tightest constraint. In the nearest neighbor search, the list of possible substitutes for each service is reduced to its nearest neighbors within a specified Euclidean distance. Note that the nearest neighbor search might not reflect the true replaceability, as it might show some services at risky points as false positives. In our implementation, we used a kd-tree, a generalization of a binary tree, to perform this search [14].
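A sketch of the kd-tree neighbor lookup follows, assuming SciPy is available and that the constrained QoS values of sibling services are given as numeric vectors; the radius value is illustrative.

# Sketch of nearest-neighbor filtering in constrained-QoS space with a kd-tree.
import numpy as np
from scipy.spatial import KDTree

def neighbor_substitutes(selected_qos, sibling_qos, radius=5.0):
    """sibling_qos: array of constrained QoS vectors for the same abstract type.
    Returns indices of the candidate direct substitutes to test."""
    tree = KDTree(np.asarray(sibling_qos))
    return tree.query_ball_point(np.asarray(selected_qos), r=radius)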

Table 6 compares the results of the searches based on nearest neighbor and brute force. The table shows the Euclidean distance used for the nearest neighbor. Since the QoS ranges are different, the Euclidean distance is different. In general, with the increase of the size of the dataset, we can use a lower Euclidean distance. However, in some cases (as in Dataset 10-A), we had to use a higher Euclidean distance, since a lower value will not produce a replaceable plan. We make two observations from this test. First, with a suitable Euclidean distance, most of the cases show that searching the nearest neighbors for direct substitutes produces the same plan as searching all the possible substitutes. Second, it is possible that with the nearest neighbor search, we could obtain a different plan than with the full substitutes search, as in the cases of Datasets 8-A and 12-A.

The main advantage of reducing the substitutes search to the nearest neighbors is reducing the search complexity and thus the algorithm convergence time. Table 6 shows the average values for 100 tests. The convergence is gene convergence and is measured as the sum of Hamming distances between any two solutions in the population. The results in the table show the convergence time is much better for nearest neighbor than for brute force (BF).

In Table 7, we summarize the test results. The column denoted ‘‘Failures to Recover’’ indicates how many times the plan fails to recover when forcing the failure of every service instance independently. For example, two failures mean that at two different service instances, the plan failed to recover and did not generate a new plan that satisfied the constraint. For every dataset in Table 7, we use the same two types of runs, denoting the constrained objectives. It should be noted that in some cases there are tradeoffs and sometimes an objective must suffer. This objective does not have to be a non-constrained one. Table 7 shows in bold where the objectives do worse. The arrows indicate whether a value increases or decreases.

Overall, the results in Table 7 show that by considering replaceability, we obtain safer, more fault-tolerant plans that, in case of service failures, are less likely to violate plan constraints. The replaceability metric does not guarantee that we will never have non-replaceable plans. For example, in Datasets 5-A and 6-A with replaceability, the selected plans failed once. However, the number of failures is greatly reduced over the algorithm that does not consider replaceability.

TABLE 4. Test Results for QWS Dataset

TABLE 5. Test Results for Dataset 4-A

5 REPLANNING

Replanning using genetic algorithms consumes additional time. Canfora et al. [21] indicate that if replanning occurs during runtime, the response time constraint is sometimes violated. A better approach would have tighter time constraints and not require running the replanning algorithm at every failure. Knowledge of replaceability explicitly provides us with the feasible plan substitutes for immediate use if a chosen service fails. We also know whether replanning is necessary to avoid constraint violations. By including replaceability, the original plan minimizes the risky points. Three research questions arise in regard to replanning: When do we need to run a replanning algorithm? From where do we start replanning? How do we do the replanning?

To answer the first question, we examine the events that necessitate replanning. A service becomes unavailable, or it fails to respond correctly or within a time window. A service's QoS values change before execution. A service's QoS values change after execution. Due to the selection of the execution paths in switches or a change in the number of loop iterations, the plan's actual QoS values differ from the estimated ones. Fig. 8 shows the classification of these events.

The first case is that of services failing or becoming unavailable. There are two sub-cases to consider. The first sub-case is for non-risky points. If the plan does not violate the time constraint (we can afford to replan) and the failure point is not a risky point, then the algorithm should attempt re-optimizing the slice. There are two reasons for re-optimization. First, it could obtain better QoS values. Second, it could obtain a more replaceable plan. To lessen the need for replanning, the algorithm initially checks whether any of the direct substitutes will produce a fully replaceable plan. If the best direct substitute does not produce such a replaceable plan, then it moves to the re-optimization phase. In testing, we used all available replaceable plans and forced a failure at every non-risky point. In 75 percent of the cases where another fully replaceable plan existed, the use of the direct substitute produced this plan. In the remaining 25 percent, the plans produced by the use of the direct substitute had some risky points but were partially replaceable. On the other hand, in the presence of an execution time constraint, if the time constraint can be violated by re-optimization, then the algorithm uses the best direct substitute. If another replaceable plan exists, most often the direct substitute will work. The other sub-case is when the failing/unavailable service is a risky point. The only option is to re-optimize the slice to avoid any violation of the constraint and reduce the damage.

The second case is when the actual service QoS values change before execution. The change could be detected before or after the service execution. An example of a change detected before execution is when the cost changes. If the change detected before execution violates the constraint or exceeds a certain threshold, then we follow the decision making described for the failing/unavailable service. Thus, we avoid the effect of the change in QoS values. As seen in Fig. 8, this case is joined with the first case because we handle them in the same way.

The third case occurs when the service QoS values change after execution. The main example of a change detected after execution is when the actual response time differs from the given one. If the change is detected after the execution and the new QoS values violate the constraint or exceed the threshold, then the algorithm must replan the workflow. The fourth case is when the estimated number of iterations in the loop changes. This change also impacts the QoS values. If the new QoS values violate the constraints, then we need to replan. The final case is when the plan's actual QoS values deviate from the estimated ones because of the selection of the execution path. The penalty function in Equation (1) handles the case of switches. If the maximum/minimum possible QoS values violate the constraint, then the plan is considered infeasible and is penalized. Thus, for feasible plans, there is no constraint violation on any branch.
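The decision logic for the joined first and second cases can be summarized in a sketch like the one below; the helper functions (is_risky, can_afford_replanning, and so on) are illustrative placeholders rather than an interface defined in the paper.

# Sketch of handling a failure or a pre-execution QoS change at one position.
def handle_failure_or_preexecution_change(plan, position, is_risky,
                                          can_afford_replanning,
                                          best_direct_substitute,
                                          reoptimize_slice):
    # Risky point: no direct substitute exists, so re-optimizing the
    # unexecuted slice is the only option left to limit the damage.
    if is_risky(position):
        return reoptimize_slice(plan, position)
    if can_afford_replanning():
        # Try the best direct substitute first; fall back to slice
        # re-optimization when it does not yield a fully replaceable plan.
        substitute_plan = best_direct_substitute(plan, position)
        return substitute_plan or reoptimize_slice(plan, position)
    # Under a tight time constraint, take the best direct substitute.
    return best_direct_substitute(plan, position)

# QoS changes detected after execution, or loop-count changes, that violate
# the constraints trigger reoptimize_slice directly (cases three and four).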

If a new service is added or becomes available and this addition is detected, then no replanning is needed. The algorithm compares the newly added service to its associated one in the plan, if the latter has not been executed yet. If it dominates the one in the plan, then we switch to the new service. Since the new one has better QoS values, the algorithm evaluates the substitutes again.

Fig. 6. No replaceability vs. cost and response time constraints, Dataset 3-A.

Fig. 7. With replaceability vs. cost and response time constraints, Dataset 3-A.

6 RELATED WORK

Andrikopoulos et al. [1] recognize the potential for ‘‘spurious results and inconsistencies’’ when services exhibit uncontrolled changes. They construct a framework to control service evolution, which accounts for structural, behavioral, and QoS-induced changes to services that may cause a service to go offline or change its QoS values. These changes can occur after service discovery and insertion, and prior to execution, undermining the plan's use of the service because it may no longer meet expected QoS thresholds.

TABLE 6. Comparing Brute Force and Nearest Neighbor for Searching Replaceable Plans

TABLE 7. Results for Random Dataset Runs


There is significant research activity on various aspects of and approaches to web service composition, though typically only the initial composition is formulated. Clustering can be applied to the service functions and functional plan behavior expectations to form the sets of abstract types [15]. Semantics, in terms of annotating web services beyond the basic WSDL interface specification, can be used to better facilitate composition. SAWSDL (Semantic Annotation for WSDL and XML Schema) can be input to service suggestion algorithms to rank services based on functionality annotations to assist with composition [16]. Semantics can also be used with intelligent planners to compose services based on usage and plan knowledge [17]. Though this broker framework performs dynamic discovery and execution coordination of a small set of services, it examines best fit only in terms of functionality.

In [18], Leitner et al. use the association of QoS attributes with Service Level Agreements (SLAs) to minimize the cost of SLA violations in service compositions. Service Level Objectives (SLOs) embody the numerical associations with specific QoS attributes. As part of the SLA, if the SLOs are not honored by the service, the organization providing the service may incur a fee. This consequence can result in the need for runtime prediction of service performance pertaining to when a QoS value might change. Providers can then improve QoS properties, such as reducing response time, but this too comes at a cost. Leitner et al. examine the associated cost tradeoff as an optimization problem in terms of expected QoS values versus the cost of their violations. They must derive a set of specific adaptation actions for every usable service against its potential for SLA violation. Their goal is to improve the existing service by optimizing its adaptations for the composition, rather than selecting a service that functions similarly and will not violate its SLA.

Canfora et al. [4] and Jaeger and Muhl [19] use a single-objective genetic algorithm to perform physical web service composition that finds the best solution to maximize/minimize objectives and satisfy user constraints. Their fitness function uses the Weighted Sum Method, where the normalized, individual QoS values assign one value to the chromosome. To compose, the requestor supplies weights for the QoS attributes before applying the fitness function. Schuller et al. [20] propose aggregation functions for QoS parameters to reformulate the non-linear problem as a linear optimization problem. They similarly assume that abstract service types are composed for a particular query and examine cost, execution time, reliability, and throughput. Because of the computational complexity of their integer linear programming solution, they must use a mixed integer linear programming technique, but achieve only a valid, though probably non-optimal, solution that satisfies user preferences and constraints.

Canfora et al. [21] incorporate replanning of the service bindings when an undesirable event occurs. If the event results in a large deviation of QoS or violates a user constraint, then replanning is triggered to ‘‘reduce the damage.’’ Besides the user constraint, their algorithm requires as a parameter a QoS threshold that, when exceeded, triggers replanning, which tries to identify the plan slice still to be executed. The algorithms compute the slice of the workflows that include loops, switches, and sequences, then run the genetic algorithm on this slice to determine the optimal physical composition that maximizes/minimizes fitness and maintains constraints. Their results are weakened by the potential for response time violation after replanning, since the algorithm is rerun for each service failure. As a reactive approach, their selection of the optimal plan does not consider potential future service failures. Thus, alternative replacement services are not explicitly made available that can automatically meet user and QoS constraints upon their insertion into the plan. An adaptive replanning mechanism by Na et al. [22] places more emphasis on the cause-effect relationship among system execution states, solution space changes, replanning strategies, and the potential effects on system adaptations. Their approach also triggers replanning, but it is still a reactive approach that can be improved by introducing the concept of risky points. Their approach uses Pareto dominance to manage the solution space, which is what we use to manage the space. They detail an adaptation framework and provide dynamic optimization strategy selection. Their framework can be extended to include our strategies as part of its repositories.

Al-Helal and Gamble [10] define a method to filter web services based on dominance without discarding any relevant services that could be part of an optimally constructed plan. This filtering approach is coupled with a new method to handle loops, called partial unfolding. Compared to unlooping and complete unfolding, partial unfolding with filtering improves the fitness values without exponentially raising the composition time. In this work, only a limited number of service instances can belong to the same abstract type.

Fig. 8. Classification of the undesirable events that could trigger re-optimization.

7 OBSERVATIONS AND CONCLUSION

In this paper, we introduce replaceability as a metric for determining a web service composition. Our algorithm finds a plan that is equal to or more fault-tolerant than the plan proposed by the traditional algorithm, unless such a plan does not exist. Our algorithm filters the services search space based on all QoS attributes and filters the substitutes search space based on the constrained QoS attributes. We compare the position-based penalty with a fixed penalty and show the advantage of such a penalty function in reducing risky points in the plan. The approximated results show that our approach greatly improves the convergence time, but true replaceability largely depends on the choice of Euclidean distance.

We plan to test a multi-objective genetic algorithm such as NSGA-II [2] to find a replaceable optimal physical plan. With NSGA-II, we can find the Pareto front of the optimal plans and use replaceability as a determining factor for selection. Because it may be necessary to substitute a plan for a workflow that cannot complete, instead of a local service substitution, future work will examine methods for incorporating indirect substitutes into the fitness evaluation.

REFERENCES

[1] V. Andrikopoulos, S. Benbernou, and M. Papazoglou, "On the Evolution of Services," IEEE Trans. Softw. Eng., vol. 38, no. 3, pp. 609-628, May/June 2012.

[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182-197, Apr. 2002.

[3] V. Agarwal, G. Chafle, S. Mittal, and B. Srivastava, "Understanding Approaches for Web Service Composition and Execution," in Proc. 1st Bangalore Comput. Conf., 2008, p. 1.

[4] G. Canfora, M. Penta, R. Esposito, and M.L. Villani, "An Approach for QoS-Aware Service Composition Based on Genetic Algorithms," in Proc. GECCO, 2005, pp. 1069-1075.

[5] L. Zeng, B. Benatallah, A.H.H. Ngu, M. Dumas, J. Kalagnanam, and H. Chang, "QoS-Aware Middleware for Web Services Composition," IEEE Trans. Softw. Eng., vol. 30, no. 5, pp. 311-327, May 2004.

[6] J. Cardoso, A. Sheth, J. Miller, J. Arnold, and K. Kochut, "Quality of Service and Semantic Composition of Workflows," J. Web Semant., vol. 1, no. 3, pp. 281-308, 2004.

[7] M.C. Jaeger, G. Rojec-Goldmann, and G. Muehl, "QoS Aggregation for Web Service Composition using Workflow Patterns," in Proc. 8th IEEE Int. Enterprise Distrib. Object Comput. Conf., 2004, pp. 149-159.

[8] M.C. Jaeger, G. Muhl, and S. Golze, "QoS-Aware Composition of Web Services: An Evaluation of Selection Algorithms," in Proc. Confederated Int. Conf. CoopIS, DOA, ODBASE, Nov. 2005, pp. 646-661.

[9] H. Al-Helal, "Web services composition based on QoS and replaceability with space reduction," M.S. thesis, Dept. Comput. Sci., Univ. Tulsa, Tulsa, OK, USA, 2009.

[10] H. Al-Helal and R. Gamble, "Web Service Composition with Dominance-Based Filtering and Partial Unfolded Loops," in Proc. GEM, 2010, pp. 71-77.

[11] J.L. Bentley and J.H. Friedman, "A Survey of Algorithms and Data Structures for Range Searching," ACM Comput. Surveys, vol. 11, no. 4, pp. 397-409, Dec. 1979.

[12] E. Al-Masri and Q.H. Mahmoud, "Discovering the Best Web Service," in Proc. 16th Int. Conf. World Wide Web, 2007, pp. 1257-1258.

[13] E. Al-Masri and Q.H. Mahmoud, "QoS-Based Discovery and Ranking of Web Services," in Proc. IEEE 16th Int. Conf. Comput. Commun. Netw., 2007, pp. 529-534.

[14] J.L. Bentley, J.H. Friedman, and R.A. Finkel, "An Algorithm for Finding Best Matches in Logarithmic Expected Time," ACM Trans. Math. Softw., vol. 3, no. 3, pp. 209-226, Sept. 1977.

[15] X. Wang, Z. Wang, and X. Xu, "Semi-Empirical Service Composition: A Clustering Based Approach," in Proc. IEEE Int. Conf. Web Serv., 2011, pp. 219-226.

[16] R. Wang, C. Guttula, M. Panahiazar, H. Yousaf, J. Miller, E. Kraemer, and J. Kissinger, "Web Service Composition using Service Suggestions," in Proc. IEEE Serv. Congr., 2011, pp. 482-489.

[17] Y. Zhang and H. Zhu, "An Intelligent Broker Approach to Semantics-Based Service Composition," in Proc. IEEE Comput. Softw. Appl. Conf., 2011, pp. 20-25.

[18] P. Leitner, W. Hummer, and S. Dustdar, "Cost-Based Optimization of Service Compositions," IEEE Trans. Services Comput., vol. 6, no. 2, pp. 239-251, Apr./June 2011.

[19] M.C. Jaeger and G. Muhl, "QoS-Based Selection of Services: The Implementation of a Genetic Algorithm," in Proc. KiVS Workshop, Serv.-Orient. Archit. Serv.-Orient. Comput., Mar. 2007, pp. 1-12.

[20] D. Schuller, J. Eckert, A. Miede, S. Schulte, and R. Steinmetz, "QoS-Aware Service Composition for Complex Workflows," in Proc. 5th Int. Conf. Internet, 2010, pp. 333-338.

[21] G. Canfora, M. Penta, R. Esposito, and M.L. Villani, "QoS-Aware Replanning of Composite Web Services," in Proc. Int. Conf. Web Serv., 2005, pp. 121-129.

[22] J. Na, G. Li, B. Zhang, L. Zhang, and Z. Zhu, "An Adaptive Replanning Mechanism for Dependable Service-Based Systems," in Proc. IEEE Int. Conf. E-Business Eng., 2010, pp. 262-269.

Hussein Al-Helal received the BS degree in software engineering from King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, in 2006, and the MS degree in computer science from the University of Tulsa, Tulsa, OK, USA, in 2009. Currently, he is working for the EXPEC Computer Center, Saudi Aramco, Dhahran, Saudi Arabia.

Rose Gamble received the BS degree in mathematics and computer science from Westminster College, New Wilmington, PA, USA, in 1986, and the MS and DSc degrees in computer science from Washington University, St. Louis, MO, USA, in 1992. She is a Professor in the Tandy School of Computer Science at the University of Tulsa, Tulsa, OK, USA, where she directs the Software Engineering and Architecture Team. Her research interests include cloud computing, web services, security, and collaborative learning. She is a member of the IEEE Computer Society.
