QUASOSS 2010 page 1
Model-based Performance Analysis of
Service-Oriented Systems
Dorina C. Petriu Carleton University
Department of Systems and Computer Engineering Ottawa, Canada, K1S 5B6
http://www.sce.carleton.ca/faculty/petriu.html
QUASOSS 2010 page 2
Analysis of Non-Functional Properties Model-Driven Engineering enables the analysis of non-functional properties
(NFP) of software models
examples of NFPs: performance, scalability, reliability, security, etc.
many existing formalisms and tools for NFP analysis
queueing networks, Petri nets, process algebras, Markov chains, fault trees, probabilistic timed automata, formal logic, etc.
research challenge: bridge the gap between MDD and existing NFP analysis formalisms and tools rather than "reinventing the wheel"
Approach:
add annotations expressing different NFPs to software models
define model transformations from annotated software models to different NFP analysis models
using existing solvers, analyze the NFP models and give feedback to designers
In the UML world: define extensions as UML Profiles for expressing NFPs
UML Profile for Schedulability, Performance and Time (SPT)
UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE)
QUASOSS 2010 page 3
Software performance evaluation in MDE
Software performance evaluation in the context of Model-Driven Engineering:
starting point: UML software model used also for code generation
add performance annotations (using the MARTE profile)
generate a performance analysis model
queueing networks, Petri nets, stochastic process algebra, Markov chain, etc.
solve analysis model to obtain quantitative results
analyze results and give feedback to designers
[Figure: the tool chain. A UML + MARTE software model built in a UML tool undergoes a model-to-model transformation into a performance model; a performance analysis tool produces performance analysis results, returned as feedback to designers. A parallel model-to-code transformation generates the software code.]
QUASOSS 2010 page 4
Transformation Target: Performance Models
QUASOSS 2010 page 5
Performance modeling formalisms
Analytic models
Queueing Networks (QN)
capture well contention for resources
efficient analytical solutions exist for a class of QN ("separable" QN): steady-state performance measures can be derived without resorting to the underlying state space.
Stochastic Petri Nets
good flow models, but not as good for resource contention
Markov chain-based solution suffers from state space explosion
Stochastic Process Algebra
introduced in the mid-90s by merging Process Algebra and Markov Chains
Stochastic Automata Networks
communicating automata synchronized by events; random execution times
Markov chain-based solution (corresponds to the system state space)
Simulation models
less constrained in their modeling power, can capture more details
harder to build and more expensive to solve (the model must be run repeatedly).
QUASOSS 2010 page 6
Queueing Networks (QN)
Queueing network model = a directed graph: nodes are service centres, each representing a resource;
customers, representing the jobs, flow through the system and compete for these resources;
arcs with associated routing probabilities (or visit ratios) determine the paths that customers take through the network.
used to model systems with stochastic characteristics
multiple customer classes: each class has its own workload intensity (the arrival rate or number of customers), service demands and visit ratios
bottleneck service center: saturates first (highest demand, utilization)
[Figure: an open QN system (arrivals flow through CPU, Disk 1 and Disk 2, then out) and a closed QN system (a fixed set of Terminals cycles through the CPU and Disks).]
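The bottleneck can be located directly from the service demands; a minimal Python sketch using the utilization law (the demand and throughput numbers below are assumptions for illustration, not from the slides):

```python
# Locating the bottleneck service center via the utilization law
# U_k = X * D_k, where X is throughput and D_k the demand at center k.

demands = {"CPU": 0.005, "Disk1": 0.030, "Disk2": 0.027}  # sec/job, assumed
X = 20.0                                                  # jobs/sec, assumed
U = {center: X * d for center, d in demands.items()}
bottleneck = max(U, key=U.get)    # the highest-demand center saturates first
print(U)                          # {'CPU': 0.1, 'Disk1': 0.6, 'Disk2': 0.54}
print("bottleneck:", bottleneck)  # Disk1
```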
QUASOSS 2010 page 7
Single Service Center: Non-linear Performance
Typical non-linear behaviour for queue length and waiting time
server reaches saturation at a certain arrival rate (utilization close to 1)
at low workload intensity: an arriving customer meets low competition, so its residence time is roughly equal to its service demand
as the workload intensity rises, congestion increases, and the residence time along with it
as the service center approaches saturation, small increases in arrival rate result in dramatic increases in residence time.
[Plots: Utilization vs. Arrival Rate; Residence Time vs. Arrival Rate; Queue Length vs. Arrival Rate.]
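The curves summarized above follow directly from single-queue formulas; a small Python sketch, assuming an M/M/1 service center (an assumption, since the slide does not fix a queueing discipline):

```python
# M/M/1 sketch of the non-linear behaviour. For arrival rate lam and
# service demand s (service rate 1/s):
#   utilization    U = lam * s
#   residence time R = s / (1 - U)   -> grows sharply as U approaches 1
#   queue length   N = lam * R       (Little's law)

def mm1(lam, s=1.0):
    U = lam * s
    if U >= 1.0:
        raise ValueError("saturated: utilization >= 1")
    R = s / (1.0 - U)
    return U, R, lam * R

for lam in (0.2, 0.5, 0.8, 0.95):
    U, R, N = mm1(lam)
    print(f"lam={lam:.2f}  U={U:.2f}  R={R:.2f}  N={N:.2f}")
```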
QUASOSS 2010 page 8
Layered Queueing Network (LQN) model http://www.sce.carleton.ca/rads/lqn/lqn-documentation
LQN is an extension of QN
models both software tasks (rectangles) and hardware devices (circles)
represents nested services (a server is also a client to other servers)
software components have entries corresponding to different services
arcs represent service requests (synchronous, asynchronous and forwarding)
multi-servers used to model components with internal concurrency
[Figure: LQN example. Tasks (rectangles) ClientT, Appl and DB with task entries clientE; service1, service2; query1, query2; host devices (circles) Client CPU, Appl CPU, DB CPU, Disk1 and Disk2.]
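A rough sketch of these model elements as Python data structures (the class names and demand values are illustrative, not the actual LQN file format):

```python
from dataclasses import dataclass, field

@dataclass
class Entry:                  # a service offered by a task
    name: str
    demand_ms: float          # host execution demand (assumed values below)

@dataclass
class Task:                   # software task, runs on a host device
    name: str
    host: str
    entries: list = field(default_factory=list)
    multiplicity: int = 1     # >1 models a multi-server (internal concurrency)

client = Task("ClientT", "ClientCPU", [Entry("clientE", 5.0)])
appl = Task("Appl", "ApplCPU", [Entry("service1", 2.0), Entry("service2", 3.0)])
db = Task("DB", "DBCPU", [Entry("query1", 10.0), Entry("query2", 8.0)])

# request arcs: (caller entry, called entry, kind) -- nested services
requests = [("clientE", "service1", "synchronous"),
            ("service1", "query1", "synchronous")]
```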
QUASOSS 2010 page 9
LQN extensions: activities, fork/join
[Figure: LQN with activities and fork/join. Local Client (1..n, entry e1) and Remote Client (1..m, entry e2) tasks call the Web Server (entries e3, e4); e4 is refined into activities a1..a4 connected by fork/join (&). The Web Server calls the eComm Server (entry e5), which calls the Secure DB (entry e6) and DB (entry e7). Hosts include Local Wks, Remote Wks, Internet, Web Proc, eComm Proc, Secure Proc, DB Proc, Disk and SDisk.]
QUASOSS 2010 page 10
Performance versus Schedulability
Difference between performance and schedulability analysis
performance analysis: timing properties of best-effort and soft real-time systems
e.g., information processing systems, web-based applications and services, enterprise systems, multimedia, telecommunications
schedulability analysis: applied to hard real-time systems with strict deadlines
analysis often based on worst-case execution time, deterministic assumptions
Statistical performance results (analysis outputs):
mean (and variance) of throughput, delay (response time), queue length
resource utilization
probability of missing a target response time
Input parameters to the analysis - also probabilistic:
random arrival process
random execution time for an operation
probability of requesting a resource
Performance models represent a system at runtime
must include characteristics of software application and underlying platforms
QUASOSS 2010 page 11
UML Profiles for performance annotations:
SPT and MARTE
QUASOSS 2010 page 12
UML SPT Profile Structure
[Figure: SPT profile packages. The General Resource Modeling Framework contains «profile» RTresourceModeling, imported by «profile» RTconcurrencyModeling and «profile» RTtimeModeling. The Analysis Models («profile» PAprofile, «profile» SAProfile, «profile» RSAprofile) import these foundations; the Infrastructure Models include the «modelLibrary» RealTimeCORBAModel.]
QUASOSS 2010 page 13
SPT Performance Profile: Fundamental concepts
Scenarios define execution paths with externally visible end points.
QoS requirements can be placed on scenarios.
Each scenario is executed by a workload:
open workload: requests arriving in some predetermined pattern
closed workload: a fixed number of active or potential users or jobs
Scenario steps: the elements of scenarios joined by predecessor-successor relationships which may include forks, joins and loops.
a step may be an elementary operation or a whole sub-scenario
Resources are used by scenario steps. Quantitative resource demands for each step must be given in performance annotations.
The main reason for building performance models is to compute additional delays due to the competition for resources!
Performance results include resource utilizations, waiting times, response times, throughputs.
Performance analysis is applied to real-time systems with stochastic characteristics and soft deadlines (use mean value analysis methods).
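A small Python sketch contrasting the two workload kinds (Poisson arrivals are an assumption for the open case; the profile only requires some predetermined pattern):

```python
import random

def open_interarrivals(rate_per_s, n):
    # open workload: a stream of requests arriving in a stochastic pattern
    return [random.expovariate(rate_per_s) for _ in range(n)]

# closed workload: a fixed population of users cycling through the scenario
closed = {"population": 50, "think_time_s": 30.0}

mean = sum(open_interarrivals(10.0, 10_000)) / 10_000
print(f"mean interarrival ~ {mean:.3f} s")   # ~0.100 s for rate 10/s
```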
QUASOSS 2010 page 14
SPT Performance Profile: the domain model
[Figure: SPT Performance Profile domain model. A PerformanceContext contains Workloads, PScenarios and PResources. Workload (responseTime, priority) is specialized into ClosedWorkload (population, externalDelay) and OpenWorkload (occurrencePattern); each workload drives a root PScenario. PScenario (hostExecDemand, responseTime) consists of ordered PSteps (probability, repetition, delay, operations, interval, executionTime) linked by predecessor/successor associations; a step may itself be a sub-scenario. PResource (utilization, schedulingPolicy, throughput) is specialized into PProcessingResource (processingRate, contextSwitchTime, priorityRange, isPreemptible), which hosts scenarios, and PPassiveResource (waitingTime, responseTime, capacity, accessTime).]
QUASOSS 2010 page 15
MARTE overview
MARTE domain model:
MarteFoundations: foundations for modeling and analysis of RT/E systems: CoreElements, NFPs, Time, generic resource modeling, generic component modeling, Allocation
MarteAnalysisModel: specialization of the MARTE foundations for annotating models for analysis purposes: generic quantitative analysis, schedulability analysis, performance analysis
MarteDesignModel: specialization of the MARTE foundations for modeling purposes (specification, design, etc.): RT/E model of computation and communication, software resource modeling, hardware resource modeling
QUASOSS 2010 page 16
GQAM dependencies and architecture
GQAM (Generic Quantitative Analysis Modeling): Common concepts for analysis
SAM: Modeling support for schedulability analysis techniques.
PAM: Modeling support for performance analysis techniques.
[Figure: package dependencies. GQAM imports the Time, GRM and NFPs packages and the «modelLibrary» MARTE_Library; SAM and PAM each import GQAM; GQAM contains GQAM_Workload, GQAM_Resources and GQAM_Observers.]
QUASOSS 2010 page 17
Annotated deployment diagram
«execHost» dbHost: {commRcvOverhead = (0.14,ms/KB), commTxOverhead = (0.07,ms/KB), resMult = 3}
«execHost» ebHost: {commRcvOverhead = (0.15,ms/KB), commTxOverhead = (0.1,ms/KB), resMult = 5}
«execHost» webServerHost: {commRcvOverhead = (0.1,ms/KB), commTxOverhead = (0.2,ms/KB)}
«commHost» internet: {blockT = (100,us)}
«commHost» lan: {blockT = (10,us), capacity = (100,Mb/s)}
[Figure: deployment diagram. Artifacts databaseA, webServerA and ebA manifest the Database, WebServer and EBrowser instances and are deployed on dbHost, webServerHost and ebHost, connected by the lan and internet communication hosts.]
blockT describes a pure latency for the link; commRcvOvh and commTxOvh are host-specific costs of receiving and sending messages; resMult = 5 describes a symmetric multiprocessor with 5 processors.
QUASOSS 2010 page 18
Simple scenario
«PaRunTInstance» webServer: WebServer {poolSize = (webthreads=80), instance = webserver}
«PaRunTInstance» database: Database {poolSize = (dbthreads=5), instance = database}
eb: EBrowser
step 1 «PaStep» «PaWorkloadEvent»: {open (interArrT = (exp(17,ms))), hostDemand = (4.5,ms)}
step 2 «PaStep» «PaCommStep»: {hostDemand = (12.4,ms), rep = (1.3,-,mean), msgSize = (2,KB)}
step 3 «PaCommStep»: {msgSize = (50,KB)}
step 4 «PaCommStep»: {msgSize = (75,KB)}
the initial step is stereotyped for workload (open), execution demand and request message size
a swimlane or lifeline stereotyped «PaRunTInstance» references a runtime active instance; poolSize specifies the multiplicity
QUASOSS 2010 page 19
Transformation Principles from SModels to PModels
QUASOSS 2010 page 20
UML model for performance analysis
For performance analysis, a UML model should contain:
Key use cases realized by representative scenarios
• frequently executed, with performance constraints
Resources used by each scenario
resource types: active or passive, physical or logical, hardware or software
• examples: processor, disk, process, software server, lock, buffer
quantitative resource demands for each scenario step
• how much, how many times?
Workload intensity for each scenario
open workload: arrival rate of requests for the scenario
closed workload: number of simultaneous users
QUASOSS 2010 page 21
Direct UML to LQN Transformation: our first approach
Mapping principle:
software and hardware resources → service centres
scenarios → job flow from centre to centre
Generate LQN model structure (tasks, devices and their connections) from the structural view:
active software instances → LQN tasks
map deployment nodes → LQN devices
Generate LQN detailed elements (entries, phases, activities and their parameters) from the behavioural view:
identify communication patterns in key scenarios due to architectural patterns
client/server, forwarding server chain, pipeline, blackboard, etc.
aggregate scenario steps according to each pattern and map to entries, phases, etc.
compute LQN parameters from resource demands of scenario steps.
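A rough sketch of the structural mapping just listed, as plain Python dictionaries (illustrative, not the actual transformation code):

```python
def map_structure(active_instances, deployment):
    """active_instances: names of active software instances;
    deployment: {instance name: deployment node}."""
    tasks = {i: {"kind": "LQN task", "host": deployment[i]} for i in active_instances}
    devices = {n: {"kind": "LQN device"} for n in set(deployment.values())}
    return tasks, devices

tasks, devices = map_structure(
    ["Client", "WebServer", "Database"],
    {"Client": "ProcC", "WebServer": "ProcS", "Database": "ProcDB"},
)
print(sorted(devices))   # ['ProcC', 'ProcDB', 'ProcS']
```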
QUASOSS 2010 page 22
Generating the LQN model structure
a) High-level architecture: Client (1..n), WebServer and Database components connected through CLIENT-SERVER collaborations.
b) Deployment: clients on ProcC nodes reaching the server over Modem and Internet; the server on ProcS attached to a LAN; the database on ProcDB with Disk1.
Generated LQN model structure:
Software tasks generated for high-level software components according to the architectural patterns used.
Hardware tasks generated for devices from the deployment diagram.
[Figure: the resulting LQN, with «process» tasks User1..UserN, WebServer and Database on hosts ProcC1..ProcCN («Modem», «Internet»), ProcS («LAN»), and ProcDB with its «disk» Disk1.]
QUASOSS 2010 page 23
Client Server Pattern
a) Client Server collaboration
Structure: the participants (Client 1..n, Server 1) and their ClientServer relationship.
b) Client Server behaviour
Behaviour: synchronous communication style; the client sends the request and remains blocked until the server replies.
[Figure: sequence. The Client requests service and waits for the reply; the Server serves the request and replies, then optionally completes the service; the Client continues its work.]
QUASOSS 2010 page 24
Mapping the Client Server Pattern to LQN
[Figure: mapping the Client Server pattern scenario to LQN. The client steps (request service and wait for reply) map to entry e1, phase 1 of the Client task; the server steps map to entry e2 of the Server task, with "serve request and reply" in phase 1 and the optional "complete service" in phase 2. Tasks run on the Client CPU and Server CPU.]
For each subset of scenario steps mapped to an LQN phase or activity, compute the execution time S:
S = Σ_{i=1..n} r_i · s_i
where r_i = number of repetitions and s_i = execution time of step i.
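For instance, in Python (the step repetitions and demands below are assumed values):

```python
# S = sum over steps of r_i * s_i, per the formula above
steps = [(1, 4.5), (1.3, 12.4), (2, 0.8)]   # (repetitions r_i, demand s_i in ms)
S = sum(r * s for r, s in steps)
print(f"phase execution time S = {S:.2f} ms")  # 22.22 ms
```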
QUASOSS 2010 page 25
Identify patterns in a scenario
[Figure: annotated scenario across UserInterface, ECommServ and DBMS, with steps "browse and select items" («PAstep» {PArep = $r}), "check valid item code", "add item to query", "sanitize query", "add to invoice", "log transaction", "generate page" and "display", each annotated «PAstep» {PAdemand = ('assm', 'mean', $md1..$md3, 'ms')}. The steps are aggregated into LQN phases: UserInterface entry e1 [ph1] calls EComm Server entry e2 [ph1], which calls DBMS entry e3 [ph1, ph2].]
QUASOSS 2010 page 26
Transformation using a pivot language
QUASOSS 2010 page 27
Pivot languages
A pivot language, also called a bridge or intermediate language, can be used as an intermediary for translation.
It avoids the combinatorial explosion of translators across every combination of languages: direct transformations from N source languages to M target languages require N*M transformations, whereas a pivot language needs only N+M (e.g., for N = 5 and M = 4: 20 versus 9). Each transformation also crosses a smaller semantic gap.
Examples of pivot languages for performance analysis: Core Scenario Model (CSM), KLAPER, PMIF + S-PMIF, Palladio.
[Figure: source languages L1..LN connected to target languages L'1..L'M directly, versus through the pivot language Lp.]
QUASOSS 2010 page 28
Core Scenario Model
CSM: a pivot Domain Specific Language used in the PUMA project at Carleton University (Performance from Unified Model Analysis)
Semantically, it sits between the software and performance domains:
focused on scenarios and resources
performance data is intrinsic to CSM: quantitative resource demands made by scenario steps, and the workload
[Figure: PUMA transformation chain. UML+SPT, UML+MARTE and UCM models are transformed into CSM, and from CSM into LQN, QN, Petri Net or simulation models.]
QUASOSS 2010 page 29
CSM metamodel: basic scenario elements, closely based on the SPT Performance Profile
Steps with components and hosts (Processor resources)
a Step may be refined as a sub-scenario
precedence among Steps: sequence, branch, merge, fork, join, loop
resources and acquire/release operations on them, inferred for component-based resources (processes)
Four kinds of resources in CSM:
ProcessingResource (a node in a deployment diagram)
ComponentResource (a process or active object): a component in a deployment, a lifeline in a SD corresponding to a runtime component, or a swimlane in an AD
LogicalResource (declared as a GRM resource)
extOp resource: an implied resource that executes external operations
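A rough sketch of these core elements as Python classes (illustrative names, not the actual CSM schema):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str     # "Processing", "Component", "Logical", or "extOp"

@dataclass
class Step:
    name: str
    host: Resource          # processing resource executing the step
    demand_ms: float        # quantitative demand (assumed value below)

buffer = Resource("Buffer", "Logical")
cpu = Resource("ApplicCPU", "Processing")

# a scenario fragment with explicit acquire/release around a step
scenario = [("ResAcq", buffer),
            ("Step", Step("storeImage", cpu, 3.0)),
            ("ResRel", buffer)]
```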
QUASOSS 2010 page 30
CSM example: Acquire Video scenario
[Figure: CSM of the «Scenario» procOneImage (Acquire Video). Start and End points, Steps (getBuffer, allocBuf, getImage, passImage, storeImage, store, writeImg, freeBuf, releaseBuf), resource acquire/release elements (ResAcq, ResRel), connectors and a Fork. Component resources: AcquireProc, BufferManager, StoreProc, Database; processing resources: Applic CPU and DB CPU; passive resource: Buffer; external operations (ExtOp): network and write.]
QUASOSS 2010 page 31
Resource Context
A resource context covers the subgraph of steps holding a resource.
It is important for the transformation from CSM to a performance model.
Processor resources are inferred from deployment.
Resource contexts need not be nested, e.g., the handover of the buffer.
[Figure: the Acquire Video CSM with resource contexts outlined: VideoController, GetImage, BufferManager, Buffer, StoreProc and Database contexts; the Buffer context spans the handover between components.]
QUASOSS 2010 page 32
PUMA approach
[Figure: PUMA approach. A software model with performance annotations (Smodel) is processed by "Extract CSM from Smodel" (S2C) into a Core Scenario Model (CSM); "Convert CSM to some performance model language" (C2P) produces the performance model (Pmodel). Solving the Pmodel supports exploring the solution space, yielding performance results and design advice used to improve the Smodel.]
PUMA project: Performance from Unified Model Analysis
QUASOSS 2010 page 33
Extending PUMA for SOA
QUASOSS 2010 page 34
PUMA4SOA
Layered software model: business processes invoking services
PIM to PSM: platform modelled as aspects
[Figure: PUMA4SOA chain. The PIM (SOA system model with performance annotations), a middleware aspect model and the deployment diagram of the primary model are composed into the PSM (SOA system model with performance annotations); the PSM is transformed to CSM, and from CSM to the performance model; performance results are fed back to the designer and drive exploration of the solution space.]
QUASOSS 2010 page 35
Eligibility Referral System: Process Model
QUASOSS 2010 page 36
Service Architecture Model
QUASOSS 2010 page 37
Service Behaviour Model
[Figure: behaviour model with join points for platform aspects.]
QUASOSS 2010 page 38
Deployment of the primary model
[Figure: deployment with Admission, Transferring and Insurance nodes.]
QUASOSS 2010 page 39
Generic Aspect Model: Service Request Invocation (deployment view)
The platform (the deployed middleware) offers its own services to the applications
e.g., service invocation, service response
Platform services are represented as a generic aspect model to be woven into the primary model
Similar to the "completion" technique used in existing work to add platform details.
[Figure: deployment view with a Client node and a Provider node.]
QUASOSS 2010 page 40
Generic aspect model: Service Request Invocation (behavioral view)
[Figure: invocation sequence showing marshaling (to XML), the SOAP message, and unmarshaling (from XML).]
QUASOSS 2010 page 41
Generic aspect model: Service Response (behavioral view)
[Figure: response sequence showing marshaling (to XML), the SOAP message, and unmarshaling (from XML).]
QUASOSS 2010 page 42
Binding generic to concrete resources
Generic names (parameters) are bound to concrete names corresponding to the context of the join point
Sometimes new resources are added to the primary model
QUASOSS 2010 page 43
Binding performance annotation variables
Annotation variables allowed in MARTE are used as generic performance annotations
They are bound to concrete, reusable platform-specific annotations
QUASOSS 2010 page 44
PSM: scenario after composition
[Figure: the PSM scenario after composition, showing the composed service invocation aspect and the composed service response aspect.]
QUASOSS 2010 page 45
CSM example for Eligibility Referral
CSM models are automatically generated from UML+SPT or UML+MARTE models, from behaviour diagrams (Sequence or Activity) and deployment diagrams
Aspect weaving can be implemented at:
UML level (as shown)
CSM level
LQN level
QUASOSS 2010 page 46
Aspect composition in CSM
Why compose in CSM?
UML has a complex metamodel
current tools make it awkward to add/manipulate performance annotations
different behaviour representations (e.g., activities and interactions)
working with CSM makes sense for performance analysis
behaviour and resources in the same model
unified view of behaviour from different UML representations
[Figure: the annotated UML primary model and the annotated UML aspect model are each extracted into CSM (PUMA tools); the primary and aspect CSMs are composed; the composed CSM is converted (PUMA tools) into the performance model (LQN).]
QUASOSS 2010 page 47
Aspect composition at LQN level
[Figure: effect of aspect composition at the LQN level. a) Primary model; b) composed aspects based on forwarding; c) composed aspects based on rendezvous; d) aggregating middleware tasks in the composed model.]
QUASOSS 2010 page 48
Eligibility Referral System: LQN Model
QUASOSS 2010 page 49
Finer service granularity
[Plots: a) finer granularity, response time (sec) vs. # of users; b) finer granularity, throughput vs. # of users; curves A to E, for 0 to 120 users.]
A: The base case; multiplicity of all tasks and hardware devices is 1, except for the number of users. Transferring processor is the system bottleneck.
B: Resolve the bottleneck by increasing the multiplicity of the bottleneck processor node to 4 processors. Only a slight improvement, because the next bottleneck, the middleware task MW_NA, kicks in.
C: The software bottleneck is resolved by multi-threading MW_NA.
D: Increasing to 2 the number of disk units for Disk1 and adding threads to the next software bottleneck tasks, dm1 and MW_DM1. The throughput goes up by 24% with respect to case C. The bottleneck moves to the DM1 processor.
E: Increasing the number of DM1 processors to 2 has a considerable effect.
QUASOSS 2010 page 50
Coarser service granularity
A: The base case. The software task dm1 is the initial bottleneck.
B: The software bottleneck is resolved by multi-threading dm1. The response time is reduced slightly and the bottleneck moves to Disk1.
C: Increasing to 2 the number of disk units for Disk1 has a considerable effect. The maximum throughput goes up by 60% with respect to case B. The bottleneck moves to the Transferring processor.
D: Increasing the multiplicity of the Transferring processor to 2, and adding threads to the next software bottleneck task, MW_NA; the throughput grows by 11%.
[Plots: c) coarser granularity, response time (sec) vs. # of users; d) coarser granularity, throughput vs. # of users; curves A to D, for 0 to 120 users.]
QUASOSS 2010 page 51
Coarser versus finer service granularity
Difference between the D cases of the two alternatives. The compared configurations have similar numbers of processors, disks and threads, except that the latter performs fewer service invocations through the web service middleware.
[Plots: e) response time (sec) vs. # of users and f) throughput vs. # of users, comparing finer and coarser service granularity, for 0 to 120 users.]
QUASOSS 2010 page 52
Analyzing Performance Effects of SOA Patterns
QUASOSS 2010 page 53
Metamodel for change propagation
[Figure: metamodel for change propagation. An SModel maps to a PModel, and SElements map to PElements. A Problem is solved by a SOA Pattern; the pattern's Solution Description determines Application Rules; a rule applies to affected SElements and determines SModel Changes and PModel Changes, each with Parameters; a PModel Change has affected PElements. Kinds of PModel change: Task Replication, Multiplicity Change, Redeployment, ShrinkExec, MoreAsynch, Partitioning, Batching.]
QUASOSS 2010 page 54
Approach for incremental change propagation from SModel to PModel based on SOA patterns
[Figure: approach. Non-functional properties analysis results from the PModel are used to identify the problem and choose a SOA pattern; the application rules are extracted from the pattern; the affected SElements and PElements are identified; the changes are determined and applied to the PModel; performance analysis is then rerun on the updated PModel.]
QUASOSS 2010 page 55
Examples of SOA Patterns
"Functional Decomposition" SOA Pattern
Application instruction: depending on the nature of the large problem, a service-oriented process can be created to cleanly deconstruct it into smaller problems.
Application Rule:
Condition: there is a large service with multiple sub-functions in the system.
Action: split the service into smaller services.
"Asynchronous Queuing" SOA Pattern
Application instruction: the portion of a service that can be processed without blocking the requestor should be identified and postponed until after an intermediate response message to the requestor.
Application Rule:
Condition: there is a service with a long processing time in the system.
Action: provide the requestor with an intermediate response and postpone the rest of the processing until after the intermediate response is sent.
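A minimal sketch of such an application rule as a condition/action pair (the thresholds and service representation are hypothetical, not the tooling from the talk):

```python
def fd_condition(service):
    # "large service with multiple sub-functions" (threshold assumed)
    return service["subfunctions"] > 1 and service["demand_ms"] > 500

def fd_action(service):
    # split into one smaller service per sub-function
    share = service["demand_ms"] / service["subfunctions"]
    return [{"name": f"{service['name']}-{i}", "subfunctions": 1, "demand_ms": share}
            for i in range(service["subfunctions"])]

svc = {"name": "Shopping&Browsing", "subfunctions": 2, "demand_ms": 700}
if fd_condition(svc):
    print(fd_action(svc))   # two smaller services of 350 ms each
```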
QUASOSS 2010 page 56
SModel example: Browsing sequence diagram
[Figure: Browsing sequence diagram among User, Shopping and Browsing Services, Catalogue Service and DB Server. Browse Catalogue; Create Product Catalogue; Filter Products / Products (DB Server); Format into Catalogue; Return Products Catalogue; Send Catalogue; Check out Shopping cart; Order Confirmation.]
QUASOSS 2010 page 57
SModel example: Shopping sequence diagram
[Figure: Shopping sequence diagram among User, Shopping and Browsing Services, Order Processing Service and Payment Processing. Browse Catalogue / Return Catalogue; Check out Shopping cart; Place Order; Process Payment; Validate Payment Info; Confirm Payment; Order Placed; Order Confirmation.]
QUASOSS 2010 page 58
LQN model before and after the first pattern
[Figure: LQN model before and after the first pattern. Tasks and entries: users (User, think time Z=30s); Network (Entrynet, pure delay 1 ms); Shopping & Browsing Service (EntryServ1 [s=0.2ms], EntryServ2 [s=0.5ms]) on Shop&BrowP; Order Processing Service (EntryPay1 [s=0.8ms], EntryPay2 [s=0.4ms]) on Orderp; Payment Processing (EntryVisa [s=100ms], EntryMasterCard [s=100ms]) on Payp; Product Service (EntryCT [s=1s]) on CTp; DB (EntryDCT [s=50ms]) on DBp; request arcs carry visit ratios 1, 0.5, 3, 1.5 and 2. After the Functional Decomposition pattern, the Shopping & Browsing Service is split into a Browsing Service (BrowP) and a Shopping Service (ShopP).]
QUASOSS 2010 page 59
Results
Assess the performance effectiveness of applying SOA patterns
Three cases: 1) before; 2) after the first; 3) after the second pattern
First pattern Functional Decomposition: aims to manage the software complexity
big performance improvement effect because it affects the software bottleneck task
Second pattern Asynchronous Queueing:
Aims to improve performance
Practically no performance effect, because it is applied to a task that is not performance-critical.
QUASOSS 2010 page 60
Conclusions
Integrating performance analysis with model-driven development of service-oriented systems has many potential benefits
For service consumers: how to choose the "best" services available
For service providers: how to design and configure their systems to optimize the use of resources and meet performance requirements
For software developers: analyze design and configuration alternatives, evaluate tradeoffs
For performance analysts: automate the generation of PModels from SModels to keep them in sync, reuse platform performance annotations
A lot more to do, theoretically and practically
merging performance modeling and measurements
use runtime monitoring data for better performance models
use performance models to support runtime changes (autonomic systems)
applying variability modeling to service-oriented systems
manage runtime changes, adapt to context
provide ideas and background for building better tools.