BenchFlow, a Framework for Benchmarking BPMN 2.0 Workflow Management Systems
Architecture, Design and Web Information Systems Engineering Group
Vincenzo Ferme, Ana Ivanchikj, Cesare Pautasso Faculty of Informatics
University of Lugano (USI) Switzerland
A FRAMEWORK FOR BENCHMARKING BPMN 2.0
WORKFLOW MANAGEMENT SYSTEMS
BENCHFLOW
BPMN 2.0: A Widely Adopted Standard
https://en.wikipedia.org/wiki/List_of_BPMN_2.0_engines
Year   Number   Cumulative
2009      1          1
2010      5          6
2011      4         10
2012      1         11
2013      8         19
2014      2         21
2015      0         21
Grand Total: 21

[Chart: Number of BPMN 2.0 WfMSs (0-25) by Year of the First Version Supporting BPMN 2.0, 2009-2015]

Aug 2009: BPMN 2.0 Beta
Jan 2011: BPMN 2.0
Jan 2014: BPMN 2.0.2 (ISO/IEC 19510)
Workflow Management System's Main Components

Application Server hosting the WES:
• Core Engine: Process Navigator, Job Executor
• Task Dispatcher: interacts with Users and their Application
• Event Handler and Service Invoker: interact with Web Services and Applications
• Persistence Manager and Transaction Manager: interact with the Instance Database (DBMS)
Workflow Management System's Diversification

Supported Languages
• BPMN, BPEL, Petri Nets, YAWL
System's Architecture
• Distributed workflow support
• Migrating workflow objects support
• Transactional workflow support
Functionalities
• Dynamic workflow changes
• Integration capabilities
Deployment Infrastructure
• Standalone
• Cluster Deployment
• Cloud Deployment
• Mobile Deployment
The BenchFlow Project

"Design the first benchmark to assess and compare the performance of WfMSs that are compliant with the Business Process Model and Notation 2.0 standard."
BenchFlow Framework: Requirements & Functionalities

System Under Test (SUT)
• Automate the SUT deployment
• Simplify the SUT's deployment configuration
• Adapt to the different APIs provided by different WfMSs
• Deal with the asynchronous execution of business processes
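The SUT requirements above suggest a thin adapter layer that hides each vendor's deployment and instantiation API behind one interface. A minimal Python sketch; the class and method names are hypothetical illustrations, not BenchFlow's actual code:

```python
from abc import ABC, abstractmethod

class WfMSAdapter(ABC):
    """Uniform facade over one vendor's WfMS API (illustrative sketch)."""

    @abstractmethod
    def deploy(self, model_path: str) -> str:
        """Deploy a BPMN 2.0 model; return a definition id."""

    @abstractmethod
    def start_instance(self, definition_id: str) -> str:
        """Request a new process instance; return its instance id."""

    @abstractmethod
    def is_completed(self, instance_id: str) -> bool:
        """Poll for completion: business processes execute asynchronously."""

class FakeEngineAdapter(WfMSAdapter):
    """In-memory stand-in for a real engine, useful to test a load driver."""

    def __init__(self):
        self._instances = {}
        self._next_id = 0

    def deploy(self, model_path):
        return f"def-{model_path}"

    def start_instance(self, definition_id):
        self._next_id += 1
        iid = f"{definition_id}#{self._next_id}"
        self._instances[iid] = True  # the fake completes instantly
        return iid

    def is_completed(self, instance_id):
        return self._instances.get(instance_id, False)
```

A benchmark driver written against `WfMSAdapter` can then target any engine for which an adapter exists, which is one way to satisfy the "adapt to different APIs" requirement.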
BenchFlow Framework: Requirements & Functionalities

Performance Benchmark
• Simulate all the entities interacting with the WfMS
• Accommodate and automate different kinds of performance tests
• Ensure reliable execution
• Ensure repeatability
• Automate the performance data collection and analyses

Similar Tools: SOABench, SOArMetrics, Betsy, LoadUI + SoapUI
BenchFlow Framework

[Diagram: test execution and analyses pipeline]
Test Execution (Faban): Faban Drivers, with per-WfMS Adapters, run in Containers on dedicated Servers; the harness drives the WES, a simulated Web Service answers its outgoing calls, and a MONITOR plus COLLECTORS gather raw data from the WES and the Instance Database (DBMS).
Analyses: Data Mappers and DATA CLEANERS prepare the collected data, and ANALYSERS compute the Performance Metrics and Performance KPIs.
Performance Metrics and KPIs

TEST PROCESS: Start → Empty Script Task → Timer (wait 2 seconds) → End

[Chart: LOAD FUNCTION — active Users (0-30) over time (0-300 s)]

TEST ENVIRONMENT
Load Drivers: CPU 64 Cores @ 1400 MHz, RAM 128 GB
WES: CPU 12 Cores @ 800 MHz, RAM 64 GB
DBMS: CPU 64 Cores @ 2300 MHz, RAM 128 GB
Network: 10 Gbit/s
Throughput

Test process: Empty Script Task, then a Timer waiting 2 seconds (Fig. 2: Experiment Business Process Model)
4.1 Experiment Description and Set-up

This performance testing experiment is based on the following elements [11]:
1) Workload: the instances of the BP models set executed by the WfE during the experiment. Given the limited scope of this experiment we use only one simple BP model, presented in Fig. 2. The Script task is an empty script. The Timer event defines a wait time of 2s before allowing the process to continue.
2) Load Function: the function handling the system's load. In our case the Load function determines how the BP instances are initiated by a variable number of simulated users (since we are testing the system's scalability), growing from 5 to 150 concurrent users. Due to the experiment's simplicity the load driver executes the Load function only once when the test starts. The duration of the load function is 300s with a 30s ramp-up period. The ramp-up period defines the transition from none to all simulated users being active. This means that it takes 30s before all users start issuing requests for BP instance instantiation. For example, in an experiment with 5 users, a new simulated user is created every 6s during the ramp-up period. After becoming active, each user issues one BP instance start request per second.
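The ramp-up arithmetic above (5 users over a 30 s ramp-up means one new user every 6 s) can be checked with a small helper. This is a sketch under the assumption that users start at evenly spaced intervals beginning at t = 0; Faban's exact scheduling may differ:

```python
def active_users(t: float, total_users: int, ramp_up: float = 30.0) -> int:
    """Number of simulated users active at time t (seconds), assuming one
    user starts every ramp_up/total_users seconds from t = 0 onwards."""
    if t < 0:
        return 0
    if t >= ramp_up:
        return total_users  # after the ramp-up, all users are active
    interval = ramp_up / total_users  # e.g. 30 s / 5 users = 6 s
    return min(total_users, int(t / interval) + 1)
```

For the 5-user example: one user at t = 0, a second at t = 6, and all five are active by the end of the 30 s ramp-up; each then issues one start request per second for the rest of the 300 s run.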
3) Test environment: the characteristics of the hardware used to run the experiment. We use three servers (Fig. 1): one for the harness executing the load driver, one for the WfE and one for the Database Management System (DBMS). We deploy the WfE on the least powerful machine (12 CPU Cores at 800MHz, 64GB of RAM) to ensure that the machine where we deploy the Load driver (64 CPU Cores at 1400MHz, 128GB of RAM) can issue sufficient load and that the DBMS (64 CPU Cores at 2300MHz, 128GB of RAM) can handle the requests from the WfE. After each test we verify the absence of measurement noise by checking the environment metrics (CPU, RAM and network usage) and the WfE logs to ensure that all the BP instances are completed.
We run the experiment on two open-source WfMSs supporting native execution of BPMN2. We test them on top of Apache Tomcat 7.0.59 using Oracle Java 8 and MySQL Community Server 5.5.42. We use the default configuration as specified on the vendors' websites.
4.2 Results

The first metric we analyze is the Throughput = #BP Instances (bp) / Time (s) [9, ch. 11]. As per Fig. 3, Engine B does not scale well after 25 users and its throughput starts degrading after 50 users. Engine A can handle a load of up to 125 users, with the throughput decreasing abruptly at 135-150 users. Thus the Capacity of the WfEs can be estimated at fewer than 135 users for Engine A and fewer than 50 users for Engine B.
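The throughput definition, and the way capacity is read off the throughput curve, can be made concrete with a short sketch. The sample figures below are hypothetical, not measured values from the experiment:

```python
def throughput(completed_instances: int, elapsed_seconds: float) -> float:
    """Throughput = #BP instances / time, in business processes per second."""
    return completed_instances / elapsed_seconds

def capacity(tput_by_load: dict) -> int:
    """Largest user load before throughput starts decreasing: a naive
    capacity estimate from a {concurrent_users: throughput} map."""
    loads = sorted(tput_by_load)
    cap = loads[0]
    for prev, cur in zip(loads, loads[1:]):
        if tput_by_load[cur] < tput_by_load[prev]:
            break  # throughput degraded: stop at the previous load level
        cap = cur
    return cap

# Hypothetical run: 30,000 instances completed in a 300 s test window
print(throughput(30_000, 300.0))  # 100.0 bp/s

# Hypothetical Engine-B-like curve: throughput peaks, then degrades
print(capacity({5: 10.0, 25: 40.0, 50: 45.0, 75: 30.0}))  # 50
```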
[Fig. 3: Throughput (bp/s) of Engine A and Engine B, over 0-150 Concurrent Users]

[Fig. 4: Aggregated Process Instance Duration Comparison — box plots of Instance Duration (s) per number of Concurrent Users: (a) Engine A, (b) Engine B]
The BP instance duration is the time difference between the start and the completion of a BP instance. It is presented in the box and whisker plots in Fig. 4(a) for Engine A and Fig. 4(b) for Engine B. This type of plot displays the analyzed data in quartiles: the box contains the second and third quartile, while the median is the line inside the box. The lines outside of the box, called whiskers, show the minimum and maximum value of the data [10]. The measurements show that Engine A scales better since it starts having an unexpected behaviour after 125 concurrent users, while the first execution performance problems of Engine B appear at 50 users, as evident from the instance duration increase of one order of magnitude. In Fig. 5 we report Engine A's CPU utilization for each of the tests. It is interesting to notice that while the instance duration increases substantially starting from 135 concurrent users (Fig. 4(a)), the CPU utilization decreases, indicating that the slowdown of the WfE is not caused by lack of resources. The same has been verified by checking the CPU/RAM utilization of the DBMS.
After noticing a bottleneck in performance scaling, we investigate the causes. Since only two constructs, a Script task and a Timer event, are used in the experiment BP model, we test the WfE performance in handling each of them individually. The test processes used consist of a Start event, the tested construct and an End event. As per the previously gathered information we focus on the critical number of users (125/135 for Engine A and 25/50 for Engine B). We use the delay metric, which compares the expected to the actual duration of the
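The five-number summary behind a box-and-whisker plot can be computed with Python's standard statistics module; a minimal sketch:

```python
from statistics import quantiles

def box_stats(durations):
    """Five-number summary used by a box-and-whisker plot:
    (min, Q1, median, Q3, max). The box spans Q1..Q3 (the second and
    third quartiles), the median is the line inside the box, and the
    whiskers run to the minimum and maximum of the data."""
    q1, median, q3 = quantiles(durations, n=4)  # three quartile cut points
    return min(durations), q1, median, q3, max(durations)

print(box_stats([1, 2, 3, 4, 5, 6, 7]))  # (1, 2.0, 4.0, 6.0, 7)
```

Note that `statistics.quantiles` uses the "exclusive" method by default; other quartile conventions give slightly different cut points on small samples.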
Instance Duration Time and CPU Utilisation
[Fig. 5: Aggregated CPU Usage (%) of Engine A, over 5-150 Concurrent Users]

[Fig. 6: Script Task and Timer Event feature comparison — Construct Instance Delay (s) per number of Concurrent Users: (a) Engine A, (b) Engine B]
construct execution. The expected duration of the Timer is 2s, while the empty Script task should take 0s to complete. The delay measurements (Fig. 6) show that both WfEs handle the Script task efficiently, with an average delay below 10ms. The same does not hold for the Timer. For Engine A, the average delay of the Timer at 135 users is three orders of magnitude greater than at 125 users. For Engine B, the delay increases by two orders of magnitude between 25 and 50 users. The observed system behaviour could be due to an excessive overhead introduced by concurrently handling many Timers, which could cause growth in the Timers queue, thus postponing their execution and increasing their delay.
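The delay metric described above can be sketched in a few lines; the construct names and the sample durations below are illustrative, not BenchFlow's identifiers or measured data:

```python
# Expected duration of each tested construct, in seconds: the Timer waits
# 2 s by design, while an empty Script task should take 0 s to complete.
EXPECTED_DURATION = {"script_task": 0.0, "timer_event": 2.0}

def construct_delay(construct: str, actual_duration: float) -> float:
    """Delay = actual - expected duration of one construct execution."""
    return actual_duration - EXPECTED_DURATION[construct]

def average_delay(construct: str, actual_durations) -> float:
    """Mean delay over a set of measured construct executions."""
    return sum(construct_delay(construct, d)
               for d in actual_durations) / len(actual_durations)

# Hypothetical samples: a well-behaved engine keeps the average
# script-task delay below 10 ms
print(average_delay("script_task", [0.004, 0.006, 0.005]))
```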
5 Conclusion and Future Work

The BenchFlow framework greatly simplifies the performance benchmarking of BPMN2 WfMSs, by abstracting the heterogeneity of their interfaces and automating their deployment, the data collection and the metrics and Key Performance Indicators (KPIs) computation. It does so by relying on Faban and Docker, and by verifying the absence of noise in the performance measurements. While the complexity of BPMN2 makes it challenging to benchmark the performance of the WfMSs implementing it, the benefits of doing so are evident. The first experimental results obtained with a simple BP model running on two popular open-source WfMSs have uncovered important scalability issues. We have discussed the identified performance bottlenecks with the WfMS vendors who have clarified the probable cause. Namely, in Engine A we have used a different DBMS configuration in the setup of the system. In Engine B the goal of the
Future Work

Experiments
• Perform the first real-world experiments

BenchFlow Framework
• Increase the number of supported WfMSs
• Simplify and automate the execution of common performance tests: Load Test, Spike Test, Scalability Test, …
• Release a development version on GitHub (benchflow)
Highlights
• Workflow Management System
• BenchFlow Project
• BenchFlow Framework
• Proof of Concept
Call for Action

Process Models
• We want to characterise the Workload using real-world process models
• Send us your executable BPMN process models, even anonymised!

Execution Logs
• We want to characterise the Workload using real-world behaviours
• Send us your execution logs, even anonymised!
Vincenzo Ferme (@VincenzoFerme), Ana Ivanchikj, Cesare Pautasso Faculty of Informatics
University of Lugano (USI) Switzerland
A FRAMEWORK FOR BENCHMARKING BPMN 2.0
WORKFLOW MANAGEMENT SYSTEMS
BENCHFLOW
http://benchflow.inf.usi.ch
Join Us @ ICWE 2016 in Lugano!
http://icwe2016.inf.usi.ch