Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme, Faculty of Informatics, USI Lugano, Switzerland

Transcript of Workflow Engine Performance Benchmarking with BenchFlow

Page 1: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme, Faculty of Informatics, USI Lugano, Switzerland

WORKFLOW ENGINE PERFORMANCE BENCHMARKING WITH BENCHFLOW

Page 2: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

2

The BenchFlow Project

Design and implement the first benchmark to assess and compare the performance of WfMSs that are compliant with the Business Process Model and Notation (BPMN) 2.0 standard.

Page 3: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

3

What is a Workflow Management System?

[Diagram: WfMS Users and Applications interact with the WfMS, which runs on an Application Server, executes process models (tasks A, B, C, D), invokes Web Services, and stores execution state in an Instance Database managed by a DBMS.]

Page 4: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

4

Many Vendors of BPMN 2.0 WfMSs

https://en.wikipedia.org/wiki/List_of_BPMN_2.0_engines

BPMN 2.0 timeline: Aug 2009: BPMN 2.0 (beta); Jan 2011: BPMN 2.0; Jan 2014: BPMN 2.0.2 (ISO/IEC 19510)

Year of the first version supporting BPMN 2.0:

  Year    New WfMSs    Cumulative
  2009        1             1
  2010        5             6
  2011        4            10
  2012        1            11
  2013        8            19
  2014        2            21
  2015        2            23
  2016        0            23

  Grand Total: 23

[Chart: number of BPMN 2.0 WfMSs (0 to 25) per year of the first version supporting BPMN 2.0, 2009 to 2016]

Page 5: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

5

Why do We Need a Benchmark? (end-users, vendors, developers)

Page 6: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

5

Why do We Need a Benchmark? (end-users, vendors, developers)

1. How to choose the best WfMS according to the company’s technical requirements?

2. How to choose the best WfMS according to the company’s business process models (workflows)?

Page 7: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

5

Why do We Need a Benchmark? (end-users, vendors, developers)

1. How to choose the best WfMS according to the company’s technical requirements?

2. How to choose the best WfMS according to the company’s business process models (workflows)?

3. How to evaluate performance improvements during the WfMS’s development?

4. How to identify the WfMS’s bottlenecks?

Page 8: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

6

The BenchFlow Benchmarking Process

Workload Model, WfMS Configurations → Measure → Workload Metrics / KPIs
(Input/Process/Output Model)

Page 9: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

6

The BenchFlow Benchmarking Process

Workload Model, WfMS Configurations → Measure → Workload Metrics / KPIs
(Input/Process/Output Model)

Workload Mix: 80% / 20% split across the process models (tasks A, B, C, D)

Page 10: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

6

The BenchFlow Benchmarking Process

Workload Model (Workload Mix, Load Functions, Test Data), WfMS Configurations, Test Types → Measure → Workload Metrics / KPIs
(Input/Process/Output Model)

Workload Mix: 80% / 20% split across the process models (tasks A, B, C, D)

Page 11: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

6

The BenchFlow Benchmarking Process

Workload Model (Workload Mix, Load Functions, Test Data), WfMS Configurations, Test Types → Measure → Workload Metrics / KPIs
(Input/Process/Output Model)

Workload Mix: 80% / 20% split across the process models (tasks A, B, C, D)

Containers

Page 12: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

6

The BenchFlow Benchmarking Process

Workload Model (Workload Mix, Load Functions, Test Data), WfMS Configurations, Test Types → Measure → Workload Metrics / KPIs
(Input/Process/Output Model)

Workload Mix: 80% / 20% split across the process models (tasks A, B, C, D)

Performance Data: Process Instance Duration, Throughput

Containers

Page 13: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

Figure 4: Benchmarking Methodology Choreography (Vendor and BenchFlow)

1. Provide Benchmarking Methodology: BenchFlow sends a Benchmarking Methodology Agreement Proposal; the Vendor returns a Signed Agreement (agree on adding the Vendor’s WfMS to the benchmark).
2. Provide Containerized Distribution of WfMS: BenchFlow sends a Containerized WfMS Request; the Vendor provides the Containerized WfMS.
3. Provide Draft Benchmark Results and Verify the Benchmark Results: BenchFlow sends the Draft Benchmark Results with a Results Verification Request; the Vendor returns the Results Verification Outcome. If the results are not valid, the loop repeats.
4. Publish Benchmark Results: once the results are valid, BenchFlow publishes the Verified Benchmark Results to the Community.

…depending on the WfMS’s architecture. The DBMS Container can refer to an existing publicly available Container distribution. The containerized WfMS should be publicly available (e.g., at the Docker Hub registry [8]), or the BenchFlow team should be granted access to a private registry used by the vendor. The same applies to the Containers’ definition file, i.e., the Dockerfile (Turnbull, 2014, ch. 4). While private registries are a solution that can work with vendors of closed-source systems, they impact the reproducibility of the results. For each WfMS version to be included in the benchmark, there must be a default configuration, i.e., the configuration in which the WfMS Containers start without modifying any configuration parameters, except the ones required in order to correctly start the WfMS, e.g., the database connection. However, if vendors want to benchmark the performance of their WfMS with different configurations, for example the configuration provided to users as a “getting started” configuration, or production-grade configurations for real-life usage, they can also provide different configurations. To do that, the Containers must allow issuing the WfMS configurations through additional environment variables [9], and/or by specifying the volumes to be exposed [10] in order to access the configuration files. Exposing the volumes allows access, on the host operating system, to the files defined inside the Containers. Precisely, the WES Container has to, at least, enable the configuration of: 1) the used DB, i.e., DB driver, URL, username and password for the connection; 2) the WES itself; and 3) the logging level of the WES and of the application stack layers on which it may depend. Alternatively, instead of providing configuration details, the vendors can provide different Containers for different configurations. However, even in that case, enabling the DB configuration is required.

In order to access relevant data, all WfMS Containers have to specify the volumes to access the WfMS log files, and to access all the data useful to set up the system. For instance, if the WfMS defines examples as part of the deployment, we may want to remove those examples by overriding the volume containing them. Moreover, the WES Container (or the DBMS Container) has to create the WfMS’s DB and populate it with the data required to run the WfMS. The vendors need to provide the authentication configuration details of the WfMS components (e.g., the user with admin privileges for the DBMS). Alternatively, the vendor may simply provide the files needed to create and populate the DB.

[8] https://hub.docker.com/
[9] https://docs.docker.com/reference/run/#env-environment-variables
[10] https://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
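The configuration requirement above can be made concrete with a small sketch. The following Go program is an illustration, not taken from BenchFlow or any vendor: it shows how a containerized WES might read its database and logging configuration from environment variables injected at container start (e.g., via docker run -e). The variable names DB_DRIVER, DB_URL, DB_USER, DB_PASSWORD and LOG_LEVEL are hypothetical; each vendor would define its own. The same parameters could equally be read from a configuration file mounted through an exposed volume.

```go
// Sketch: a containerized WES reading its configuration from the
// environment, as the benchmarking methodology requires. The variable
// names below are hypothetical; each vendor defines its own.
package main

import (
	"fmt"
	"log"
	"os"
)

type wesConfig struct {
	DBDriver, DBURL, DBUser, DBPassword string
	LogLevel                            string
}

// getenv returns the value of key, or a default when the variable is unset,
// mirroring the "default configuration" required by the methodology.
func getenv(key, def string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return def
}

func loadConfig() (wesConfig, error) {
	cfg := wesConfig{
		DBDriver:   getenv("DB_DRIVER", "mysql"),
		DBURL:      getenv("DB_URL", ""),
		DBUser:     getenv("DB_USER", ""),
		DBPassword: getenv("DB_PASSWORD", ""),
		LogLevel:   getenv("LOG_LEVEL", "info"),
	}
	if cfg.DBURL == "" {
		// The database connection is one of the few parameters that must be
		// set for the WfMS Container to start correctly.
		return cfg, fmt.Errorf("DB_URL must be provided to start the WES")
	}
	return cfg, nil
}

func main() {
	cfg, err := loadConfig()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("starting WES with driver=%s url=%s log-level=%s",
		cfg.DBDriver, cfg.DBURL, cfg.LogLevel)
}
```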

7

Methodology + Framework: a Container-Based Methodology and Framework

BenchFlow Framework: architecture

[Diagram: the Faban harness and Faban Drivers drive the Test Execution against the WfMS (Web Service, Instance Database, DBMS), deployed in Containers on Servers; Monitors (with Adapters) and Collectors gather server-side data into Minio; Data Transformers and Analysers compute the Performance Metrics and Performance KPIs (Analyses).]

Page 14: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme


Provides Great Advantages for Automation and Reproducibility of Results, while Ensuring Negligible Performance Impact

Page 15: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

8

[CLOSER ’16] Container-centric Methodology for Benchmarking Workflow Management Systems.


Page 16: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

9

BenchFlow Benchmarking Methodology: requirements from the WfMS

Availability of APIs

• Deploy Process

• Start Process Instance

• Users API

• Web Service APIs

• Events APIs

Page 17: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

9

BenchFlow Benchmarking Methodology: requirements from the WfMS

Availability of APIs

• Deploy Process

• Start Process Instance

• Users API

• Web Service APIs

• Events APIs

• Workflow & Construct:

• Start Time

• End Time

• [Duration]

Availability of Timing Data

Page 18: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

9

BenchFlow Benchmarking Methodology: requirements from the WfMS

Availability of APIs

• Deploy Process

• Start Process Instance

• Users API

• Web Service APIs

• Events APIs

• Workflow & Construct:

• Start Time

• End Time

• [Duration]

Availability of Timing Data

Availability of Execution State: the state of the workflow execution, e.g., running, completed, error

Page 19: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

10

BenchFlow Framework: architecture

[Diagram: the Faban harness and Faban Drivers drive the Test Execution against the WfMS (Web Service, Instance Database, DBMS), deployed in Containers on Servers; Monitors (with Adapters) and Collectors gather server-side data into Minio; Data Transformers and Analysers compute the Performance Metrics and Performance KPIs (Analyses).]

benchflow [BPM ’15] [ICPE ’16]

Page 20: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

11

BenchFlow Framework: WfMSs’ specific characteristics

•Manages the Automatic WfMS deployment;

•Provides a “plugin” mechanism to add new WfMSs;

•Performs automatic Process Models deployment;

•Collects Client and Server-side data and metrics;

•Automatically computes performance metrics and statistics.

Page 21: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

12

Applications of the Methodology and Framework

[CAiSE ’16] Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns.

3 WfMSs, each deployed with MySQL
Workload: workflow patterns (e.g., sequencePattern: Empty Script 1 → Empty Script 2)
[Chart: load ramping from 0 to 1500 Instance Producers over the 10-minute test (0:00 to 10:00)]
Metrics: Engine Level, Process Level, Environment

Page 22: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

Micro-Benchmarking with Workflow Patterns

13

research questions

Selected three WfMSs: popular open-source WfMSs actually used in industry

1. What is the impact of individual or a mix of workflow patterns on the performance of each one of the benchmarked BPMN 2.0 WfMSs?

2. Are there performance bottlenecks for the selected WfMSs?

Page 23: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

14

Micro-Benchmarking with Workflow Patterns: workload mix

Sequence Pattern [SEQ] (model: sequencePattern): Empty Script 1 → Empty Script 2

Pattern Reference: N. Russell et al., Workflow Control-Flow Patterns: A Revised View, 2006

Page 24: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

14

Micro-Benchmarking with Workflow Patterns: workload mix

Sequence Pattern [SEQ] (model: sequencePattern): Empty Script 1 → Empty Script 2

Exclusive Choice and Simple Merge [EXC] (model: exclusiveChoicePattern): generate number → case_1: Empty Script 1 / case_2: Empty Script 2 → merge

Pattern Reference: N. Russell et al., Workflow Control-Flow Patterns: A Revised View, 2006

Page 25: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

15

Micro-Benchmarking with Workflow Patterns: workload mix

Arbitrary Cycle [CYC] (model: arbitaryCyclesPattern): generate 1 or 2 → case_1: Empty Script 1 / case_2: Empty Script 2 → i++ → loop back while i < 10, exit when i >= 10

Pattern Reference: N. Russell et al., Workflow Control-Flow Patterns: A Revised View, 2006

Page 26: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

16

Micro-Benchmarking with Workflow Patterns: workload mix

Parallel Split and Synchronisation [PAR] (model: parallelSplitPattern): Empty Script 1 and Empty Script 2 in parallel

Pattern Reference: N. Russell et al., Workflow Control-Flow Patterns: A Revised View, 2006

Page 27: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

16

Micro-Benchmarking with Workflow Patterns: workload mix

Explicit Termination Pattern [EXT] (model: implicitTerminationPattern): Empty Script 1 and Wait 5 Sec in parallel

Parallel Split and Synchronisation [PAR] (model: parallelSplitPattern): Empty Script 1 and Empty Script 2 in parallel

Pattern Reference: N. Russell et al., Workflow Control-Flow Patterns: A Revised View, 2006

Page 28: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

17

Micro-Benchmarking with Workflow Patterns: workload mix design decisions

1. Maximise the simplicity of the model expressing the workflow pattern

2. Omit the interactions with external systems by implementing all tasks as script tasks

Page 29: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

18

Load Functions

Micro-Benchmarking with Workflow Patterns: load function

Test Type: Load test

[Chart: number of active Instance Producers over time (min:sec), ramping up from 0 to u and then staying constant until 10:00; example for u=1500]
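The slide only sketches the load function’s shape, so the following Go sketch makes an assumption: a linear ramp-up to u Instance Producers during a short warm-up window, followed by a steady phase for the rest of the 10-minute test. The 30-second ramp-up is a guess based on the chart, not a value stated in the talk.

```go
// Sketch of a load function: linear ramp-up to u Instance Producers,
// then a steady phase for the rest of the test. The 30s ramp-up and the
// 10-minute test length are assumptions based on the chart.
package main

import (
	"fmt"
	"time"
)

// activeProducers returns how many Instance Producers should be active
// at time t since the start of the test.
func activeProducers(t time.Duration, u int, rampUp, total time.Duration) int {
	switch {
	case t < 0 || t > total:
		return 0
	case t < rampUp:
		return int(float64(u) * float64(t) / float64(rampUp))
	default:
		return u
	}
}

func main() {
	const u = 1500
	rampUp := 30 * time.Second
	total := 10 * time.Minute
	for _, t := range []time.Duration{0, 15 * time.Second, 30 * time.Second, 5 * time.Minute, 10 * time.Minute} {
		fmt.Printf("t=%-8v active producers: %d\n", t, activeProducers(t, u, rampUp, total))
	}
}
```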

Page 30: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C

Page 31: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)

Page 32: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)
O.S.: Ubuntu 14.04.01 (all three)
J.V.M.: Oracle Serv. 7u79 (all three)

Page 33: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)
O.S.: Ubuntu 14.04.01 (all three)
J.V.M.: Oracle Serv. 7u79 (all three)
App. Server: Ap. Tomcat 7.0.62 | Ap. Tomcat 7.0.62 | Wildfly 8.1.0.Final

Page 34: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)
O.S.: Ubuntu 14.04.01 (all three)
J.V.M.: Oracle Serv. 7u79 (all three)
App. Server: Ap. Tomcat 7.0.62 | Ap. Tomcat 7.0.62 | Wildfly 8.1.0.Final
Max Java Heap: 32 GB

Page 35: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)
O.S.: Ubuntu 14.04.01 (all three)
J.V.M.: Oracle Serv. 7u79 (all three)
App. Server: Ap. Tomcat 7.0.62 | Ap. Tomcat 7.0.62 | Wildfly 8.1.0.Final
Max Java Heap: 32 GB
Max DB Connections Number: 100

Page 36: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)
O.S.: Ubuntu 14.04.01 (all three)
J.V.M.: Oracle Serv. 7u79 (all three)
App. Server: Ap. Tomcat 7.0.62 | Ap. Tomcat 7.0.62 | Wildfly 8.1.0.Final
Max Java Heap: 32 GB
Max DB Connections Number: 100
WfMS’s Containers: official ones, if available on Docker Hub

Page 37: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

19

Micro-Benchmarking with Workflow Patterns: configurations, WfMS environments

WfMS A | WfMS B | WfMS C
MySQL: Community Server 5.6.26 (all three)
O.S.: Ubuntu 14.04.01 (all three)
J.V.M.: Oracle Serv. 7u79 (all three)
App. Server: Ap. Tomcat 7.0.62 | Ap. Tomcat 7.0.62 | Wildfly 8.1.0.Final
Max Java Heap: 32 GB
Max DB Connections Number: 100
WfMS’s Containers: official ones, if available on Docker Hub
WfMS’s Configuration: as suggested by the vendor’s documentation

Page 38: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

20

Micro-Benchmarking with Workflow Patterns: configurations, WfMS deployment

TEST ENVIRONMENT
Load Drivers: CPU 64 cores @ 1400 MHz, RAM 128 GB
WfMS: CPU 12 cores @ 800 MHz, RAM 64 GB
DBMS: CPU 64 cores @ 2300 MHz, RAM 128 GB

Page 39: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

20

Micro-Benchmarking with Workflow Patterns: configurations, WfMS deployment

TEST ENVIRONMENT
Load Drivers: CPU 64 cores @ 1400 MHz, RAM 128 GB
WfMS: CPU 12 cores @ 800 MHz, RAM 64 GB
DBMS: CPU 64 cores @ 2300 MHz, RAM 128 GB
Network: 10 Gbit/s, 10 Gbit/s

Page 40: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

20

Micro-Benchmarking with Workflow Patterns: configurations, WfMS deployment

TEST ENVIRONMENT
Load Drivers: CPU 64 cores @ 1400 MHz, RAM 128 GB
WfMS: CPU 12 cores @ 800 MHz, RAM 64 GB
DBMS: CPU 64 cores @ 2300 MHz, RAM 128 GB
Network: 10 Gbit/s, 10 Gbit/s
O.S.: Ubuntu 14.04.3 LTS (all three servers)

Page 41: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

20

Micro-Benchmarking with Workflow Patterns: configurations, WfMS deployment

TEST ENVIRONMENT
Load Drivers: CPU 64 cores @ 1400 MHz, RAM 128 GB
WfMS: CPU 12 cores @ 800 MHz, RAM 64 GB
DBMS: CPU 64 cores @ 2300 MHz, RAM 128 GB
Network: 10 Gbit/s, 10 Gbit/s
O.S.: Ubuntu 14.04.3 LTS (all three servers)
1.8.2 | 1.8.2 | 1.8.2

Page 42: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

21

Micro-Benchmarking with Workflow Patterns: performed experiments

Individual patterns

1. 1500 (max) concurrent Instance Producers starting each pattern [SEQ, EXC, CYC, EXT, PAR]

Mix of patterns

2. 1500 concurrent Instance Producers starting an equal number of pattern instances (20% mix of [SEQ, EXC, CYC, EXT, PAR])
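As an illustration of what “1500 concurrent Instance Producers starting an equal mix of patterns” can look like, the Go sketch below runs u producer goroutines, each of which repeatedly picks one of the five pattern models uniformly (20% each) and starts an instance. It is not BenchFlow code: startInstance is a placeholder for the WfMS-specific start-process-instance call that the load drivers would issue.

```go
// Sketch: u concurrent Instance Producers issuing a 20% mix of the five
// pattern models. startInstance stands in for the WfMS-specific
// "start process instance" API call.
package main

import (
	"context"
	"log"
	"math/rand"
	"sync"
	"time"
)

var patterns = []string{"SEQ", "EXC", "CYC", "EXT", "PAR"}

// startInstance is a placeholder for the real call to the WfMS Core API.
func startInstance(ctx context.Context, pattern string) error {
	time.Sleep(10 * time.Millisecond) // simulate the request round-trip
	return nil
}

func producer(ctx context.Context, id int, wg *sync.WaitGroup) {
	defer wg.Done()
	rng := rand.New(rand.NewSource(int64(id)))
	for ctx.Err() == nil {
		p := patterns[rng.Intn(len(patterns))] // uniform 20% mix
		if err := startInstance(ctx, p); err != nil {
			log.Printf("producer %d: %v", id, err)
		}
	}
}

func main() {
	const u = 1500
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	var wg sync.WaitGroup
	for i := 0; i < u; i++ {
		wg.Add(1)
		go producer(ctx, i, &wg)
	}
	wg.Wait()
}
```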

Page 43: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

21

Micro-Benchmarking with Workflow Patterns: performed experiments

Three runs for each execution of the experiments

Individual patterns

1. 1500 (max) concurrent Instance Producers starting each pattern [SEQ, EXC, CYC, EXT, PAR]

Mix of patterns

2. 1500 concurrent Instance Producers starting an equal number of pattern instances (20% mix of [SEQ, EXC, CYC, EXT, PAR])

Page 44: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

21

Micro-Benchmarking with Workflow Patterns: performed experiments

Three runs for each execution of the experiments. The data collected in the first minute of the execution has not been included in the analysis.

Individual patterns

1. 1500 (max) concurrent Instance Producers starting each pattern [SEQ, EXC, CYC, EXT, PAR]

Mix of patterns

2. 1500 concurrent Instance Producers starting an equal number of pattern instances (20% mix of [SEQ, EXC, CYC, EXT, PAR])

Page 45: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

21


Page 46: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

22

Computed Metrics and Statistics: engine level metrics

Number of Process Instances, Throughput, Execution Time, Database Size

[Diagram: the Load Driver (WfMS Users) drives the WfMS (Application Server, tasks A, B, C, D, Web Service), which persists state in the Instance Database via the DBMS]
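For illustration, engine-level metrics such as the number of process instances, the overall execution time, and the throughput in bp/sec can be derived from the per-instance start and end timestamps stored in the instance database; database size would instead be read from the DBMS. The Go sketch below assumes a hypothetical Instance record and is not the BenchFlow analysers’ actual code.

```go
// Sketch: deriving engine-level metrics (number of process instances,
// execution time, throughput) from per-instance timestamps. The Instance
// type is hypothetical; real data would come from the WfMS's database.
package main

import (
	"fmt"
	"time"
)

type Instance struct {
	Start, End time.Time
}

// engineMetrics returns the number of completed instances, the overall
// execution time (first start to last end), and the throughput in
// business processes per second (bp/sec).
func engineMetrics(instances []Instance) (n int, execTime time.Duration, throughput float64) {
	if len(instances) == 0 {
		return 0, 0, 0
	}
	first, last := instances[0].Start, instances[0].End
	for _, in := range instances {
		if in.Start.Before(first) {
			first = in.Start
		}
		if in.End.After(last) {
			last = in.End
		}
	}
	n = len(instances)
	execTime = last.Sub(first)
	throughput = float64(n) / execTime.Seconds()
	return n, execTime, throughput
}

func main() {
	t0 := time.Now()
	sample := []Instance{
		{Start: t0, End: t0.Add(80 * time.Millisecond)},
		{Start: t0.Add(10 * time.Millisecond), End: t0.Add(120 * time.Millisecond)},
	}
	n, d, tp := engineMetrics(sample)
	fmt.Printf("instances=%d execution time=%v throughput=%.2f bp/sec\n", n, d, tp)
}
```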

Page 47: Workflow Engine Performance Benchmarking with BenchFlow


Page 48: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

23

Micro-Benchmarking with Workflow Patterns: individual patterns, throughput (bp/sec)

            SEQ       EXC       EXT       PAR       CYC
WfMS A    1456.79   1417.17   1455.68   1433.12    327.99
WfMS B    1447.66   1436.03   1426.35   1429.65    644.02
WfMS C      63.31     49.04      0.38     48.89     13.46

(Instance Producers annotated on the slide: u=600, u=800, u=1500)

Page 49: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

24

Micro-Benchmarking with Workflow Patterns: individual patterns, WfMS C scalability for the SEQ pattern

WfMS C uses a synchronous instantiation API: load drivers must wait for workflows to end before starting new ones.

Page 50: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

24

Micro-Benchmarking with Workflow Patterns: individual patterns, WfMS C scalability for the SEQ pattern

WfMS C uses a synchronous instantiation API: load drivers must wait for workflows to end before starting new ones.

[Charts: Response Time and Throughput as a function of the number of Instance Producers u (500 to 2000)]

Page 51: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

24


We have found a bottleneck! (~60 bp/sec)

Page 52: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

25

Computed Metrics and Statistics: process level metrics

Throughput, Instance Duration (per process definition)

[Diagram: the Load Driver (WfMS Users) drives the WfMS (Application Server, Web Service), which persists state in the Instance Database via the DBMS]

Page 53: Workflow Engine Performance Benchmarking with BenchFlow


Page 54: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

26

Micro-Benchmarking with Workflow Patterns: individual patterns, mean (instance duration time)

Page 55: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

26

Micro-Benchmarking with Workflow Patterns: individual patterns, mean (instance duration time)

[SEQ] Sequence Pattern (model: sequencePattern): Empty Script 1 → Empty Script 2

Page 56: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

26

Micro-Benchmarking with Workflow Patterns: individual patterns, mean (instance duration time)

[EXT] Explicit Termination Pattern (model: implicitTerminationPattern): Empty Script 1 and Wait 5 Sec in parallel

Page 57: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

26

Micro-Benchmarking with Workflow Patterns: individual patterns, mean (instance duration time)

[PAR] Parallel Split and Synchronisation (model: parallelSplitPattern): Empty Script 1 and Empty Script 2 in parallel

Page 58: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

26

Micro-Benchmarking with Workflow Patterns: individual patterns, mean (instance duration time)

CYC Instance Producers: u=600

[CYC] Arbitrary Cycle (model: arbitaryCyclesPattern): generate 1 or 2 → case_1: Empty Script 1 / case_2: Empty Script 2 → i++ → loop back while i < 10, exit when i >= 10

Page 59: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

27

Computed Metrics and Statistics: environment metrics

CPU/RAM usage per second, IO operations and network usage per second

[Diagram: the Load Driver (WfMS Users) drives the WfMS (Application Server, Web Service), which persists state in the Instance Database via the DBMS]

Page 60: Workflow Engine Performance Benchmarking with BenchFlow


Page 61: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

28

Micro-Benchmarking with Workflow Patterns: individual patterns, mean (RAM) and mean (CPU) usage

[Charts: RAM and CPU usage per WfMS]

Page 62: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

29

Micro-Benchmarking with Workflow Patterns: performed experiments

Individual patterns

1. 1500 (max) concurrent Instance Producers starting each pattern [SEQ, EXC, CYC, EXT, PAR]

Mix of patterns

2. 1500 concurrent Instance Producers starting an equal number of pattern instances (20% mix of [SEQ, EXC, CYC, EXT, PAR])

Three runs for each execution of the experiments. The data collected in the first minute of the execution has not been included in the analysis.

Page 63: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

30

Micro-Benchmarking with Workflow Patterns: mix of patterns, duration, RAM, CPU

Page 64: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

31

Micro-Benchmarking with Workflow Patterns: summary

Even though the Executed Workflows are Simple

Page 65: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

31

Micro-Benchmarking with Workflow Patterns: summary

There are relevant differences in workflows’ duration and throughput among the WfMSs

Even though the Executed Workflows are Simple

Page 66: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

31

Micro-Benchmarking with Workflow Patterns: summary

There are relevant differences in workflows’ duration and throughput among the WfMSs

There are relevant differences in resource usage (CPU, RAM) among the WfMSs

Even though the Executed Workflows are Simple

Page 67: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

31

Micro-Benchmarking with Workflow Patterns: summary

There are relevant differences in workflows’ duration and throughput among the WfMSs

One WfMS has bottlenecks

There are relevant differences in resource usage (CPU, RAM) among the WfMSs

Even though the Executed Workflows are Simple

Page 68: Workflow Engine Performance Benchmarking with BenchFlow

Highlights

Page 69: Workflow Engine Performance Benchmarking with BenchFlow

Highlights

Benchmarking Process

Page 70: Workflow Engine Performance Benchmarking with BenchFlow

Highlights

Benchmarking Process

Benchmarking Methodology

Page 71: Workflow Engine Performance Benchmarking with BenchFlow

Highlights

Benchmarking Process

Benchmarking Methodology

BenchFlow Framework

Page 72: Workflow Engine Performance Benchmarking with BenchFlow

Highlights

Benchmarking Process

Benchmarking Methodology

BenchFlow Framework

Micro-Benchmark with Workflow Patterns

Page 73: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

33

• We want to characterise the Workload Mix using Real-World process models

• Share your executable BPMN 2.0 process models, even anonymised

Process Models

• We want to characterise the Load Functions using Real-World behaviours

• Share your execution logs, even anonymised

Execution Logs

• We want to add more and more WfMSs to the benchmark

• Contact us for collaboration, and BenchFlow framework support

WfMSs

Call for Collaboration: WfMSs, process models, process logs

Page 74: Workflow Engine Performance Benchmarking with BenchFlow

benchflow

[email protected]

http://benchflow.inf.usi.ch

WORKFLOW ENGINE PERFORMANCE BENCHMARKING WITH BENCHFLOW

Vincenzo Ferme, Faculty of Informatics, USI Lugano, Switzerland

Page 75: Workflow Engine Performance Benchmarking with BenchFlow

BACKUP SLIDES

Vincenzo Ferme, Faculty of Informatics, USI Lugano, Switzerland

Page 76: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

36

Published Work

[BTW ’15] C. Pautasso, V. Ferme, D. Roller, F. Leymann, and M. Skouradaki. Towards workflow benchmarking: Open research challenges. In Proc. of the 16th conference on Database Systems for Business, Technology, and Web, BTW 2015, pages 331–350, 2015.

[SSP ’14] M. Skouradaki, D. H. Roller, F. Leymann, V. Ferme, and C. Pautasso. Technical open challenges on benchmarking workflow management systems. In Proc. of the 2014 Symposium on Software Performance, SSP 2014, pages 105–112, 2014.

[ICPE ’15] M. Skouradaki, D. H. Roller, L. Frank, V. Ferme, and C. Pautasso. On the Road to Benchmarking BPMN 2.0 Workflow Engines. In Proc. of the 6th ACM/SPEC International Conference on Performance Engineering, ICPE ’15, pages 301–304, 2015.

Page 77: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

37

Published Work

[CLOSER ’15] M. Skouradaki, V. Ferme, F. Leymann, C. Pautasso, and D. H. Roller. “BPELanon”: Protect business processes on the cloud. In Proc. of the 5th International Conference on Cloud Computing and Service Science, CLOSER 2015. SciTePress, 2015.

[SOSE ’15] M. Skouradaki, K. Goerlach, M. Hahn, and F. Leymann. Application of Sub-Graph Isomorphism to Extract Reoccurring Structures from BPMN 2.0 Process Models. In Proc. of the 9th International IEEE Symposium on Service-Oriented System Engineering, SOSE 2015, 2015.

[BPM ’15] V. Ferme, A. Ivanchikj, C. Pautasso. A Framework for Benchmarking BPMN 2.0 Workflow Management Systems. In Proc. of the 13th International Conference on Business Process Management, BPM ’15, pages 251–259, 2015.

Page 78: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

38

Published Work

[BPMD ’15] A. Ivanchikj, V. Ferme, C. Pautasso. BPMeter: Web Service and Application for Static Analysis of BPMN 2.0 Collections. In Proc. of the 13th International Conference on Business Process Management [Demo], BPM ’15, pages 30–34, 2015.

[ICPE ’16] V. Ferme, and C. Pautasso. Integrating Faban with Docker for Performance Benchmarking. In Proc. of the 7th ACM/SPEC International Conference on Performance Engineering, ICPE ’16, 2016.

[CLOSER ’16] V. Ferme, A. Ivanchikj, C. Pautasso, M. Skouradaki, F. Leymann. A Container-centric Methodology for Benchmarking Workflow Management Systems. In Proc. of the 6th International Conference on Cloud Computing and Service Science, CLOSER 2016. SciTePress, 2016.

Page 79: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

[CAiSE ’16] M. Skouradaki, V. Ferme, C. Pautasso, F. Leymann, A. van Hoorn. Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns. In Proc. of the 28th International Conference on Advanced Information Systems Engineering, CAiSE ’16, 2016.

39

Published Work

[ICWE ’16] J. Cito, V. Ferme, H. C. Gall. Using Docker Containers to Improve Reproducibility in Software and Web Engineering Research. In Proc. of the 16th International Conference on Web Engineering, ICWE ’16, 2016.

[ICWS ’16] M. Skouradaki, V. Andrikopoulos, F. Leymann. Representative BPMN 2.0 Process Model Generation from Recurring Structures. In Proc. of the 23rd IEEE International Conference on Web Services, ICWS ’16, 2016.

Page 80: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

40

Published Work

[SummerSOC ’16] M. Skouradaki, T. Azad, U. Breitenbücher, O. Kopp, F. Leymann. A Decision Support System for the Performance Benchmarking of Workflow Management Systems. In Proc. of the 10th Symposium and Summer School On Service-Oriented Computing, SummerSOC ’16, 2016.

[OTM ’16] M. Skouradaki, V. Andrikopoulos, O. Kopp, F. Leymann. RoSE: Reoccurring Structures Detection in BPMN 2.0 Process Models Collections. In Proc. of On the Move to Meaningful Internet Systems Conference, OTM ’16, 2016. (to appear)

[BPM Forum ’16] V. Ferme, A. Ivanchikj, C. Pautasso. Estimating the Cost for Executing Business Processes in the Cloud. In Proc. of the 14th International Conference on Business Process Management, BPM Forum ’16, 2016. (to appear)

Page 81: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

41

Docker Performance

[IBM ’14] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio. An updated performance comparison of virtual machines and Linux containers. IBM Research Report, 2014.

“Our results show that containers result in equal or better performance than VMs in almost all cases.”

“Although containers themselves have almost no overhead, Docker is not without performance gotchas. Docker volumes have noticeably better performance than files stored in AUFS. Docker’s NAT also introduces overhead for workloads with high packet rates. These features represent a tradeoff between ease of management and performance and should be considered on a case-by-case basis.”

BenchFlow Configures Docker for Performance by Default

Page 82: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

42

Benchmarking Requirements

• Relevant

• Representative

• Portable

• Scalable

• Simple

• Repeatable

• Vendor-neutral

• Accessible

• Efficient

• Affordable

• K. Huppler, The art of building a good benchmark, 2009
• J. Gray, The Benchmark Handbook for Database and Transaction Systems, 1993
• S. E. Sim, S. Easterbrook et al., Using benchmarking to advance research: A challenge to software engineering, 2003

Page 83: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

43

BenchFlow Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)

CORE / NON-CORE

Page 84: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

43

BenchFlow Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)

CORE: Initialisation APIs (Deploy Process, Start Process Instance)

Page 85: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

43

BenchFlow Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)

CORE: Initialisation APIs (Deploy Process, Start Process Instance)
NON-CORE: User APIs (Create User, Create Group, Pending User Tasks, Claim Task, Complete Task)

Page 86: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

43

BenchFlow Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)

CORE: Initialisation APIs (Deploy Process, Start Process Instance)
NON-CORE: User APIs (Create User, Create Group, Pending User Tasks, Claim Task, Complete Task); Web Service APIs (Invoke WS)

Page 87: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

43

BenchFlow Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)

CORE: Initialisation APIs (Deploy Process, Start Process Instance)
NON-CORE: User APIs (Create User, Create Group, Pending User Tasks, Claim Task, Complete Task); Event APIs (Issue Event, Pending Event Tasks); Web Service APIs (Invoke WS)

Page 88: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

44

BenchFlow Benchmarking Methodology: requirements from the WfMS (WfMS Configurations)

Table 1: Summary of Core and Non-core APIs to be implemented by the WfMS

Core APIs
  Initialisation APIs:
    Deploy a process (min response data: Deployed process ID)
    Start a process instance (min response data: Process instance ID)

Non-core APIs
  User APIs:
    Create a user (min response data: User ID)
    Create a group of users (min response data: User group ID)
    Access pending tasks (min response data: Pending tasks IDs)
    Claim a task*
    Complete a task
  Event APIs:
    Access pending events (min response data: Pending events IDs)
    Issue events
  Web service APIs:
    Map tasks to Web service endpoints

* Optional depending on the WfMS implementation

2.1.2 System Under Test

The SUT refers to the system that is being tested for performance. In our case this is a WfMS which, according to the WfMC (Hollingsworth, 1995), includes the Workflow Enactment Service (WES) and any environments and systems required for its proper functioning, e.g., application servers, virtual machines (e.g., the Java Virtual Machine) and DBMSs. The WES is a complex middleware component which, in addition to handling the execution of the process models, interacts with users, external applications, and Web services. This kind of SUT offers a large set of configuration options and deployment alternatives. The DBMS configuration as well as the WES configurations (e.g., the granularity level of history logging, the DB connection options, the order of asynchronous jobs acquisition) may affect its performance. Moreover, WfMSs often offer a wide range of deployment alternatives (e.g., standalone, in a clustered deployment, behind a load balancer in an elastic cloud infrastructure) and target different versions of application server stacks (e.g., Apache Tomcat, JBoss).

To add a WfMS to the benchmark, we require the availability of certain Core and Non-core APIs. A summary of these is presented in Table 1.

The Core APIs are Initialisation APIs necessary for automatically issuing the simplest workload to the SUT in order to test the SUT’s performance. They include: 1) deploy a process and return as a response an identifier of the deployed process (PdID); and 2) start a process instance by using the PdID and return as a response the new instance identifier (PiID).

Depending on the execution language of the WfMS and the constructs that it supports, other Non-core APIs might be necessary for testing more complex workloads. For instance, if we are targeting BPMN 2.0 WfMSs we might also require the following APIs.

For applying workloads involving human tasks, the following User APIs are necessary: 1) create a user and return the identifier of the created user (UsID); 2) create a group of users, return the created group identifier (UgID), and enable adding users by using their UsIDs; 3) pending user/manual tasks: access all the pending user/manual task instances of a given user/manual task identified by its id (Jordan and Evdemon, 2011, sec. 8.2.1) as specified in the model serialization. We want to obtain all the pending tasks with the given id of all the process instances (PiIDs) of a given deployed process (PdID). The API has to respond with data enabling the creation of a collection that maps the process instances to the list of their pending tasks <PiID, UtIDs> and <PiID, MtIDs>; 4) claim a user/manual task identified by UtID/MtID, if tasks are not automatically assigned by the WfMS; and 5) complete a user/manual task identified by UtID/MtID by submitting the data required to complete the task.

To issue a workload containing process models with catching external events, the following Event APIs are necessary: 1) pending catching events/receive tasks: access the pending catching event/receive task instances of a given event/task identified by its id (Jordan and Evdemon, 2011, sec. 8.2.1) specified in the model serialization. We want to obtain all the pending catching events/receive tasks with the given id of all the process instances (PiIDs) of a given deployed process (PdID). The API has to respond with data enabling the creation of a collection that maps the process instances to the list of their pending catching events/receive tasks <PiID, CeIDs> and <PiID, RtIDs>; and 2) issue an event to a pending catching event/receive task identified by using CeID/RtID. We require the APIs to accept the data necessary to correlate the issued event to the correct process instance, e.g., a correlation key.

Finally, to be able to issue a workload defining interaction with Web services and/or containing throwing events, the WfMS has to support a binding mechanism to map each Web service task/throwing event to the corresponding Web service/throwing event endpoint. The WfMS should preferably allow specifying the mapping in the serialized version of the model, so that the binding can be added before deploying the process.

Since many WfMSs are offered as a service, it is safe to assume that many WfMSs expose what we call the Core APIs. In our experience with the systems we have evaluated so far (e.g., Activiti, Bonita BPM, Camunda, Imixs Workflow, jBPM), they support not only the Core APIs, but also the Non-core APIs. …


Page 89: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Framework: system under test

Docker Engine

Page 90: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Framework: system under test

Docker Engine, Containers

Page 91: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Framework: system under test

Docker Machine provides the Docker Engine; the Docker Engine runs the Containers

Page 92: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Framework: system under test

Docker Machine provides the Docker Engine; the Docker Engine runs the Containers; Docker Swarm

Page 93: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Framework: system under test

Docker Machine provides the Docker Engine on the Servers; Docker Swarm manages the Servers; the Containers run on the Servers

Page 94: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Framework: system under test

Docker Machine provides the Docker Engine on the Servers; Docker Swarm manages the Servers; the Containers run on the Servers; Docker Compose

Page 95: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

45

BenchFlow Frameworksystem under test

Docker Swarm

Docker Engine

Docker Compose

Docker Machine

prov

ides

manages

Containers

Servers

SUT’s Deployment Conf.

deploys

Page 96: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

46

Server-side Data and Metrics Collection

WfMS

Start Workflow

asynchronous execution of workflows

Page 97: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

[Diagram: the Load Driver issues Start requests on behalf of the Users to the WfMS (Application Server, tasks A, B, C, D, Web Service), which persists state in the Instance Database via the DBMS]

46

Server-side Data and Metrics Collection

WfMS

Start Workflow

asynchronous execution of workflows

Page 98: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme


46

Server-side Data and Metrics Collection

WfMS

Start Workflow

End

asynchronous execution of workflows

Page 99: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

47

Server-side Data and Metrics Collection: monitors

[Diagram: the Faban harness and Faban Drivers run the Test Execution against the WfMS, Web Service, and DBMS deployed in Containers on Servers]

Page 100: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

47

Server-side Data and Metrics Collection: monitors

MONITOR

Page 101: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

48

Server-side Data and Metrics Collection: monitors

Examples of Monitors: CPU usage, Database state

Monitors’ Characteristics:
• RESTful services
• Lightweight (written in Go)
• As non-invasive on the SUT as possible

Page 102: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

48

Server-side Data and Metrics Collection: monitors

CPU Monitor, DB Monitor
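A minimal sketch of what such a lightweight RESTful monitor could look like in Go follows (an illustration, not the actual BenchFlow monitor): it exposes a single HTTP endpoint that reports the 1-minute load average read from /proc/loadavg as a simple stand-in for CPU usage, keeping the footprint on the SUT small.

```go
// Sketch of a lightweight RESTful monitor (not the BenchFlow one): it
// exposes a single endpoint that reports the 1-minute load average read
// from /proc/loadavg as a simple stand-in for CPU usage.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
	"strings"
)

type sample struct {
	LoadAvg1m string `json:"loadavg_1m"`
}

func cpuHandler(w http.ResponseWriter, r *http.Request) {
	data, err := os.ReadFile("/proc/loadavg")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fields := strings.Fields(string(data))
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(sample{LoadAvg1m: fields[0]})
}

func main() {
	http.HandleFunc("/cpu", cpuHandler)
	log.Println("CPU monitor listening on :9090")
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```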

Page 103: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

49

Server-side Data and Metrics Collection: collect data

[Diagram: the Faban harness and Faban Drivers run the Test Execution against the WfMS, Web Service, and DBMS deployed in Containers on Servers]

Page 104: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

49

Server-side Data and Metrics Collection: collect data

[Diagram: COLLECTORS gather data from the SUT Containers into Minio and the Instance Database, where the Analyses read it]

Page 105: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

50

Server-side Data and Metrics Collection: collect data

Examples of Collectors: Container’s stats (e.g., CPU usage), Database dump, Application logs

Collectors’ Characteristics:
• RESTful services
• Lightweight (written in Go)
• Two types: online and offline
• Buffer data locally

Page 106: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

50

Server-side Data and Metrics Collection: collect data

Stats Collector, DB Collector
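For illustration, an offline collector could follow the sketch below (again not BenchFlow code): samples are buffered locally while the test runs, and only shipped when a /collect endpoint is called at the end, so the SUT is not disturbed during the benchmark. readSample is a placeholder for reading container stats.

```go
// Sketch of an offline collector: samples are buffered locally during the
// test and returned in one shot when /collect is called at the end, so no
// data is shipped while the benchmark is running. readSample is a placeholder.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync"
	"time"
)

type statsSample struct {
	Timestamp time.Time `json:"ts"`
	Value     float64   `json:"value"`
}

var (
	mu     sync.Mutex
	buffer []statsSample
)

// readSample stands in for reading container stats (e.g., CPU usage).
func readSample() statsSample {
	return statsSample{Timestamp: time.Now(), Value: 0.42}
}

func main() {
	// Buffer one sample per second locally for the whole test.
	go func() {
		for range time.Tick(time.Second) {
			s := readSample()
			mu.Lock()
			buffer = append(buffer, s)
			mu.Unlock()
		}
	}()

	// /collect ships the whole buffer once, typically after the test ends.
	http.HandleFunc("/collect", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		defer mu.Unlock()
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(buffer)
	})
	log.Fatal(http.ListenAndServe(":9091", nil))
}
```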

Page 107: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

51

Synchronisation with the Analyses: schedule analyses

[Diagram: a high-throughput distributed messaging system connects the Test Execution side (Faban harness and Drivers, SUT Containers, Stats Collector, DB Collector) with the Analyses side (Minio, Instance Database)]

Page 108: Workflow Engine Performance Benchmarking with BenchFlow


Page 109: Workflow Engine Performance Benchmarking with BenchFlow


Page 110: Workflow Engine Performance Benchmarking with BenchFlow


Page 111: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

51

Synchronisation with the Analyses: schedule analyses

[Diagram: at the end of the Test Execution the harness triggers COLLECT on the Collectors (Stats Collector, DB Collector); the Collectors Publish the collected data to a high-throughput distributed messaging system; the Analyses Subscribe to it and Read the data from Minio and the Instance Database]

Page 112: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

52

Performance Metrics and KPIs: amount of data

REALISTIC

DATA

Number of

Repetitions

1

1053

Page 113: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

52

Performance Metrics and KPIs: amount of data

REALISTIC

DATA

Number of

Repetitions

1

1053

Number of WfMSs

1

Page 114: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

53

Micro-Benchmarking with Workflow Patterns: mix of patterns, throughput (bp/sec)

            MIX
WfMS A    1061.27
WfMS B    1402.33
WfMS C       1.78

Page 115: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

54

Computed Metrics and Statistics: feature level metrics

Throughput, Instance Duration (per construct: type, name)

[Diagram: the Load Driver (WfMS Users) drives the WfMS (Application Server, Web Service), which persists state in the Instance Database via the DBMS]

Page 116: Workflow Engine Performance Benchmarking with BenchFlow


Page 117: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

55

Computed Metrics and Statistics: interactions metrics

Response Time, Latency

[Diagram: the Load Driver (WfMS Users) drives the WfMS (Application Server, Web Service), which persists state in the Instance Database via the DBMS]

Page 118: Workflow Engine Performance Benchmarking with BenchFlow


Page 119: Workflow Engine Performance Benchmarking with BenchFlow

Vincenzo Ferme

56

Computed Metrics and Statistics: descriptive and homogeneity statistics

• Descriptive Statistics (e.g., Mean, Confidence Interval, Standard Deviation, Percentiles, Quartiles, …)

• Coefficient of Variation across trials

• Levene Test on trials
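For illustration, the Go sketch below computes per-trial means, the coefficient of variation across trials, and the W statistic of Levene’s test (the deviations-from-the-mean variant) for checking homogeneity of variances across the three runs. The sample durations are made up, and in practice W would be compared against an F-distribution quantile to decide the test.

```go
// Sketch: descriptive statistics across trials (mean, standard deviation,
// coefficient of variation) and the W statistic of Levene's test for
// homogeneity of variances. The sample durations below are made up.
package main

import (
	"fmt"
	"math"
)

func mean(xs []float64) float64 {
	s := 0.0
	for _, x := range xs {
		s += x
	}
	return s / float64(len(xs))
}

func stddev(xs []float64) float64 {
	m, s := mean(xs), 0.0
	for _, x := range xs {
		s += (x - m) * (x - m)
	}
	return math.Sqrt(s / float64(len(xs)-1))
}

// levene computes Levene's W statistic over k groups (the trials).
func levene(groups [][]float64) float64 {
	k, n := len(groups), 0
	z := make([][]float64, k)
	zbars := make([]float64, k)
	for i, g := range groups {
		m := mean(g)
		z[i] = make([]float64, len(g))
		for j, x := range g {
			z[i][j] = math.Abs(x - m) // absolute deviations from the group mean
		}
		zbars[i] = mean(z[i])
		n += len(g)
	}
	var all []float64
	for _, zi := range z {
		all = append(all, zi...)
	}
	zbar := mean(all)
	num, den := 0.0, 0.0
	for i := range groups {
		num += float64(len(groups[i])) * (zbars[i] - zbar) * (zbars[i] - zbar)
		for _, zij := range z[i] {
			den += (zij - zbars[i]) * (zij - zbars[i])
		}
	}
	return (float64(n-k) / float64(k-1)) * num / den
}

func main() {
	// Three trials of instance durations (ms), made-up numbers.
	trials := [][]float64{
		{101, 98, 105, 99},
		{102, 97, 103, 100},
		{110, 95, 108, 101},
	}
	means := make([]float64, len(trials))
	for i, t := range trials {
		means[i] = mean(t)
	}
	cv := stddev(means) / mean(means) // coefficient of variation across trials
	fmt.Printf("trial means: %v  CV across trials: %.4f  Levene W: %.4f\n", means, cv, levene(trials))
}
```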