
Innovative Experiments Call identifier: F4Fp-01


Fed4FIRE+ Experiment Report

Large Scale EMPATIA Evaluation (EMPATIAxxl)

Date of preparation of your proposal: 31/10/2017

Version number: 1.0

Your organisation name: OneSource, Consultoria Informática Lda.

Your organisation address: Urb. Ferreira Jorge, Lote 14, 1º D 3040-016 Coimbra, Portugal

Name of the coordinating person: Luis Cordeiro

Coordinator telephone number: (+351) 964 264 868

Coordinator email: [email protected]


Section A Project Summary

Participatory budgeting (PB) is one of the most successful civic innovations of the last quarter-century. PB represents a powerful tool for citizens to join in the essential tasks of governing, not only as voters but also as decision-makers. The H2020 EMPATIA Project [1] aims to increase the participation of citizens by designing, evaluating and making publicly available an advanced ICT platform for supporting participatory budgeting processes. The EMPATIA platform is an ICT platform developed in the context of this project to support the transparency, trustworthiness and political sustainability of PB processes, while managing and aggregating voting processes and content analysis through advanced data visualization and infographics mechanisms. The EMPATIA platform includes several components released as open source [2], which have been designed to be scalable, redundant, and supportive of fault tolerance and different deployment models, including all-in-one (e.g. deployment in a single server), cloud deployment and Software as a Service (SaaS) (e.g. with dedicated servers per component, or running on public clouds like Amazon Web Services, Windows Azure, etc.).

Currently, the EMPATIA platform already supports real-world participatory processes in several European cities, including larger cities such as Lisbon and Milan (each with hundreds of thousands of users engaged). Additionally, there are ongoing negotiations to use the EMPATIA platform in municipalities and regional authorities of other countries, such as the USA (Boston, Chicago and Baltimore), Brazil, Peru, Mozambique and Colombia.

The Large Scale EMPATIA Evaluation Project (EMPATIAxxl) herein reported has evaluated the ICT platform previously developed in the EMPATIA project for operation in highly demanding scenarios, such as municipalities or regions with millions of citizens and/or significant bursts of demand. In particular, EMPATIAxxl included experimentation with the EMPATIA platform (i) with different deployment models of the EMPATIA servers (all-in-one, cloud); (ii) using multiple and simultaneous clients connected to different access technologies; and (iii) performing different operations on the platform, representative of real-world scenarios. The large-scale experimentation conducted in several testbeds enabled the analysis and evaluation of the EMPATIA platform in controlled large-scale scenarios, producing a systematic understanding of the key factors affecting its performance and scalability. The findings of the EMPATIAxxl experiment made it possible to further enhance several components of the EMPATIA platform, so that they can accommodate higher levels of load. Such enhancements have been possible through the large-scale experimentation framework provided by Fed4FIRE+.

Succinctly, the EMPATIAxxl experiment had the following goals:

1. To validate and improve the EMPATIA platform in large-scale deployments.
2. To validate the performance of the EMPATIA platform in all-in-one deployment models.
3. To validate the performance of the EMPATIA platform in cloud deployment models.
4. To validate the EMPATIA platform when producing/using large amounts of heterogeneous data.
5. To validate the EMPATIA components within different operations.
6. To produce recommendations to the underlying (Fed4FIRE+) experimentation platforms.

[1] EMPATIA Project website: https://www.empatia-project.eu/
[2] EMPATIA platform available at: https://github.com/EMPATIA


Section B Detailed Description

B.1 Concept, Objectives, Set-up and Background

B.1.1 Concept & objectives

The concept of the EMPATIAxxl project focused on the validation of the EMPATIA platform through experimentation on testbeds representative of real-world highly demanding scenarios (e.g. a high number of simultaneous users). This validation work considered the performance of the EMPATIA components in distinct deployment models and under loads representing a high number of users simultaneously accessing the platform and/or using different options. The performance evaluation performed in this experiment included two main perspectives: the quality perceived by the user and server-side metrics. The quality perceived by the user includes metrics such as the number of successful/failed responses and the average response time for successful/failed responses. Server-side metrics include, for instance, the number of requests per second, as well as the impact on the system measured through overhead metrics such as memory consumption and CPU usage, among others.
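To make the server-side perspective concrete, the sketch below samples CPU and memory usage at regular intervals and writes them to a CSV file. It is only an illustrative example assuming the psutil package; in the experiment itself such metrics were collected by the EMPATIA Monitoring component and the testbed tooling, not by this script.

```python
# Illustrative server-side overhead sampler (CPU and memory), assuming psutil.
import csv
import time

import psutil


def sample(duration_s: int = 60, interval_s: int = 1,
           out_path: str = "server_overhead.csv") -> None:
    """Periodically record CPU and memory usage while a load test is running."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
        end = time.time() + duration_s
        while time.time() < end:
            # cpu_percent(interval=...) blocks for the interval, pacing the loop.
            writer.writerow([time.time(),
                             psutil.cpu_percent(interval=interval_s),
                             psutil.virtual_memory().percent])


if __name__ == "__main__":
    sample(duration_s=10)
```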

The goals of the conducted work included the following aspects:

1. Validation and improvement of the EMPATIA platform in large-scale deployments:

1.1. Determine the bottleneck components of the EMPATIA platform, based on expected loads.

1.2. Assess the performance of the EMPATIA platform within a variable number of clients.

1.3. Assess the impact of the EMPATIA platform on clients using different access networks.

2. Validation of the EMPATIA platform performance in all-in-one deployment models:

2.1. Determine the best configurations for each component in all-in-one deployment models.

2.2. Determine the optimal performance of EMPATIA components, considering cost-performance ratios.

3. Validation of the EMPATIA platform performance in cloud deployment models:

3.1. Determine the best distributions of EMPATIA components, based on expected loads.

3.2. Determine the best configurations for each component, in cloud deployment models.

3.3. Determine the optimal performance of EMPATIA components, considering cost-performance ratios.

4. Validation of the EMPATIA platform for dealing with large amounts of heterogeneous data:

4.1. Determine the performance of the EMPATIA platform for administrative/manager users to upload diverse types of content (e.g. images, video, files and others).

4.2. Determine the performance of the EMPATIA platform for bulk registration of users (e.g. for voting sessions).

4.3. Determine the performance of fraud detection mechanisms in voting sessions.

4.4. Determine the performance of analytics components in data visualization and aggregation processes.

5. Validation of EMPATIA components for different operations:


5.1. Produce and disseminate guidelines for scalable and efficient ICT platforms for PB.

5.2. Produce and disseminate guidelines for the adaptation of ICT platforms for PB and e-voting to cloud infrastructures, considering load-balancing and operations & maintenance aspects.

5.3. Publish scientific papers disseminating the results of the experiments.

6. Recommendations to the experimentation platforms:

6.1. Document issues/recommendations to improve the Fed4FIRE+ experimentation platforms.

6.2. Take advantage of the planned scientific publications (cf. point 5.3) to disseminate the potential interest of the platform for similar future experiments.

Most of these goals were completed during the lifecycle of the project and are reported here. A few others, such as goal 5.3 (scientific papers), will span beyond the formal end of the experiment, since the required process (preparing the experiments, collecting and analysing the results, writing the paper, submitting the paper, receiving the acceptance notification and finally publishing the paper) takes longer than the duration of the project.

B.1.2 Set-up of the experiment

The EMPATIA platform includes several components, as summarized in Table 1.

Table 1 - EMPATIA components

Component Name: Description

Analytics: Component supporting real-time and batch/deferred analytics.
Empatia: Manages authentication and authorization of users, supporting JSON Web Tokens (JWT).
PAD: Handles the creation of participation, collaboration and communities, such as forums, ideas and discussions.
Content Management: Handles all the functionalities related to static content, such as sites, pages, menus, news and other types of content.
Orchestrator: Handles the interaction between all the other components.
Projects: Manages the lifecycle of projects.
OpenData: Manages the exporting and importing of data models, for instance for open-data repositories like CKAN.
Events: Provides the mechanisms to organize and manage events like conferences and workshops.
Design: Component that manages the multichannel participation models.
Files: Component that manages the operations on files, for uploading pictures, photos or other kinds of documents.
Monitoring: Supports monitoring and logging activities of EMPATIA components.
Notifications: Handles notifications of diverse types, including email, SMS and others.
Questionnaires: Component that handles questionnaires and surveys.
Voting: Component that handles all the votes.
Web User Interface (WUI): Web user interface, acting as the frontend of EMPATIA.
External tools: Components from third parties integrated in the EMPATIA platform; examples include Piwik and Google reCAPTCHA.


The evaluation conducted in EMPATIAxxl targeted a specific set of components, as illustrated in Figure 1, with special focus on the WUI and two core components (Empatia and Voting). The Files and Monitoring components have also been included in the experimentation, since they are important parts of the regular usage of the platform: logging is essential to understand and identify possible problems, and files such as images are heavily used in current website designs.

Figure 1 - EMPATIA components included in EMPATIAxxl experiment

One of the key strengths of the EMPATIA platform is its support for flexible deployment. All the components can be deployed in a single instance or, alternatively, on distinct machines, with one or more components per machine. Accordingly, one of the devised goals included the assessment of EMPATIA under distinct deployment models:

• All-in-One: components are installed in a single server.
• Components per physical server: each component has its own server.

Besides the deployment scenarios, load balancing and fault tolerance support have also been considered, leading to the deployment strategies summarized in Table 2.

Table 2 - Deployment models considered in the experimentation

Deployment Strategy: Description

SA – Single All-in-one: All the components are deployed in a single server.
SS – Single Split: Each component is deployed in a specific server.
CA – Cluster All-in-one: Multiple instances of a server where all the components are deployed. A proxy performs load balancing.

The associated database instances are also configured with distinct approaches: either as a single instance (SDB) or as a cluster (CDB). Figure 2 depicts, for example, the deployment of the combined CA-CDB scenario (Cluster All-in-One/Clustered Database), where the database instances are configured as a cluster with three nodes, and three server instances are deployed (each with all the EMPATIA components).

(Figure 1 depicts the Web User Interface, the EMPATIA API (public interface), the EMPATIA API (inter components), the EMPATIA external tools, and the Authentication, PAD, Analytics, Questionnaires, Voting, Content Management, Design, Files, Monitoring, Notifications, Events, Orchestrator, Projects and OpenData components.)


Figure 2 - Example of CA-CDB deployment scenario
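As a purely illustrative aid, the sketch below expresses the CA-CDB topology of Figure 2 as a small data structure (with hypothetical node names) that could drive provisioning or configuration scripts. It is not the actual jFed/RSpec description used in the experiment.

```python
# Minimal, illustrative description of the CA-CDB scenario: three application
# servers (each running all EMPATIA components), a three-node database cluster,
# and proxies that load-balance the web servers and the database.
CA_CDB_TOPOLOGY = {
    "web_proxy": {"role": "load balancer for the application servers"},
    "db_proxy": {"role": "load balancer for the database cluster"},
    "app_servers": [
        {"name": f"app{i}", "components": "all EMPATIA components (all-in-one)"}
        for i in range(1, 4)
    ],
    "db_cluster": [
        {"name": f"db{i}", "role": "database cluster node"} for i in range(1, 4)
    ],
}

if __name__ == "__main__":
    # Print a simple inventory that provisioning scripts could consume.
    for group, nodes in CA_CDB_TOPOLOGY.items():
        print(group, "->", nodes)
```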

The evaluation in the different scenarios considers several metrics, as summarized in Table 3.

Table 3 – Evaluation Metrics

Metric: Description

Connection time (ms): Time to establish the connection with the WUI components, including the TCP and SSL handshake.
Latency (ms): Time between the first request and the reception of the first response.
Server Processing Time (ms): Time taken by a server to reply to a request, determined as the difference between Latency and Connection Time.
Requests with Errors (%): Percentage of request responses with error-associated codes (e.g. 40X, 50X), representing requests which, for some reason, could not be processed by the platform (e.g. due to overload) and need to be repeated.
Elapsed time (ms): Time between the first request and the reception of the last response.
Response Messages and Codes: HTTP codes of response messages sent by the EMPATIA components.
Bytes in request and reply messages: Size of request and reply messages, in bytes.

The metrics were measured using the Apache JMeter tool [3], which makes it possible to simulate a heavy load on a server through the replication of requests over a variety of protocols, such as HTTP, HTTPS and SOAP/REST. With JMeter, a test plan is defined, including all the steps/items that must be executed. Some of these items are summarized in Table 4.

[3] Apache JMeter available at: http://jmeter.apache.org/


Table 4 – Configuration items in JMeter

Item: Description

Thread Group: Item that simulates the requests of concurrent users.
HTTP Request Defaults: Item with settings regarding the address of the server, the ports and the pages to test (e.g. URL).
Regular Expression Extractor: Item for extracting information from replies (for instance, tokens after a successful login).
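The sketch below illustrates, in Python, what these JMeter items automate: a pool of concurrent users (Thread Group), a common base URL (HTTP Request Defaults) and the extraction of a token from the login reply (Regular Expression Extractor). The base URL, endpoints and response format are hypothetical; the experiment itself used Apache JMeter test plans, not this script.

```python
# Illustrative stand-in for the JMeter test plan, assuming a hypothetical
# base URL, login endpoint and JSON token field.
import re
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://empatia.example.org"   # hypothetical; plays the role of HTTP Request Defaults
CONCURRENT_USERS = 50                      # Thread Group size: 50, 100, 500 or 1,000 in the experiment


def one_user(user_id: int) -> dict:
    session = requests.Session()
    start = time.perf_counter()
    # Login and extract the session token from the reply,
    # mimicking the Regular Expression Extractor item.
    reply = session.post(f"{BASE_URL}/login",
                         data={"user": f"user{user_id}", "password": "secret"})
    match = re.search(r'"token"\s*:\s*"([^"]+)"', reply.text)
    headers = {"Authorization": f"Bearer {match.group(1)}"} if match else {}
    # Request the Home page and record simple client-side metrics.
    home = session.get(f"{BASE_URL}/", headers=headers)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"user": user_id, "status": home.status_code, "elapsed_ms": elapsed_ms}


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(one_user, range(CONCURRENT_USERS)))
    errors = sum(1 for r in results if r["status"] != 200)
    print(f"error ratio: {100 * errors / len(results):.1f}%")
```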

The number of users simultaneously performing requests was set to 50, 100, 500 and 1,000. These different numbers of simultaneous users were employed against the configuration present in the database of the platform. The database of the EMPATIA platform was configured with the following parameters (for a more detailed analysis of the specific meaning of these parameters in the scope of the EMPATIA platform, the reader is referred to the public deliverables of the EMPATIA project [4]):

• 30k users
• 1k ideas
• 60k votes
• 1 Entity
• 1 Site
• 1 Community Building (CB)
• 10 Parameters configured in the platform
• Support for user levels
• Support for different pages

These parameters emulate a real-world scenario based on the city of Cascais, Portugal, which has a population of 200k inhabitants and had a similar level of participation from its inhabitants in its last participatory process. Nevertheless, it should be noted that the number of users simultaneously accessing the platform in an active manner (up to 1,000) greatly exceeds the levels observed so far in real-world scenarios and is expected to exceed peak demand in much larger scenarios, with several millions of users.
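For illustration, the following sketch generates synthetic seed data with the volumes listed above (30k users, 1k ideas, 60k votes) as CSV files. The file layout and column names are hypothetical and do not reflect the actual EMPATIA database schema, which is described in the project deliverables.

```python
# Illustrative generation of synthetic seed data for load testing.
# Column names and file layout are hypothetical, not the EMPATIA schema.
import csv
import random

N_USERS, N_IDEAS, N_VOTES = 30_000, 1_000, 60_000

with open("users.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["user_id", "email"])
    for uid in range(1, N_USERS + 1):
        writer.writerow([uid, f"user{uid}@example.org"])

with open("ideas.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["idea_id", "title"])
    for iid in range(1, N_IDEAS + 1):
        writer.writerow([iid, f"Idea {iid}"])

with open("votes.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["vote_id", "user_id", "idea_id"])
    for vid in range(1, N_VOTES + 1):
        # Each vote references a random user and a random idea.
        writer.writerow([vid, random.randint(1, N_USERS), random.randint(1, N_IDEAS)])
```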

Some of the functionalities included in the experiment are available without requiring the user to log in, while others are only available in authenticated mode, after the user has logged in to the platform. To vote, for instance, the user needs to be logged in to the platform.

The support for different types of pages aims to assess the performance of the different components of the platform. Table 5 presents the pages that were tested in the EMPATIAxxl experiment and the involved components.

The tools used in the experimentation included jFed, used to set up the different scenarios and to configure the servers and services. The first step of the experiment was to configure all the components of EMPATIA in different images, as per the deployment models presented in Table 2. The experimentation was performed in the testbeds of Imec, including Virtual Wall 1 and Virtual Wall 2. Both servers and clients were deployed in these testbeds.

[4] https://www.empatia-project.eu

Table 5 – EMPATIA pages used in the experiment (and involved components)

Page | Description | Involved Components | Associated mode(s)

Home Page | Main page of the EMPATIA platform | WUI, Empatia, Files | Anonymous, Authenticated
List of Ideas | Page with a static list of ideas | WUI, Empatia, Files | Anonymous, Authenticated
Details of Idea | Page with the details of a specific idea, including images | WUI, Empatia, Files | Anonymous, Authenticated
Vote | Page to vote on a specific proposal | WUI, Empatia, Voting | Authenticated

B.1.3 Background / Motivation

OneSource is an SME that, among other services, provides tailored solutions and consulting and outsourcing services in the field of IT management activities. OneSource includes in its service portfolio the development, customization and support of Participatory Budgeting platforms. OneSource was the core developer of the EMPATIA PB platform, which is to be made freely available as open source, and monetizes this platform by providing customization, deployment and support services to municipalities, governments and other entities promoting PB processes (as well as to specialized PB consulting companies that use EMPATIA in the PB services they sell to third parties).

The output of this experiment has an impact on distinct areas of the company's business:

1. The results of the experimentation consolidate the know-how on PB platforms such as EMPATIA, in terms of scalability to handle high demand peaks introduced by concurrent clients using the platform. This is relevant since there is currently a trend towards using PB platforms in larger scenarios (e.g. nationwide processes, increased adoption in large cities) and with shorter voting cycles, where this becomes a relevant issue.

2. The experiment results are important for understanding and improving the performance of the EMPATIA platform.

3. The results evaluate the impact of using distinct deployment models: single all-in-one (traditional deployment in bare-metal servers), single split, and cluster all-in-one for improved failure tolerance.

4. Results evaluate the different communication steps that are required between the different components of the EMPATIA platform, providing useful insights on how the platform can be improved.

By experimenting with highly demanding scenarios, OneSource expects to assess the advantages, scalability and performance of the EMPATIA platform, thus increasing its potential customer base.


B.2 Technical Results & Lessons Learned

The experimentation involved different steps, as detailed in Table 6.

Table 6 – Steps involved in the EMPATIAxxl Experimentation

Step Description

#1 Set up the experiment in the jFed tool, for the vWall1 and vWall2 testbeds. Configure the users performing the experiment in the Fed4FIRE portal.

#2

Instantiate an experiment with all the nodes and install (and configure) the following software components:

• client: Apache JMeter software
• WUI: EMPATIA WUI component
• Logs: EMPATIA Monitoring component
• Events: EMPATIA Events component
• EMPATIA: EMPATIA Empatia component
• Q: EMPATIA Questionnaire component
• Analytics: EMPATIA Analytics component
• Files: EMPATIA Files component
• Notify: EMPATIA Notification component
• Vote: EMPATIA Voting component
• DB nodes: Nodes with the database
• Proxy nodes: Nodes to load-balance the database and web servers

#3 Make software images of the different nodes, according to the deployment model. This step also included feeding the required information into the DB nodes, including the ideas, users, entity and other settings related to the parameters of the platform.

#4 Develop the test scripts to perform the different tests with Apache JMeter (cf. Table 4).

#5 Run preliminary tests to ensure that the test plans are correctly implemented using jFed. Build the process for analysing the results.

#6 Perform experiments, analyse final results and document the final report.
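As an illustration of the result-analysis step, the sketch below aggregates a JMeter CSV result file into the report metrics: average latency, server processing time (Latency minus Connect) and error ratio. The file name and the assumption that the Latency, Connect and responseCode columns are present in the CSV are hypothetical details, not the exact analysis pipeline used in the experiment.

```python
# Illustrative aggregation of a JMeter CSV result file into the report metrics.
# Assumes the CSV contains the Latency, Connect and responseCode columns.
import csv
from statistics import mean


def summarise(path: str) -> dict:
    """Compute average latency, server processing time and error ratio."""
    latencies, processing, errors, total = [], [], 0, 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            total += 1
            latency = int(row["Latency"])        # time to first response byte (ms)
            connect = int(row["Connect"])        # TCP/SSL handshake time (ms)
            latencies.append(latency)
            processing.append(latency - connect)  # Server Processing Time = Latency - Connection Time
            if row["responseCode"] != "200":      # anything other than HTTP 200 counts as an error
                errors += 1
    return {
        "avg_latency_ms": mean(latencies),
        "avg_server_processing_ms": mean(processing),
        "error_ratio_pct": 100 * errors / total,
    }


if __name__ == "__main__":
    print(summarise("results_CA-CDB_1000.csv"))  # hypothetical file name
```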

The different tests were designed according to the following experiment goals:

• Goal #1: Validation and improvement of the EMPATIA platform in large-scale deployments.

• Goal #2: Validation of the EMPATIA platform performance in all-in-one deployment models.

• Goal #3: Validation of the EMPATIA platform performance in cloud deployment models.

• Goal #4: Validation of the EMPATIA platform when producing/using large amounts of heterogeneous data.

• Goal #5: Validation of the EMPATIA components within different operations.


• Goal #6: Recommendations to the experimentation platform.

Goals #1, #2 and #3 are closely associated, as they focus on the impact that the deployment models have on the EMPATIA platform. The results validating the achievement of these goals are analysed together, considering the latency and error-rate metrics.

The latency and the server processing time, depicted in Figure 3 and Figure 4 for the Home page, vary according to the number of concurrent users. All the scenarios present a higher latency with a high number of concurrent users (e.g. 01-SA-SDB, 02-SA-CDB). The performance distinction between the deployment models is not as evident for this page as it is, for instance, for the “Detail Idea” page, since this page has a low volume of data. In fact, due to the small size of the page, the load balancing introduced in the cluster scenarios adds more overhead, leading to higher delays in retrieving the content. For instance, considering 500 simultaneous users, all the DB cluster (x-CDB) scenarios have higher latency values. These results suggest that database load balancing may decrease performance in terms of the time required to retrieve content.

Figure 3 - Latency of Home page

Figure 4 - Server processing time of Home page requests

On the other hand, when considering pages with more content and higher volumes of data, such as the “Detail of an Idea” page, the observed values for the Latency and Server Processing Time metrics differ, as shown in Figure 5 and Figure 6, respectively. At first glance, the performance difference introduced by the load-balanced database scenarios (x-CDB) becomes more evident, since load balancing is able to provide content in reduced time intervals. Considering the case with 1,000 simultaneous users, the CA-SDB scenario has a latency of around 15,000 ms, while in CA-CDB this value falls to around 5,000 ms. The main difference between the “Detail of Idea” page and the “Home” page is that the former has more content (including images) and more information, and also requires additional lookups in the database to retrieve the information.



Figure 5 - Latency of “Detail Idea” page

Figure 6 - Server processing time of “Detail Idea” page requests

Due to the processing inherent to the “Detail Idea” page, the number of errors is also higher. For instance, when loading the Home page, no errors were observed by the different clients. Figure 7 and Figure 8 depict the error ratios for the “Detail Idea” and “List of Ideas” pages, respectively. The errors include all the replies with an HTTP code different from 200 (the OK status). As pictured, a higher number of concurrent clients introduces more errors. In fact, all the scenarios with 1,000 simultaneous users have significant error ratios (above 15% for the Detail Idea page). Such errors occur due to the high number of requests that are performed simultaneously. The deployment with all the components in a single server has the lowest performance, introducing more errors, and at high rates (around 90%). Splitting the functionalities of the diverse EMPATIA components leads to better performance but introduces more communication overhead in the network, as happens in the cluster all-in-one scenarios (CA-SDB and CA-CDB).

Figure 7 – Ratio of Errors in “Detail Idea” page

Figure 8 - Ratio of Errors in “List Ideas” page

With the obtained results, we were able to validate the impact that the different deployment models have on the EMPATIA platform. We observed a lot of communication overhead in the scenarios with load-balancing proxies, which is perceived by clients either in the form of error responses or of high latency values. Further developments are currently being performed to enhance the performance of the EMPATIA platform in this respect.

The validation of the EMPATIA platform using large amounts of heterogeneous data (Goal #4) is also associated with Goal #5, which includes the validation of the EMPATIA components in different operations.



For instance, voting requires fetching data from the database to check the votes that a user can perform, and also storing the votes performed by the user. Figure 9 and Figure 10 depict the latency and the server processing time for the voting process. A similar pattern is observed in the voting process, with the single all-in-one deployments (SA-SDB and SA-CDB) presenting the worst results, while the cluster all-in-one deployments (CA-SDB and CA-CDB) have the best performance due to the load balancing support.

Figure 9 – Latency in voting process

Figure 10 - Server processing time in voting process

When observing the error ratios in the voting process, the SA deployments also introduce more errors, as depicted in Figure 11, in particular for 500 and 1,000 simultaneous users.

Figure 11 – Ratio of Errors in voting

As stated previously, the voting process includes several operations, such as verifying the user's permission to vote, the number of available votes and the type of vote that can be performed. Such operations require many requests between the Vote and Empatia components, leading to high communication overheads. As such, in the voting scenario error ratios can be observed even with a low number of simultaneous users (e.g. 50), due to this overhead. In fact, the error ratios with a high number of users drop to the same levels as with a low number of users, highlighting the impact of the communication overhead between components.

The voting process is considered one of the main functionalities required in Participatory Budgeting platforms, since proposals are chosen based on the ideas that receive the most votes.



Through the experimentation performed in this project we were able to identify major performance bottlenecks in the EMPATIA platform that had not been noticed before, even in real-world usage scenarios with mid-sized municipalities. At the time of writing this report, we are implementing a significantly revised version of the EMPATIA platform in order to address the performance and scalability issues discovered in this experimentation. Early (and rather incomplete) experiments already suggest this new version will bring major improvements, but an exhaustive evaluation will only be possible once this improved version of the platform becomes fully ready.

B.3 Business Impact

Describe in detail how this experiment may impact your business and product development.

B.3.1 Value perceived

What is the value you have perceived from this experiment (return on investment)? E.g. gained knowledge; acquired new competences; practical implementation solutions such as scalability, reliability, interoperability; new ideas for experiments/products; etc.

Gained knowledge, mainly in terms of deployment models of platforms for participatory budgeting. Practical implementation solutions, by characterizing the performance of the EMPATIA platform in high-load environments, in distinct deployment models and with the distinct operations that such platforms must support.

What was the direct or indirect value for your company / institution? What is the time frame this value could be incorporated within your current product(s) range or technical solution? Could you apply your results also to other scenarios, products, industries?

The value of the conducted evaluation for OneSource is threefold. First, it allowed the optimization of the deployment strategies of the EMPATIA platform, improving the quality of the services already provided to OneSource's customers. Second, it provided clues about the most suitable deployment model of the EMPATIA platform according to client needs. Last but not least, it provided potential new customers with a more complete perspective on the performance of the EMPATIA platform, thus supporting their purchase decision process. This impact can be exploited in a short time frame. An improved version of the platform is already being developed based on the findings of this experimentation.

If no federation of testbed infrastructure would be available, how would this have affected your product / solution? What would have been the value of your product / solution if the experiment was not executed within Fed4FIRE? What problems could have occurred?

With no federation of the testbed infrastructure it would be harder to have near-real EMPATIA load conditions, or a high number of concurrent users performing distinct operations. Without this proximity to real conditions, the results would not have a direct added value to the service offering portfolio of OneSource.


The experiment could be partially executed without using Fed4FIRE. For instance, it could be based on Amazon Web Services. However, the heterogeneity that can be found in the Fed4FIRE testbeds adds value to the characterization of the EMPATIA platform performance: first, by providing near-real conditions; second, by using the federation, which makes it possible to characterize the performance of clients in an Internet environment. A few unexpected problems occurred during the experiment, namely the findings regarding the performance of the platform. The case with 1,000 users led to a higher-than-expected number of errors and to high latency values when serving the pages. These results led us to prioritize the improvement of the platform instead of looking into new functionalities (e.g. experimental mechanisms to profile fraud attempts in the voting process, as initially planned). For this reason, we did not use the facilities provided by the Tengu testbed.

Are there any follow-up activities planned by your company/institution? New projects or funding thanks to this experiment? Do you intend to use Fed4FIRE facilities again in the future?

Yes, the follow-up includes assessing the improved version of the platform, as well as using machine learning mechanisms for fraud detection in voting or proposal submission. Indeed, the evaluation of the improved version of EMPATIA is a target; if it is conducted in Fed4FIRE, it will allow us to have real-world conditions.


B.3.2 Funding

Was the allocated budget related to the experiment to be conducted high enough (to execute the experiment, in relation to the value perceived, etc.)?

Overall, the total costs were in line with the planned budget and with the effort required to conduct the experiment, despite some minor variations regarding the originally planned cost breakdown:

- Total effort was less than expected (3.7 PM, instead of the planned 4.0 PM).
- Travel costs were higher than expected (even without counting the planned participation in the final FEC3 event, March 2018), due to the following missions:
  o 14-16 March 2017, Ghent, Belgium (participation in the FEC1 event)
  o 4-6 October 2017, Volos, Greece (participation in the FEC2 event, with preliminary results of the EMPATIAxxl project)

The budget was also appropriate considering the value impact of the experiment results, allowing a significant multiplicative effect for the performed investment.

Did you receive other funding for executing this experiment besides the money from the Fed4FIRE open call (e.g. internal, national, …)?

OneSource did not receive any other funding for this experiment, besides the money from the Fed4FIRE open call.

Would you (have) execute(d) the experiment without receiving any external funding?

Without any external funding, OneSource would probably not have conducted the experiment. At most, OneSource would have conducted much less extensive experiments, addressing specific requests from specific customers on a case-by-case basis.

Would you even consider to pay for running such an experiment? If so, what do you see as most valuable component(s) to pay for (resources, support, …)?

Without external funding it would be difficult for OneSource to finance its own effort and the usage of the testbed resources. At most, OneSource would conduct specific and more limited experiments in response to specific requests from its customers. Nonetheless, the whole testbed federation of Fed4FIRE does hold significant commercial value, which OneSource would be willing to pay for in different circumstances. In that context, we would prefer a resource-based cost model (with a basic level of support implicitly included), complemented by optional advanced support services.


Section C Feedback to Fed4FIRE

This section contains valuable information for the Fed4FIRE consortium and describes your experiences by running your experiment on the available testbeds. Note that the production of this feedback is one of the key motivations for the existence of the Fed4FIRE open calls.

C.1 Resources & tools used

C.1.1 Resources

Describe the testbeds you have been using and specify the resources used.

Infrastructures | Used? | Type and amount of the resources used

Wired testbeds
• Virtual Wall (iMinds): yes. Depending on the scenario, between 3 and 15 nodes, based on pcgen1 nodes. No public IP addresses were required.
• PlanetLab Europe (UPMC)
• Ultra Access (UC3M, Stanford)

Wireless testbeds
• Norbit (NICTA)
• w-iLab.t (iMinds)
• NITOS (UTH)
• Netmode (NTUA)
• SmartSantander (UC)
• FuSeCo (FOKUS)
• PerformLTE (UMA)

OpenFlow testbeds
• UBristol OFELIA island
• i2CAT OFELIA island
• Koren testbed (NIA)
• NITOS testbed

Cloud computing testbed
• EPCC and Inria cloud sites (members of the BonFIRE multi-cloud testbed for services experimentation)
• iMinds Virtual Wall testbed for emulated networks in BonFIRE

Community testbeds
• C-Lab (UPC)


Did you make use of all requested testbed infrastructure resources, as specified in your open call proposal? If not, please explain.

The original planning included the Tengu testbed, which was not employed since the issues identified in the performance of the platform were given higher priority. The original planning also included the ExoGENI UvA testbed, which was not employed either, as its limitations regarding the number of concurrent users did not allow scaling to accommodate a high number of simultaneous users (on the scale of millions).

What was the ratio between time reserved vs time actually used for each resource? Why does it differ that much (e.g. for interference reasons, other)?

Not applicable: in the testbeds used for experimentation, namely vWall (vWall1 and vWall2), no reservation policies exist. Despite the absence of reservation policies, no constraints were found regarding the time to deploy resources or to renew experiments.

C.1.2 Tools

Describe in detail the tools you have been using, resources used, how many nodes, …

Tools Used? Please indicate your experience with the tools. What were the positive aspects? What didn’t work?

Fed4FIRE portal: yes. The Fed4FIRE portal was used to create the accounts necessary for the experiments, to register keys and to set up all the requirements for the experiments. In addition, the Fed4FIRE portal was used as the entry point for the documentation required to perform the experiments. As a positive aspect of this tool, we point out the good organization of the documentation and the ease of finding instructions specific to the used testbeds.

jFed: yes. jFed was the tool that allowed us to set up all the experiments. We underline the following positive aspects of jFed:

• possibility to resume experiments
• intuitive design of experiments, through an easy-to-use interface
• detailed logging of activity
• connectivity tests
• detailed expiration of resources

In our opinion, possible improvements include:

• jFed lacks an overview of the running experiments and the used resources. For instance, to know which experiments are running at a given moment we need to recover them. It would help to have information regarding the running experiments and the used resources in the general tab.

Omni, SFI


BonFIRE portal, BonFIRE API, Ofelia portal, OMF, NEPI, jFed timeline, OML


C.2 Feedback based on design / set-up / running your experiment on Fed4FIRE

Describe in detail your experiences concerning the procedure and administration, set-up, Fed4FIRE portfolio, documentation and support, experimentation environment, and experimentation execution and results. This feedback will help us for future improvement.

C.2.1 Procedure / Administration

How do you rate the level of work for administration / feedback / writing documents / attending conference calls or meetings compared to the timeframe of the experiment?

The administrative overhead was quite optimized, considering the nature of the experiment. The work spent writing documents was also quite optimized, since the effort was spent mostly on technical documentation (the experiment report), which is a core part of the outcome of the experiment.

C.2.2 Setup of the experiment

How much effort was required to set up and run the experiment for the first time? Did you need to install additional components before you were able to execute the experiment (e.g. install hardware / software components)?

The amount of effort was significant but in line with the complexity of the experiment and the involved resources. The initial effort was related to the creation and deployment of the nodes to configure the diverse EMPATIA components. During this phase we encountered some unavailability of resources in vWall2, leading us to use vWall1. Creating the experiments with jFed, following the examples and documentation, was quite straightforward, although there was some confusion about the different types of machines available, since they all had the same result (probably this depends on the testbed). Besides jFed, no additional components were installed, since PuTTY/Pageant was already deployed.

How do you rate the experience as a user that you only had to deal with a single service provider (i.e. a single point of contact and service) instead of dealing with each testbed itself?

Our experiment ended up using only one testbed but, in any case, we strongly believe that dealing with a single service provider makes the interactions and requests for support easier and leads to a better organized and more concise experiment.

C.2.3 Fed4FIRE portfolio

Was the current portfolio of testbeds provided by the federation, with access to a large set of different technologies (sensors, computing, network, etc.) provided by a large amount of testbeds, sufficient to run your experiment?


The portfolio of resources was appropriate to the requirements of our experiment.

Was the technical offering in line with the expectations? What were the positive and negative aspects? Which requirements could not be fulfilled?

The technical quality of the testbed resources and associated support services was adequate and in line with expectations.

Could you easily access the requested testbed infrastructures?

Available testbed resources could be easily accessed.

Could you make use of all requested resources at the different testbeds as was proposed in the description of the experiment? If not, how many times did this fail? What were the main reasons it failed (e.g. timing constraints, technical failures, etc.)?

Sometimes there were not enough resources available when starting an experiment in the vWall testbed (possibly due to other experiments running in parallel), but this issue was easy to solve and had no significant impact on the experiments.

Did you use a lot the combination of resources over different testbeds? Did it all work out nicely? Were they interoperable?

Due to their nature, the experiments were based on relatively simple combinations, using mainly vWall.

C.2.4 Documentation and support

Was the documentation provided helpful for setting up and running the experiment? Was it complete? What was missing? What could be updated/extended?

The provided documentation was helpful and well organized; no major technical issues were found. The support in both testbeds was adequate and extremely professional.

Did you make use of the first level support dashboard?

During the experiment, all the support was provided by email, and the first level support dashboard was not necessary.

Did you contact the individual testbeds for dedicated technical questions?


Yes, Imec for the vWall testbed.


C.2.5 Experiment environment

Was the environment trustworthy enough for your experiments (in terms of data protection, privacy guarantees of yourself and your experiment)?

Our experiments do not raise concerns about privacy or data protection.

Did you have enough control of the environment to repeat the experiment in an easy manner?

Yes.

Did you experience that the Fed4FIRE environment is unique for experimentation and goes beyond the lab environment and enables real world implementation?

Yes.

Did you share your experiment and/or results with a wider community of experimenters (e.g. for greater impact of results, shared dissemination, possibility to share experience and knowledge with other experimenters)? If not, would you consider this in the future?

As part of objective 5.3 (“Publish scientific papers disseminating the results of the experiments”), a research paper combining the description of the EMPATIA platform with the analysis of the EMPATIAxxl experiments is currently being prepared for submission to the IFIP Networking 2018 Conference (http://networking.ifip.org/2018/), which will take place in Zurich, Switzerland, in May 2018. A preliminary version of this paper submission is presented in Annex B.

C.2.6 Experiment execution and results

Did you have enough time to conduct the experiment?

Yes, although we point out that additional time is required to analyse the extensive tests we have performed, and possibly to evaluate in more depth the enhanced version of EMPATIA, which should become available around December 2017.

Were the results below / in line with / exceeding your initial goals and expectations?

The achieved results were relevant for us. We were able to identify several performance issues in the platform. Such issues were not expected on our side, but the experimentation was quite important to proceed with enhancements. The results show that performance depends on the deployment model and on the database load-balancing mechanisms. The high number of nodes available in Fed4FIRE was crucial to perform a realistic evaluation.


What were the hurdles / bottlenecks? What could not be executed? Was this due to technical limits? Would the federation or the individual testbeds be able to help you solving this problem in the future?

As stated previously, we did not accomplish all the goals we defined for our experiments. The reasons for this are mainly related to the performance issues found in the EMPATIA platform, which led us to reinforce the effort/experiments on this topic and to pay less attention to experiments with fraud detection in the voting process.

C.2.7 Other feedback

If you have other feedback or comments not discussed before related to the design, set-up and execution of your experiment, please note them below.

The Fed4FIRE support could use a Slack channel, where feedback or questions could be shared among participants.

C.3 Why Fed4FIRE was useful to you

Describe why you chose Fed4FIRE for your experiment, which components were perceived as most valuable for the federation, and your opinion on what you would like to have had, what should be changed or what was missing.

C.3.1 Execution of the experiment

Why did you choose Fed4FIRE for your experiment? Was it the availability of budget, easy procedure, possibility to combine different (geographically spread) facilities, access to resources that otherwise would not be affordable, availability of tools, etc.? Please specify in detail.

The easy procedure to enrol and to perform an experiment is for sure one of the main advantages of Fed4FIRE. The combination of different facilities, spread over different geographic areas and with heterogeneous configurations, is also a relevant factor in the choice of Fed4FIRE. The availability of budget was also important, but the availability of resources, which allows us to simulate real-world scenarios, is the most important criterion when choosing Fed4FIRE. When running our experiments, we never felt that a question, a comment or a request for support would stay without a reply. The support is indeed one of the main features that works quite well in Fed4FIRE!

Could you have conducted the experiment at a commercially available testbed infrastructure?

The experiments could run on a commercial testbed infrastructure, but that would require more work, since such infrastructures do not have tools like jFed to orchestrate the experiment, and the number of available resources would not be as high as in Fed4FIRE.


C.3.2 Added value of Fed4FIRE

Which components did you see as highly valuable for the federation (e.g. combining infrastructures, diversity of available resources, tools offered, support and documentation, easy setup of experiments, etc.)? Please rank them in order of importance.

We rank the added value of Fed4FIRE in the following order:

• Infrastructures
• Available resources
• Support and documentation
• Easy setup of experiments
• No management-burden activities

Which of these tools and components should the federation at least offer to allow experimentation without funding?

In our opinion the following tools should be provided to allow experimentation without funding:

• Some infrastructures/resources (vWall2 and generic nodes)
• jFed
• Support and documentation

C.3.3 What is missing from your perspective?

What would you have liked to have had within Fed4FIRE (tools, APIs, scripts, …)? Which tools and procedures should be adapted? What functionality did you really miss?

Fed4FIRE has its own tools, which offer a lot of flexibility in terms of running experiments. Nonetheless, in our opinion, some additional functionalities would be welcome:

• Possibility to modify running experiments (through APIs, scripts, commands, tools), for instance to add additional nodes in a dynamic way.
• Zabbix support. This tool is extensively used as a monitoring solution.
• Aggregated view of running experiments and used resources. For a given time slot it is also relevant to know whether further resources can be used, without trying to schedule new experiments.

Which (types of) testbed infrastructures (and resources) would have been very valuable for you as experimenter within the Fed4FIRE consortium?

Cloud resources would be interesting to test dynamic deployment models, which deploy additional components according to the load at a given moment. This would allow, for instance, testing horizontal scaling mechanisms.

Is there any other kind of support that you would expect from the federation, which is not available today, e.g. some kind of consultancy service that can guide you through every step of the process of transforming your idea into an actual successful experiment and eventually helping you to understand the obtained results?

The offered support was adequate. The sessions provided at the FEC events are relevant contributions to the knowledge of the Fed4FIRE federated tools and testbeds.


C.3.4 Other feedback

If you have further feedback or comments, not discussed before, on how Fed4FIRE was useful to you, please note them below.

C.3.5 Quote

We would also like to have a quote we could use for further dissemination activities. Please complete the following sentence.

Thanks to the experiment we conducted within Fed4FIRE, we were able to identify performance issues in the EMPATIA platform in advance, allowing it to be improved before being publicly released in larger real-world scenarios, where such issues would have created serious problems in the Participatory Budgeting processes supported by EMPATIA.


Annex A: Complementary Experiment Results

In this Annex we present a selected set of complementary results of the EMPATIAxxl experiment which were not included in the main part of the report, for the sake of readability and conciseness, but which are nonetheless useful for understanding the experiment. The full set of results can be provided on demand.

Table 7 presents the latency measured in the SA-SDB scenario for the first run, considering the different numbers of users in the tests. One can see that the first requests have higher latency, since they include the time required to establish a connection.

Table 7 – Latency results for the Homepage in the SA-SDB scenario (measured for 1, 10, 50, 100, 500 and 1,000 simultaneous users)

Another metric that was also assessed is the throughput over time for the “Detailed Idea” page with authenticated users, as depicted in Table 8 for 500 users in the scenarios SA-SDB, SA-CDB, SS-SDB, SS-CDB, CA-SDB and CA-CDB. As one can observe, the scenarios with the best throughput are the SS and CA scenarios, where higher throughput values are achieved and kept constant during the test.


Table 8 – Throughput over time (bytes/sec) of 500 authenticated users on the “Detailed Idea” page, for the SA-SDB, SA-CDB, SS-SDB, SS-CDB, CA-SDB and CA-CDB scenarios

Considering the CA-CDB scenario, Table 9 depicts the throughput results of the Voting page for the different numbers of users, while Table 10 depicts the latency results of the Voting page.


Table 9 – Throughput (bytes/sec) results for the Voting page in the CA-CDB scenario (measured for 1, 10, 50, 100, 500 and 1,000 simultaneous users)

The latency comprises several values, since voting requires obtaining the login details (brown line), verifying the login credentials (green line) and performing the vote (yellow line).


Table 10 – Latency results for the Voting page in the CA-CDB scenario (measured for 1, 10, 50, 100, 500 and 1,000 simultaneous users)


Annex B: Research paper based on the experiment

As part of objective 5.3 (“Publish scientific papers disseminating the results of the experiments”), a research paper combining the description of the EMPATIA platform with the analysis of the EMPATIAxxl experiments is currently being prepared for submission to the IFIP Networking 2018 Conference (http://networking.ifip.org/2018/), which will take place in Zurich, Switzerland, in May 2018.

A preliminary version of this paper submission is presented in this Annex.


An ICT platform for multiple channels participation in democratic innovations

Pedro Valente1, Bianca Flamigni2, Luca Foschini2, Bruno Sousa1,3, Vitor Fonseca1, Paulo Simoes3, Luis Cordeiro1

1 OneSource, Consultoria Informatica Lda., Coimbra, Portugal
2 DISI, University of Bologna, Italy
3 CISUC-DEI, University of Coimbra, Portugal

Abstract—Participatory budgeting is currently one of the most adopted democratic innovations taken up by municipalities worldwide. ICT platforms are key enablers of this democratic innovation, supporting citizen engagement in the participatory process. These tools support the establishment of different participation channels for citizens to vote on and monitor the projects that best suit the needs of their neighborhood. Despite the benefits of such platforms in supporting social activism, they have associated drawbacks, as they assume that users have the ICT skills to use the available tools, and they commonly provide only one channel for participation. The EMPATIA platform supports multiple channels for participatory budgeting, by providing tailored hardware components that promote social inclusion and by supporting different deployment models. The results achieved in the evaluation demonstrate that the platform provides different performance values according to the deployment model.

Index Terms—Participatory budgeting, ICT platform

I. INTRODUCTION

Participatory Budgeting (PB) is currently one of the most adopted democratic innovations taken up by municipalities worldwide to further improve the living conditions and wellbeing of communities [1]. The PB process, involving municipalities, organizations and citizens, includes distinct phases: the brainstorming phase, which envisions the discussion of ideas and the identification of potential projects; the project selection phase, which leads to the selection of the projects that will receive budget; and the final phase, which includes the monitoring of project execution and is commonly known as the 2nd cycle of a participatory project.

ICT platforms are key enablers of this democratic innovation, by supporting social activism and citizen engagement in the participatory process [2]. The tools and applications included in such platforms establish channels that allow citizens to vote on and monitor the projects that best suit the needs of their neighborhood and/or their municipality. In this context, multiple tools and platforms exist to enable participatory democracy. For instance, Your Priorities [3] aims to foster the participation of citizens, OpenDCN [4] is designed to support participatory budgeting and OpenBudgets [5] enables the tracking and analysis of financial information. The usage of such tools has been proven to promote the engagement of citizens in democratic innovations, as they provide the opportunity for citizens to express their opinion and to better shape the living conditions of their neighborhoods [6]. However, these platforms often fail to promote social inclusion, since they are designed for specific groups of people, typically young persons or persons with high levels of education.

EMPATIA is a European project [7] promoting multiple channel participation, by defining and implementing new tools, and by integrating interfaces and best practices for citizen engagement, putting particular emphasis on simplicity and on the capacity of being used by a diverse range of actors with different cultural skills and degrees of ICT literacy.

From another perspective, ICT platforms, tools and applications supporting the activities of communities need to provide flexible and self-managed mechanisms to accommodate the involved users (from thousands to millions) [8]. The characteristics of these activities introduce diverse technical and social requirements that ideally must be fulfilled. In this scope, the architecture of EMPATIA supports multiple deployment models: ranging from bare-metal servers with all the components (all-in-one), to cloud infrastructures compliant with the x Software as a Service (xSaaS) paradigm, and containers adhering to DevOps practices that facilitate customization and the addition of new functionalities. This way, community networks and municipalities are not bound to closed platforms or tools that do not accommodate the customization requirements of participatory budgeting projects. EMPATIA supports a modular architecture, where additional functionalities can be added by plugging in new components, following a simple, efficient design with scalable APIs.

The main contribution of this paper is the design and validation of an ICT platform enabling multiple channel participation in democratic innovations. The EMPATIA platform has been designed to support multiple deployment models, considering the requirements of different municipalities and communities. The proposed platform has been validated through large-scale experiments in Fed4FIRE+ testbeds [9] and has been deployed in several European cities to support participatory processes.

The remainder of this article is organized as follows: Section II discusses tools and platforms that support democratic innovations; Section III presents the EMPATIA architecture and its diverse components enabling multiple channel participation; the evaluation methodology used to validate the EMPATIA platform is described in Section IV, while the achieved results are discussed in Section V. Section VI concludes the paper.


II. RELATED WORK

Participatory democracy is an envisioned goal in Europe, and several municipalities have been empowered with platforms to foster the engagement of citizens. This section overviews the most relevant platforms and tools, considering their deployment model support, the PB functionalities they support, their support for privacy and personal data protection, and their open data support, such as data export functionality.

Your Priorities [3] is a platform developed by a non-profit citizens' foundation of Iceland to enable the participation of citizens. This platform supports the Software as a Service paradigm and the all-in-one deployment model. Through this platform, citizens can vote on proposals, receive notifications, and provide ideas and proposals. Additionally, it integrates with social platforms, but lacks APIs that can be used by other services, by mobile applications or by other components.

On the other side, New Zealand and Wales have used the Loomio [10] platform to engage users in the process of improving the municipality where they live. Loomio includes more functionalities than Your Priorities, supporting analytics to monitor the participation of citizens and the possibility to run questionnaires in the platform. It also supports two-factor authentication via the HTTPS protocol and provides APIs that enable access to functionalities by third-party services or tools. Nonetheless, Loomio runs in free and paid models, where the free version has some limitations, such as in the open data model, where no export functionalities are supported.

Other platforms focus on specific functionalities, like budgeting. In this context, the CitizenBudget [11] platform aims to build better community relationships and to educate users regarding their budget consultation. CitizenBudget supports the 2nd cycle process, where projects can be monitored after being approved by citizens and the municipality, but on the other hand it does not provide API support and has limited support for exporting data.

Several municipalities run participatory democracy processes through closed-source and proprietary platforms, like LiberOpinion [12]. This platform also supports the 2nd cycle and the transparent execution of projects, where users can follow the progress of a project. Qualitative monitoring is also supported, with events/milestones or through groups of stakeholders involved in a proposal. Nonetheless, this platform has limited support for exporting data and limited exposure of functionalities through dedicated APIs.

Open-source solutions like OpenDCN [4] aim to provide the foundations for participatory budgeting functionalities, by supporting different vote mechanisms, informed discussion, among others. In fact, this platform is used by other projects, like BiPart [13], to manage and monitor the different steps of a participatory process, in order to reduce the costs for municipalities. This solution lacks support for relevant functionalities like the 2nd cycle phase of a project, despite the extensive support of OpenDCN for participatory project requirements, such as informed discussion and brainstorming.

Several platforms have been funded by the EU to promote the collaboration of citizens in distinct aspects of their community, like D-CENT [14] and MyNeighbourHood [15], in particular to promote the discussion, sharing and promotion of events that might be of community interest. Other projects, like OpenBudgets [5], provided the OpenSpending platform [16] to enable the tracking and analysis of public financial information. In this context, OpenSpending allows data to be exported in distinct formats and through dedicated APIs. In the same line, the Opentender platform [17], designed in the DigiWhist project [18], also provides open data support and accountability, therefore promoting the transparency of processes.

The participatory budgeting process is not fully supported, or is supported with limitations, within the existing projects and platforms. Such limitations include the targeted audience, where people interacting with such platforms need to have a certain proficiency in ICT technologies. Additionally, a single channel is typically provided for participation. Nowadays, municipalities or communities running participatory processes need to use diverse platforms, due to the limitations of individual platforms in supporting all the requirements to run a PB process.

EMPATIA [7] is highly innovative and provides a clear distinction when compared with related platforms and tools, in enabling the support of multiple channels through specific hardware components and in being simple to use for voting processes, in addition to the tools and mobile applications commonly found in this kind of platform.

III. EMPATIA PLATFORM

This section presents the EMPATIA platform to support participatory budgeting.

A. Platform

The architecture and each component of the EMPATIA platform have been designed to be scalable and redundant, to support fault tolerance and different deployment models. The deployment models include: all-in-one deployment (e.g. deployment in a single server); cloud deployment (e.g. public clouds like Amazon Web Services, Windows Azure, OpenStack or others); and a virtualization approach based on VMware, Xen/Citrix and Docker containers. The architecture design aims to enable EMPATIA as a Service (EMPATIAaaS).

The main requirements of participatory processes are supported by the core concepts of the platform, which include:

• Spaces, distinct management areas for users (e.g. citizens), management (e.g. municipalities) and administration (e.g. the platform provider).

• Community Building, to promote collaboration between citizens and municipalities and/or communities through different means, like forums, ideas/problems, proposals, discussions and brainstorming.

• Multichannel Participation, to support several participation models, whether in community building, voting, questionnaires or budgets, supporting participation via applications, tools or specific hardware components designed to be inclusive and not to require high levels of ICT literacy.

• Analytics, informed reports and infographics for the different entities using the platform.

• Notification, email, SMS, push notifications or other formats that allow a user to receive alerts regarding comments on her ideas/proposals, or the selection of a project she has voted for positively.

• Social Media integration, Facebook, Google Plus and Twitter integration.

• External tools interoperability, like Freecoin, D-CENT tools or others that may be deployed at municipalities.

• Voting, to support multiple voting methods, such as multiple, negative, positive and weighted votes, according to the definitions of the PB process.

• Translation, to enable translation services using translation APIs provided by third-party services.

The high-level architecture of EMPATIA is highlighted in Fig. 1, with the diverse components and two levels of APIs: a public interface and a private interface for inter-component communication within EMPATIA (or with external tools). The APIs comply with REST and JSON technologies. Through the public APIs, the components of the EMPATIA platform can provide additional functionalities that can be used by other services (e.g. the user interface or other platforms).

The architecture includes several components, following the modular principle, each one with specific functionalities and interfaces. The EMPATIA component aggregates the functionalities of several sub-components (i.e., Authentication, PAD, and others), as they enable the core concepts of the platform. These sub-components are described in isolation, but there is a single public API and a single private API in the EMPATIA component. Worth mentioning, the Authentication component assures the authentication of users and devices in the platform and controls the access to resources, while the Orchestrator component is responsible for managing the interactions between the different components.

Fig. 1: EMPATIA architecture (Web User Interface; EMPATIA API – public interface; EMPATIA API – inter-component interface; External tools; and the EMPATIA sub-components: Authentication, PAD, Analytics, Questionnaires, Voting, Content Management, Design, Files, Monitoring, Notifications, Events, Orchestrator, Projects and OpenData)

The entry point of the platform is the Web User Interface (WUI) component, which also exposes specific APIs for particular components like the Kiosk or for mobile applications.

Each component of the architecture is designed to function independently and to work in a modular fashion, with specific and well-identified functionalities, as pictured in Fig. 2. To support different deployment models, in a single machine (all-in-one) or in several virtual machines (public or private clouds), the components are designed to be scalable and fully redundant. Each component is composed of: a NoSQL Redis database to manage component session information and information caches (for performance optimization); a MySQL persistent database; and, when required, a high-performance and high-availability shared filesystem (like Hadoop).

Fig. 2: Architecture of EMPATIA components (Public API, load balancers, multiple component instances with a Secure API, Redis databases for sessions, MySQL databases for persistent data, and Hadoop files, the latter only on the Files component)

The architecture of the EMPATIA components aims to address scalability, redundancy and fault-tolerance goals, which are achieved through load balancers that receive the public API requests (e.g. through multiple Apache proxies or High Availability servers) and forward these requests to one of the multiple component servers (through a secure connection). Redis, MySQL and Hadoop already include scalability, redundancy and fault-tolerance mechanisms.

The number of load balancers, component servers, and Redis, MySQL and Hadoop nodes depends on the required levels of scalability, redundancy and fault tolerance. These components can also be strategically placed in different geographical areas to address service placement requirements (e.g. deploying each replica of each server in a different cloud datacenter to ensure failover redundancy in case of a geographical catastrophe).

B. EMPATIA components

This section details the components of the EMPATIA architecture, highlighting their main functionalities and features. The components of the EMPATIA platform have been designed to support the main functionalities required in Participatory Budgeting platforms to support participatory processes. More specifically, the Voting component manages the vote process, while the 2nd phase component allows users to monitor the execution of projects, promoting transparency.

The Analytics component supports real-time and batch/deferred analytics. To that end, this component extracts data from the other components and from the logging component, where the monitored Key Performance Indicators (KPIs) are kept. This component performs data extraction, aggregation, transformation, presentation in dashboards and streaming to other components.

The Authentication component, a core component (included in the EMPATIA component), manages personal data and assures the security processes to perform authentication. This component manages the validation of JSON Web Tokens (JWT), which are required in all the transactions after a successful authentication of users. OAuth support is also assured by this component. Authentication also manages the authorization process required in the management of spaces (i.e. user, manager and administrator), as well as in specific proposals, where a user can only participate if she is allowed to.
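As an illustration of how this JWT-based flow can be exercised by a client of the public API, the sketch below authenticates a user and reuses the returned token in a subsequent transaction. This is only a sketch: the endpoint paths and response fields are hypothetical, since the exact API contract is not specified in this paper.

import requests

BASE_URL = "https://empatia.example.org/api"   # hypothetical deployment URL

# 1. Authenticate and obtain a JSON Web Token (JWT); the /auth/login path
#    and the payload/response fields are illustrative only.
resp = requests.post(f"{BASE_URL}/auth/login",
                     json={"email": "[email protected]", "password": "secret"},
                     timeout=10)
resp.raise_for_status()
token = resp.json()["token"]                    # assumed response field

# 2. Reuse the JWT in every subsequent transaction, as required by the
#    Authentication component.
headers = {"Authorization": f"Bearer {token}"}

# 3. Example of an authenticated call, e.g. casting a vote on an idea
#    (hypothetical endpoint and fields).
vote = requests.post(f"{BASE_URL}/votes",
                     json={"idea_id": 42, "vote": "positive"},
                     headers=headers,
                     timeout=10)
print(vote.status_code)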

The Community Building (CB) or PAD, also included in the EMPATIA component, assures the functionalities related to the creation of participation, collaboration and communities, such as forums, ideas/problems and discussions. For such, this component manages different configurations, like the definition of moderators and the creation of topics, posts, forums and other items envisioned for collaboration in communities.

The Content Management component assures the functionalities related to static content, such as sites, pages, menus, news and other types of content. Translation support is also assured by this component.

The Events component provides all the necessary mechanisms to organize and manage an event, such as a conference, a workshop or a brainstorming session. To do this, it allows the creation of events and sessions and the association of speakers and sponsors, while also giving the possibility to manage the participants in the events and to accept registrations and payments from participants.

The Multichannel Participatory (MP) component is responsible for the management of a participation model. Through the creation and configuration of CBs, voting phases, voting methods, other EMPATIA component features and specific MP parameters (e.g., budget), participation models (e.g. Participatory Budgeting) are transposed to the EMPATIA platform.

The Orchestrator is a core component that manages the interaction between all components (which are built to be independent), correlates data between different components, and provides management facilities for EMPATIA-related configurations and authorizations.

The Questionnaire (Q) component allows the creation of dynamic questionnaires with different kinds of questions/answers, while also presenting them and storing the user answers. This component provides an integrated approach to implement surveys or opinion-gathering processes from users within the platform. The Q component has functionalities similar to tools like LimeSurvey [19].

The Vote component provides the means to store all kinds of votes, whether they are likes on forum posts or votes on proposals, which are only available during certain configurable periods and from certain users. The component supports different types of votes, including negative votes, positive votes, weighted votes, votes by SMS, in-person votes and paper blind votes. According to the configuration of the PB, the component can also support anonymous voting, and the voting can be performed through different channels: through the platform (i.e., the web interface) or through dedicated equipment like kiosks. The voting process is central to the PB process; as such, the participation of persons with different levels of literacy can be achieved in the voting phase through kiosks that provide simple but effective interfaces for voting, typically with two buttons, a red one for negative votes and a green one for positive votes.

The Web User Interface (WUI) is the frontend of the EMPATIA platform, providing the interfaces to interact with all the backend components and to perform all support actions. The interface is divided into two main areas, public and private, with the private area only accessible to managers and administrators. Each area has specific functionalities, as required by the user, manager and administration spaces. Typically, the administration area allows the management of entities, languages and currencies; the managers' area allows the management of users and contents and the configuration of the PB process (see Section III-D); while the user space is employed for authenticating users and providing information for participation.

The Logs and Files components work as auxiliary components that allow the storage of logging information and the management of files, respectively. The Notification component is also employed to allow the management of communication channels (e.g. email, SMS, push and real-time notifications).

C. Technologies, tools and protocols

The EMPATIA platform uses the Laravel framework [20], in versions 5.3 and 5.4, to implement the different components. A component skeleton has been designed to implement the designed architecture of the EMPATIA components (recall Section III-A).

To store the platform data, two different database technologies are used: MySQL, a Structured Query Language (SQL) relational database where all the persistent data is stored; and Redis, an in-memory data structure store keeping data in a key/value approach, which is used to store temporary data, such as sessions and cache.
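A minimal sketch of this data split is shown below, assuming a cache-aside pattern in which a component first looks up hot data in Redis and falls back to the MySQL persistent store on a miss. This is not the actual EMPATIA code: connection settings, table and key names are hypothetical.

import json
import redis
import pymysql

# Hypothetical connection settings for one EMPATIA component.
cache = redis.Redis(host="localhost", port=6379, db=0)
db = pymysql.connect(host="localhost", user="empatia", password="secret",
                     database="empatia", cursorclass=pymysql.cursors.DictCursor)

def get_idea(idea_id: int) -> dict:
    """Return an idea, using Redis as a cache in front of the MySQL store."""
    key = f"idea:{idea_id}"                      # hypothetical key naming scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                # cache hit: temporary data from Redis

    with db.cursor() as cur:                     # cache miss: read persistent data
        cur.execute("SELECT id, title, votes FROM ideas WHERE id = %s", (idea_id,))
        idea = cur.fetchone()

    if idea is not None:
        cache.setex(key, 60, json.dumps(idea))   # keep it cached for 60 seconds
    return idea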

The EMPATIA platform is cloud-ready and can be deployed on bare metal, using Docker containers or virtual machines (VMware/VirtualBox appliances). The EMPATIA platform is released as open source [21].

D. PB configuration process

One advantage of the EMPATIA platform is the flexible management of a PB process. The manager of the platform can configure the details of the PB process in simple steps, as detailed below. The first step is the creation of the entity in the platform, with a short description of the municipality. The second step is the creation of a website, where the aesthetic aspects (e.g. colors, logos) and the social media integration are defined, more specifically whether the website supports login using social platforms like Facebook. The third step is associated with the requirements of participatory processes, like the definition of the terms of service and privacy policies. Within this step, the website is also customized with the information to use on the homepage (i.e., the CMS page) to promote the PB process. The fourth step includes the configuration of the schedules associated with the PB, like the start and end dates of the proposal submission, of the technical review and of the vote. The final step includes the configurations associated with the vote process, where the types of vote are associated to the PB. The types of vote that can be associated include the following (a minimal illustrative sketch of such a configuration is given after the list):

• Likes - unlimited votes, one per proposal;
• Multi vote - limited number of votes, only employed if associated to the PB process;
• Negative vote - limited number of votes, only employed if associated to the PB process;
• Budget vote - every user has a budget that can be distributed over multiple proposals or a single one.
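To make these configuration steps more concrete, the sketch below expresses a PB process definition as a plain Python data structure. This is only an illustration: all field names and values are hypothetical, since the actual configuration format used by the EMPATIA managers' area is not described in this paper.

# Hypothetical representation of a PB process configuration,
# mirroring the five configuration steps described above.
pb_process = {
    "entity": {"name": "Example Municipality", "description": "PB 2018 edition"},
    "website": {"colors": "#005599", "logo": "logo.png", "facebook_login": True},
    "legal": {"terms_of_service": "tos.html", "privacy_policy": "privacy.html"},
    "schedule": {
        "proposal_submission": ("2018-01-01", "2018-02-28"),
        "technical_review": ("2018-03-01", "2018-03-31"),
        "voting": ("2018-04-01", "2018-04-30"),
    },
    # Vote types associated with this PB process (see the list above).
    "votes": {"type": "multi", "votes_per_user": 2, "negative_votes": False},
}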

IV. EVALUATION METHODOLOGY

The validation of the EMPATIA platform has been performed in a cloud deployment, specifically using the iMinds vWall1 and vWall2 testbeds from the Fed4FIRE project [9], which allow a large-scale evaluation.

A. Evaluation Scenarios

To evaluate the EMPATIA platform we have implemented six distinct cloud configurations, in order to identify the best one for each component, database and server, but also to determine any bottlenecks or performance limitations. We used three different configurations for the EMPATIA platform:

• SA – “single all-in-one”, one server with all the EMPATIA components deployed;

• CA – “cluster all-in-one”, a cluster of SA server instances;

• SS – “single splitted”, one server per EMPATIA component.

The nodes with the databases are configured in two flavors:

• SDB – a single instance with all the databases;

• CDB – a cluster of nodes with synchronized databases, configured with load-balancing capabilities.

Table I lists all the deployments with the respective architecture components, considering the different configurations of the EMPATIA platform and databases.

Scenario   Web Servers   Databases   Proxies
SA-SDB     1             1           0
SA-CDB     1             3           1
CA-SDB     3             1           1
CA-CDB     3             3           2
SS-SDB     10            1           0
SS-CDB     10            3           1

TABLE I: Evaluation scenarios

The proxies reported in Table I correspond to high-availability load balancer nodes, which are in charge of dispatching all the TCP/HTTP requests within the cluster of servers or the cluster of database nodes. The proxies use the HAProxy solution [22] to perform efficient load balancing, and can be configured to provide high availability. Fig. 3 illustrates the deployment details of the CA-CDB scenario, highlighting the load balancer nodes for the database nodes and for the nodes with the EMPATIA components. The number of client nodes depends on the number of simultaneous users configured in the tests.

Fig. 3: Deployment details of CA-CDB scenario

The validation has been performed in a cloud context, more specifically using the imec vWall1 testbed. The EMPATIA components and database nodes were configured as pcgen1 nodes, which have 2x dual-core AMD Opteron 2212 (2 GHz) CPUs, 8 GB RAM and four 80 GB hard disks. The client nodes were also deployed in the same testbed, to avoid impairments of connections between multiple Fed4FIRE testbeds.

B. Evaluation Parameters

The EMPATIA platform is highly customizable to accommodate the specificities of each municipality or participatory process. The settings used in the evaluation correspond to a single entity (i.e. municipality) with 30k users registered in the platform. This entity has around 1k ideas/proposals that can be voted on and discussed by citizens. Users perform around 60k votes, that is, the configuration of the PB process allows two votes per user.

Despite the flexibility of the platform in supporting different communities, in the conducted evaluation the users are configured to belong to the same community, and as such a single Community Building is employed. A single site is used in the platform to list all the ideas.

The numbers of simultaneous users accessing the platform are 1, 10, 50, 100, 500 and 1,000. These users can perform operations in anonymous (ANO) or authenticated (AUTH) mode, as presented in Table II for the different tested pages.

Page              Description                           Modes
Home Page         Main page of the EMPATIA platform     ANO, AUTH
List of Ideas     Page with the list of ideas           ANO, AUTH
Details of Idea   Page with the details of an idea      ANO, AUTH
Vote              Page to vote on a specific idea       AUTH

TABLE II: Evaluated Pages

Each test runs in a time frame of 10 minutes, with several repetitions.


Fig. 4: Latency of Home Page (per deployment model; 50, 100, 500 and 1000 users)

Fig. 5: Server Processing Time for the Home Page (per deployment model; 50, 100, 500 and 1000 users)

Fig. 6: Throughput at Home page with SDB and 1000 users (run 1)

Fig. 7: Throughput at Home page with CDB and 1000 users (run 1)

C. Performance Metrics

To measure the performance of the diverse EMPATIA components, Apache JMeter version 3.2 [23], an open-source desktop Java application, was employed. JMeter is a simple tool able to simulate a heavy load on a server by replicating the execution of requests over a wide variety of protocols (e.g. HTTP, HTTPS, SOAP/REST).

The test plans configured in the scripts executed by JMeter include diverse steps, like the definition of the Thread Groups, which correspond to the number of users to simulate. Additionally, the rate at which requests are performed is also configured: whether all requests are issued simultaneously, or whether they are performed over a certain duration after a ramp-up period. The evaluation follows a navigation path, in order to evaluate the components with higher priority: home page, list of ideas with anonymous and authenticated users, detail of an idea with anonymous and authenticated users, and vote.

The authentication of the different users used an auxiliary CSV file with the users' credentials. For each user, the JWT tokens were obtained in order to perform operations requiring authentication. The voting page uses the recording option supported by JMeter to mimic the user behavior (e.g., pressing the button for voting).

The default values of the HTTP Request component in JMeter have been used; no changes were made to the connection timeout or to the request timeout thresholds. The performance metrics measured in the evaluation are summarized in Table III.
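For illustration purposes, the sketch below reproduces the essence of such a test plan outside JMeter: credentials are read from a CSV file, each simulated user obtains a JWT and then walks the navigation path, and a thread pool emulates the simultaneous users of a Thread Group. The base URL, login endpoint and response field are hypothetical; the actual experiments were driven by JMeter test plans.

import csv
import concurrent.futures
import requests

BASE_URL = "https://empatia.example.org"    # hypothetical deployment URL
PATH = ["/", "/ideas", "/ideas/42"]         # navigation path: home, list, detail

def simulate_user(email: str, password: str) -> list:
    """Authenticate one user and walk the navigation path, recording latencies."""
    session = requests.Session()
    # Hypothetical login endpoint returning a JWT, mirroring the CSV-driven
    # authentication used in the JMeter test plans.
    login = session.post(f"{BASE_URL}/api/auth/login",
                         json={"email": email, "password": password}, timeout=10)
    session.headers["Authorization"] = f"Bearer {login.json()['token']}"
    latencies = []
    for page in PATH:
        resp = session.get(f"{BASE_URL}{page}", timeout=10)
        latencies.append((page, resp.status_code, resp.elapsed.total_seconds()))
    return latencies

with open("users.csv") as fh:               # CSV with one "email,password" row per user
    users = list(csv.reader(fh))

# Emulate N simultaneous users, similarly to a JMeter Thread Group.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(lambda u: simulate_user(*u), users[:50]))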

V. RESULTS

This section presents the results obtained during the validation and performance assessment of the EMPATIA platform.

Metric                   Unit      Description
Connection Time          ms        Time to establish the connection with the WUI components, including the TCP and SSL handshake.
Latency                  ms        Time between the first request and the reception of the first response.
Server Processing Time   ms        Time taken by the server to reply to a request: SPT = Latency - ConnectionTime.
Errors                   %         Percentage of requests with codes associated with errors (e.g. 40X, 50X).
Elapsed Time             ms        Time between the first request and the reception of the last response.
Response Codes           -         HTTP codes of the response messages sent by the EMPATIA components.
Size of messages         bytes     Size of messages and replies, in bytes.
Bytes throughput         bytes/s   Byte throughput over time.

TABLE III: Performance Metrics
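The sketch below shows how the metrics of Table III can be summarized from a JMeter results file (CSV/JTL), assuming it contains the label, elapsed, Latency, Connect, responseCode and bytes columns (saving the Connect column may need to be enabled in the JMeter save configuration).

import csv
from collections import defaultdict

# Summarize a JMeter results file into the metrics of Table III.
per_label = defaultdict(lambda: {"n": 0, "spt": 0.0, "errors": 0, "bytes": 0})

with open("results.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        stats = per_label[row["label"]]
        stats["n"] += 1
        # Server Processing Time = Latency - ConnectionTime (Table III).
        stats["spt"] += int(row["Latency"]) - int(row["Connect"])
        stats["bytes"] += int(row["bytes"])
        if not row["responseCode"].startswith("2"):   # 40X / 50X and similar
            stats["errors"] += 1

for label, s in per_label.items():
    print(f"{label}: avg SPT = {s['spt'] / s['n']:.1f} ms, "
          f"error ratio = {100 * s['errors'] / s['n']:.1f} %, "
          f"total bytes = {s['bytes']}")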

A. Support for deployment models

The results validating the support of diverse deployment models of the EMPATIA platform are analysed together, considering the latency and error rate metrics. The latency and the server processing time, depicted in Fig. 4 and Fig. 5 for the home page, vary according to the concurrency of users. All the scenarios present a higher latency when the number of concurrent users is higher (i.e., 1000) (e.g., 01-SA-SDB, 02-SA-CDB). The performance distinction between the deployment models is not so evident for this page (as it is, for instance, for the Detail Idea page), since this page has a low volume of data. In fact, due to the small size of the page, the load balancing introduced in the cluster scenarios adds more overhead, leading to higher delays in retrieving the content.


Fig. 8: Latency of Detailed Idea (per deployment model; 50, 100, 500 and 1000 users)

Fig. 9: Server Processing of Detailed Idea (per deployment model; 50, 100, 500 and 1000 users)

For instance, considering 500 simultaneous users, all the DB cluster (x-CDB) scenarios have higher latency values. According to these results, the load balancing of the databases might decrease the performance in terms of the time required to retrieve content. Nonetheless, the gains are evident in terms of throughput: with 1000 simultaneous users, the use of DB clusters allows higher throughput, as demonstrated in Fig. 6 and Fig. 7, where the cluster database nodes are able to provide higher and constant throughput levels during the tests.

On the other hand, when considering pages with more content and a higher volume of data, like the Detail of an Idea, the observed values of the Latency and Server Processing metrics differ, as highlighted in Fig. 8 and Fig. 9, respectively. At a first glance, the performance difference introduced by the load-balanced database scenarios (x-CDB) is now more evident, since the load balancing is able to provide the content in reduced time intervals and with a constant and high throughput. Considering the case with 1000 simultaneous users, the CA-SDB scenario has a latency of around 15 s, while in CA-CDB this value falls to 5 s. The main difference between the Detail of Idea page and the home page is that the former has more content, including images and more information, and also requires additional lookups in the database to retrieve the information.

Due to the processing inherent to the Detail Idea page, the number of errors is also higher. For instance, when loading the home page, no errors were observed by the different clients. Fig. 10 and Fig. 11 depict the error ratio for the Detail Idea and List of Ideas pages, respectively. The errors include all the replies with an HTTP code different from 200 (the OK status). As pictured, a higher number of concurrent clients introduces more errors; in fact, all the scenarios with 1000 simultaneous users have significant error ratios (above 15% for the Detail Idea page). Such errors occur due to the high number of requests performed simultaneously. The deployment with all the components in a single server has the lowest performance, introducing more errors and at high rates (around 90%). The logic of splitting the functionalities of the diverse EMPATIA components leads to better performance, but introduces more communication overhead in the network, as happens in the cluster all-in-one scenarios (CA-SDB and CA-CDB).

Fig. 10: Error ratio in Detail Idea (per scenario; 50, 100, 500 and 1000 users)

Fig. 11: Error ratio in List Ideas (per scenario; 50, 100, 500 and 1000 users)


Fig. 12: Latency in Voting (per deployment model; 50, 100, 500 and 1000 users)

Fig. 13: Server Processing Time in Voting (per deployment model; 50, 100, 500 and 1000 users)

Fig. 14: Throughput for Voting in SA-CDB scenario with 500 users (run 1)

Fig. 15: Throughput for Voting in SS-CDB scenario with 500 users (run 1)

Fig. 16: Throughput for Voting in CA-CDB scenario with 500 users (run 1)

With the obtained results, we have been able to validate the impact that the different deployment models have on the EMPATIA platform. We observed considerable communication overhead in the scenarios with load-balancing proxies, which is perceived by clients either in the form of error ratios or of high latency values. Currently, developments are being performed to enhance the performance of the EMPATIA platform. Such enhancements include the de-normalization of tables in the database and avoiding the calculation of statistics associated with votes, topics or comments in every request.

B. Support for heterogeneous operations

This subsection discusses the results regarding the validation of the EMPATIA platform using large amounts of heterogeneous data and the validation of the EMPATIA components under different operations. For instance, voting requires fetching data from the database to check the votes that a user can perform (number and type of votes) and also storing the votes performed by the user. Fig. 12 and Fig. 13 depict the Latency and the Server Processing Time in the voting process. The same pattern occurs in the voting process, with the single all-in-one deployment models (SA-SDB and SA-CDB) presenting the worst results, while the cluster all-in-one models (CA-SDB and CA-CDB) have the best performance, due to the load-balancing support. In terms of throughput, the performance is better for the scenarios where there is load balancing (CA and SS), as depicted in Fig. 14, Fig. 15 and Fig. 16 for the SA, SS and CA scenarios with 500 simultaneous users and the CDB deployment. The low performance and heterogeneous throughput in the SA scenario is in line with the latency and error ratios for voting, since the throughput levels fluctuate and are not constant, unlike in the CA or SS deployments.

When observing the error ratios in the voting process, the SA deployments also introduce more errors, as depicted in Fig. 17, in particular for 500 and 1000 simultaneous users.

Fig. 17: Error ratios in Voting (per scenario; 50, 100, 500 and 1000 users)

As stated previously, the voting process includes several operations, like verifying the users' permissions to vote, the number of available votes and the type of vote that can be performed. Such operations require many requests between the Vote and the remaining EMPATIA components, leading to high communication overheads.


As such, in the voting scenario, error ratios can be observed even with a low number of users (50), due to this overhead. In fact, the error ratios with a high number of users drop to the same levels as with a low number of users, putting in evidence the impact of the communication overhead between components.

The voting process is considered one of the main functionalities required in Participatory Budgeting platforms, since proposals can be chosen based on the ideas that received more votes. With this in mind, the deployment model for the voting support must be carefully addressed. As the results have demonstrated, load-balancing support is crucial, both in the frontend components of the EMPATIA platform and in the backend, such as the database nodes. A poor user experience in voting (i.e. high latency, or requiring the user to perform the same operation several times due to errors) leads to low acceptance of the platform in communities engaging in participatory processes.

VI. CONCLUSION

Participatory budgeting is a powerful realization of participatory democracy. Citizens can elect, prioritize, comment on and improve project proposals enhancing living conditions, and monitor their execution. The engagement of users is only possible with ICT platforms, integrating several tools and applications, that are able to provide acceptable performance levels.

This paper presented and validated the EMPATIA platform, designed to support multiple channels for participatory budgeting and to accommodate distinct deployment models, thereby addressing the requirements associated with participatory processes to support core functionalities like voting, ideation and proposal management. The evaluation of the EMPATIA platform demonstrated the flexibility of the platform to support different deployment models (cloud, all-in-one), and the acceptable performance levels for voting or detailing the ideas of projects, even with a high number of simultaneous users. The evaluation conducted in this paper highlights the support of the EMPATIA platform for all-in-one and cloud deployment models, enabling EMPATIAaaS.

The EMPATIA platform has been validated in several pilots, where feedback from users, municipalities and participatory communities has been gathered in order to enhance the performance of the platform. Different deployment models have been used to support the pilots, some using bare-metal servers in the all-in-one model, while others deployed the components in several virtual machines (e.g. similar to the SS deployment herein evaluated).

Our next steps include the optimization of some components, like the PAD component, to keep the information of the top projects in the Redis database, and a deeper evaluation of components not targeted in this evaluation, like the Projects component, which enables the technical analysis of the participatory process by municipalities and the monitoring of project execution by users.

ACKNOWLEDGEMENTS

This work was carried out with the support of the EMPATIA project (grant agreement No 687920) and the Fed4FIRE+ project (grant agreement No 732638), funded by the European Commission.

REFERENCES

[1] P. Spada and G. Allegretti, “Integrating multiple channels of engagement in democratic innovations: Opportunities and challenges,” Handbook of Research on Citizen Engagement and Public Participation in the Era of New Media, p. 20, 2016.
[2] A. Pathak, V. Issarny, and J. Holston, “AppCivist - A Service-Oriented Software Platform for Socially Sustainable Activism,” Proceedings - International Conference on Software Engineering, vol. 2, pp. 515–518, 2015.
[3] “Your Priorities,” https://yrpri.org/domain/3 [Last visit: 05-January-2018].
[4] “OpenDCN,” http://www.opendcn.org/ [Last visit: 05-January-2018].
[5] “OpenBudgets,” http://openBudgets.eu [Last visit: 05-January-2018].
[6] M. Hall and S. Caton, “Online engagement and well-being at higher education institutes: A German case study,” in 2016 IFIP Networking Conference (IFIP Networking) and Workshops, IFIP Networking 2016, 2016, pp. 542–547.
[7] K. K. Kapoor, A. Omar, and U. Sivarajah, “Enabling Multichannel Participation Through ICT Adaptation,” International Journal of Electronic Government Research (IJEGR), vol. 13, no. 2, pp. 66–80, 2017.
[8] L. Maccari, “On the Technical and Social Structure of Community Networks,” 2016 IFIP Networking Conference (IFIP Networking) and Workshops, pp. 518–523, 2016.
[9] Fed4FIRE, “Federation for Future Internet Research and Experimentation,” http://www.fed4fire.eu/testbeds/, 2016.
[10] “Loomio,” https://www.loomio.org/ [Last visit: 05-January-2018].
[11] “CitizenBudget,” http://citizenbudget.com/ [Last visit: 05-January-2018].
[12] “LiberOpinion,” https://liberopinion.com/ [Last visit: 05-January-2018].
[13] “Bilancio Partecipativo - Bipart,” http://www.bipart.it/ [Last visit: 05-January-2018].
[14] “D-CENT,” http://dcentproject.eu/ [Last visit: 05-January-2018].
[15] “MyNeighbourHood,” http://www.my-n.eu/ [Last visit: 05-January-2018].
[16] “OpenSpending,” https://openspending.org [Last visit: 05-January-2018].
[17] “Opentender,” https://opentender.eu/ [Last visit: 05-January-2018].
[18] “DIGIWHIST: The Digital Whistleblower,” http://digiwhist.eu/ [Last visit: 05-January-2018].
[19] LimeSurvey, “LimeSurvey,” https://www.limesurvey.org/ [Last visit: 05-January-2018].
[20] “Laravel framework,” https://laravel.com/ [Last visit: 05-January-2018].
[21] “EMPATIA Project repository,” https://github.com/EMPATIA [Last visit: 05-January-2018].
[22] “HAProxy - The Reliable, High Performance TCP/HTTP Load Balancer,” http://www.haproxy.org/ [Last visit: 05-January-2018].
[23] Apache Software Foundation, “Apache JMeter,” http://jmeter.apache.org/ [Last visit: 05-January-2018].