INTER-TRUST – ICT FP7- G.A. 317731
© INTER-TRUST Consortium 1 / 69
Interoperable Trust Assurance
Infrastructure
Gap and Standards Analysis First Version
Type (distribution level) Public
Contractual date of Delivery 28-02-2013
Actual date of delivery 28-02-2013
Deliverable number D2.2.1
Deliverable name Gap and Standards Analysis First Version
Version V1.0
Number of pages 69
WP/Task related to the deliverable
WP2/T2.2
WP/Task responsible UMU
Author(s) Fernando Pereñíguez, Jose Santa, Antonio F. Skarmeta
Partner(s) Contributing UMU, URV, INDRA, SCYTL, UMA
Document ID INTER-TRUST-T2.2-UMU-DELV-D2.2.1-GapStandardsAn-First-V1.00
Abstract This document identifies gaps in the current state of the art in the technology areas that are fundamental to the Inter-Trust framework. These include the computer network threat space and mitigation approaches, in particular computer security advisory and support services for computer security users. Additionally, the document develops a detailed gap and standards analysis related not only to the Inter-Trust objectives but also to the e-voting and V2X/ITS domains in which the Inter-Trust outcomes will be tested. This document will be continuously updated to inform the requirements engineering activity about new advances in the state of the art.
Executive summary
This document is a public deliverable (D2.2.1) of Inter-Trust (Interoperable Trust Assurance Infrastructure), an FP7 European project that aims at developing a dynamic and scalable framework to support trustworthy services and applications in heterogeneous networks and devices, based on the enforcement of interoperable and changing security policies, addressing the needs of developers, integrators and operators. The purpose of this document is to analyse existing research work, standards and technologies fundamental to the Inter-Trust project in order to identify gaps to be covered by the developed security framework. The study carried out in this document includes an exhaustive analysis of the current state of the art related to the Inter-Trust objectives, namely: security in heterogeneous and pervasive systems, modelling of secure interoperability policies with time constraints, trust negotiation, use of AOP to enforce security requirements, and monitoring and testing techniques. Additionally, for the sake of completeness, the gap analysis is not limited to the project objectives but also covers the two scenarios where the Inter-Trust framework will be tested: e-voting and V2X/ITS. For each use case, the document presents the communication architecture, participating entities, security threats and the security objectives to be satisfied, according to the existing standard specifications for these fields. As a result of this exhaustive analysis, one of the most relevant contributions of this document lies in the identification of the specific security requirements to be satisfied by the different use cases.
This document will be continuously updated to inform the requirements engineering activity about new advances in the state of the art. Additionally, it will serve as input to the standards collaboration activity developed as part of the dissemination and exploitation of the Inter-Trust project. Updates to the presented gap and standards analysis will be reported in a new version of this deliverable, to be published in December 2014.
Table of Contents
1 INTRODUCTION .......... 6
1.1 SCOPE OF THE DOCUMENT .......... 6
1.2 APPLICABLE AND REFERENCE DOCUMENTS .......... 6
1.3 REVISION HISTORY .......... 6
1.4 NOTATIONS, ABBREVIATIONS AND ACRONYMS .......... 7
1.5 GLOSSARY .......... 9
2 GAPS AND STANDARDS ANALYSIS RELATED TO THE INTER-TRUST OBJECTIVES .......... 11
2.1 SECURITY IN HETEROGENEOUS AND PERVASIVE SYSTEMS .......... 12
2.2 MODELLING OF SECURE INTEROPERABILITY POLICIES WITH TIME CONSTRAINTS .......... 13
2.3 TRUST NEGOTIATION .......... 14
2.4 USE OF AOP TO ENFORCE SECURITY REQUIREMENTS .......... 15
2.5 MONITORING TECHNIQUES .......... 16
2.6 ACTIVE TESTING, FUZZ TESTING AND FAULT REMOVAL TECHNIQUES .......... 17
2.7 INNOVATION IMPACT ON USE-CASES .......... 18
2.7.1 Remote Voting .......... 18
2.7.2 Security and Privacy in V2x .......... 19
3 GENERAL THREATS TO NETWORK AND INFORMATION SECURITY .......... 21
3.1 ASSETS AND NETWORK THREATS .......... 21
3.2 COMMON NETWORK ATTACKS .......... 22
3.3 COMPUTER SECURITY ADVISORY SERVICES .......... 24
4 GAPS AND STANDARDS ANALYSIS FOR COOPERATIVE ITS .......... 27
4.1 OVERVIEW OF COOPERATIVE ITS .......... 27
4.1.1 Cooperative ITS Applications .......... 29
4.1.2 Cooperative ITS Facilities .......... 29
4.2 ITS ARCHITECTURE .......... 30
4.3 ITS APPLICATION CLASSES AND USE CASES .......... 33
4.4 SECURITY REQUIREMENTS OF ITS APPLICATIONS .......... 38
4.4.1 ITS Security Objectives .......... 38
4.4.2 Privacy Commitment .......... 39
4.4.3 Analysis of Security Requirements for ITS Applications .......... 40
4.5 SECURITY SERVICES .......... 42
5 GAPS AND STANDARDS ANALYSIS FOR ELECTRONIC VOTING .......... 45
5.1 OVERVIEW OF ELECTRONIC VOTING .......... 45
5.1.1 Electronic Voting Systems .......... 45
5.2 ELECTRONIC VOTING ARCHITECTURE .......... 46
5.3 SECURITY REQUIREMENTS OF ELECTRONIC VOTING .......... 49
5.3.1 Electronic Voting Security Objectives .......... 49
5.3.2 Electronic Voting Security Risks .......... 50
5.3.3 Analysis of Security Requirements for Electronic Voting Applications .......... 51
6 USE CASES SECURITY ANALYSIS REQUIREMENTS .......... 57
6.1 E-VOTING USE CASE .......... 57
6.2 VEHICLE-TO-INFRASTRUCTURE USE CASE: DYNAMIC ROUTE PLANNING .......... 58
6.3 VEHICLE-TO-VEHICLE USE CASE: CONTEXTUAL SPEED ADVISORY .......... 59
7 CONCLUSIONS .......... 61
8 REFERENCES .......... 62
ANNEX.A ITS APPLICATIONS COMMUNICATION BEHAVIOUR .......... 69
1 Introduction
1.1 Scope of the document
The requirements of the INTER-TRUST framework are elicited from four sources:
• User needs (User Pull), included in deliverable D2.1.x Requirements Specification.
• State of the art (Technology Push), included in deliverable D2.2.x Gap and Standards Analysis.
• Market needs and gaps (Market Pull), included in deliverable D2.4.x Market Analysis.
• Socio-economic constraints, included in deliverable D2.5.x Legal, Social and Economical Constraints.
Together, these four deliverables provide a complete view of the Inter-Trust requirements.
Inter-Trust adopts an incremental approach: all the deliverables will be issued in two versions, an initial one with the requirements defined in the first project cycle, and a final one with the definitive, full-scope requirements elicitation.
The present deliverable, D2.2.1, addresses the second source and aims at identifying gaps in the current state of the art in the technology areas that are fundamental to the Inter-Trust framework. These include the computer network threat space and mitigation approaches, in particular computer security advisory and support services for computer security users. Additionally, the document develops a detailed gap and standards analysis related not only to the Inter-Trust objectives but also to the e-voting and V2X/ITS domains in which the Inter-Trust outcomes will be tested. This document will be continuously updated to inform the requirements engineering activity about new advances in the state of the art.
1.2 Applicable and reference documents
This document refers to the following documents:
D2.1.1 Requirements Specification First Version
1.3 Revision History
Version Date Author Description
0.1 08/01/2013 UMU Initial document draft including ITS application security requirements
0.2 31/02/2013 UMU Extended draft version including UMU contribution (sections 2, 3, 4.2 and 6.3) and URV contribution (section 4.4.2)
0.3 12/02/2013 UMU Added contribution from SCYTL. Completed conclusions section.
0.4 22/02/2013 UMA Reviewed the entire document and added text to the section discussing the AOP frameworks for security gaps
0.5 25/02/2013 SL Reviewed the document and provided comments (suggestions added to Section 3, on threats, and Section 4.4, on security requirements)
1.0 27/02/2013 UMU Updated version following comments received from UMA and SL (final version)
1.4 Notations, Abbreviations and Acronyms
Abbreviation Full Name
AOP Aspect-Oriented Programming
APPS Application Server
AR Access Router
BSA Basic Set of Applications
CAM Cooperative Awareness Message
CEN European Committee for Standardization
C-ITS Cooperative Intelligent Transport Systems
DDoS Distributed Denial of Service
DENM Decentralized Environmental Notification Message
DoS Denial of Service
DPI Deep Packet Inspection
ETSI European Telecommunications Standards Institute
HMAC Hash Message Authentication Code
IDPS Intrusion Detection and Prevention Systems
IEEE Institute of Electrical and Electronics Engineers
IP Internet Protocol
ISO International Organization for Standardization
ITS Intelligent Transport Systems
ITS-S ITS Station
LBAC Location Based Access Control
MAC Message Authentication Code
ME Management Entity
MH In-Vehicle Host
MR Mobile Router
NIC Network Interface Card
OBU On-‐Board Unit
OSI Open Systems Interconnection
PC Personal Computer
PIN Personal Identification Number
PKI Public Key Infrastructure
RSU Road Side Unit
SA Security Association
SAP Service Access Point
SE Security Entity
SLA Service Level Agreement
SSL Secure Sockets Layer
TCP Transmission Control Protocol
TVRA Threat, Vulnerability and Risk Analysis
V2I Vehicle to Infrastructure
V2V Vehicle to Vehicle
V2x/ITS Vehicle to Vehicle and Vehicle to Infrastructure
VANET Vehicular Ad-hoc Network
VoIP Voice over IP
XACML eXtensible Access Control Markup Language
XML eXtensible Markup Language
1.5 Glossary
For the purposes of the present document, the following terms and definitions apply:
Access Control: security objective providing limited access to information, such that only authenticated and authorized users can access the protected resources.
Anonymity: basic level of privacy provisioning that allows an entity to remain unidentified. Typically, this is achieved by concealing identification-related data, such as identities.
Anonymization: the act of removing personal identifiers from information so that it cannot be associated with an entity in any manner.
Attacker: malicious entity that intends to carry out a security attack.
Authentication: security service confirming the truth of an attribute of an entity (e.g. identity in case of identity authentication).
Authorization: security service ensuring that users can perform only those actions they have the right to perform.
Confidentiality: security service ensuring that information access is limited to authorized parties.
Credential: physical/tangible object, piece of information or facet of a person's physical being that enables an entity to identify itself to a system (e.g. a service). A credential can be something we know (e.g. a password), something we have (e.g. an identity card) or something we are (e.g. a biometric feature).
Digital Certificate: piece of information that, by means of a digital signature, binds a public key to an identity. The certificate is useful to verify that a public key belongs to a certain entity.
Digital Signature: cryptographic operation that allows demonstrating the authenticity of a message. The entity receiving a digitally signed message can validate that a known sender created the message and that the message has not been altered.
Encryption: process of encoding information in such a way that attackers cannot read it and only authorized parties can.
Integrity: security service ensuring that information has not been changed inappropriately.
Privacy: security objective aimed at preserving the users' right to protect personal information from unauthorized access.
Pseudonym: identity of an entity different from the original one. Pseudonyms are widely used in privacy to hide the real identity of an entity.
Pseudonymization: procedure by which identification data are replaced by fictitious identifiers (e.g. replace an identity with a pseudonym).
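As an illustration of the last definition, a common pseudonymization technique is keyed hashing: the same identity always maps to the same pseudonym under a given key, but the mapping cannot be reversed without the key. The following minimal Python sketch uses only the standard library; the key value and the truncation length are illustrative choices, not prescribed by any standard.

```python
import hmac
import hashlib

def pseudonymize(identity: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an identity using HMAC-SHA256.

    The mapping is deterministic under a given key, but cannot be
    reversed (or even recomputed) without knowing the key.
    """
    digest = hmac.new(secret_key, identity.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

key = b"election-authority-secret"  # hypothetical key
alias = pseudonymize("voter-42", key)
assert alias == pseudonymize("voter-42", key)  # deterministic replacement
assert alias != pseudonymize("voter-43", key)  # distinct identities differ
```

Note that, unlike anonymization, this procedure is reversible in principle by whoever holds the key, which is why pseudonymized data are still considered personal data in many legal frameworks.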
Public Key Cryptography: cryptographic system based on the existence of two keys associated with an entity, referred to as the private key and the public key. While the public key is published and can be known to other entities, the owner keeps the private key secret. The system works in such a manner that what one key encrypts, the other decrypts, and vice versa.
Security Attack: attempt to destroy, expose, alter, disable, steal or gain unauthorized access/use of a system asset.
Security Association: establishment of the security attributes necessary to secure the communication between two network entities. For example, a security association negotiates parameters such as the cryptographic algorithms and keys to be used.
Security Threat: possible danger that might exploit a vulnerability of a communication system in order to compromise security and cause harm.
Timestamp: piece of information that allows determining when a certain event occurred. For example, when attached to a message, a timestamp is useful to determine when a message has been generated by an entity.
2 Gaps and Standards Analysis Related to the Inter-Trust Objectives
Inter-Trust brings together renowned security, monitoring and testing experts and practitioners, as well as industrial partners active in applied security domains, to address existing problems encountered in the design and development of secure interoperable pervasive systems. On the one hand, there is a lack of sufficiently rich techniques to tackle the problems of security policy modelling, interoperability, deployment, enforcement and supervision. On the other hand, there is a lack of techniques that allow security mechanisms to be dynamically adapted to changes in the requirements and in the environment.
The Inter-Trust project will define a new architectural approach to solve the aforementioned problems. In particular, Inter-Trust aims at achieving the following objectives:
• Security in heterogeneous and pervasive systems: Computing capabilities are becoming pervasive as they are increasingly embedded into mobile connected devices. Users expect to access resources and services from anywhere. This raises serious security issues, as devices are constantly interacting with each other. The context in which these interactions take place is used to define the security requirements that must be fulfilled.
• Modelling of secure interoperability policies with time constraints. Modelling a secure interoperability policy refers to finding out whether the security policies of two different entities of a heterogeneous system can work together. It can be viewed as a set of contracts, negotiated between the different entities, that are applied to control their interoperation.
• Trust negotiation. Trust negotiation is an approach for gradually establishing trust between interacting parties that are strangers to each other. It is based on the bilateral disclosure of information, called digital credentials, which the negotiating parties use to prove their trustworthiness. The essential elements of a trust negotiation process are: digital credentials, disclosure policies and negotiation strategies.
• Use of AOP to enforce security requirements. Aspect-Oriented Programming (AOP) is a programming technique that improves modularity by separating crosscutting concerns (e.g. security-related concerns) into a new module called an "aspect". AOP allows security requirements to be enforced by "weaving" these aspects into applications, changing their configuration and behaviour so that they respect these requirements.
• Monitoring techniques. Monitoring involves examining the execution traces of a system in order to verify expected properties. Monitoring is based on passive testing, i.e. the observation of the system traces without interfering with the system’s normal operation [3] [4].
• Active testing, fuzz testing and fault removal techniques. Active testing is essentially focused on verifying the conformity of a given implementation to its specification. The implementation under test is executed with specific inputs and the observed outputs are checked against the specification. Interoperability testing refers to finding out whether two
different components of a heterogeneous system can work together. Fuzz testing involves the injection of faults and other unexpected inputs, in the form of syntactically correct but semantically invalid messages, in order to check the robustness of the system under test. This technique is a particular form of active testing. Fault removal aims to detect the presence of design and implementation faults, and to locate and remove them.
• Innovation impact on use-cases. Although the project outcomes will be cross-domain, Inter-Trust will show the benefits of the developed framework when applied to the e-voting and V2X/ITS domains.
o Remote Voting. Electronic voting, also known as e-voting, is a term covering several different types of voting, embracing both electronic means of casting a vote and electronic means of counting votes. In general, two main types of e-voting can be identified [5] [6]: e-voting that is physically supervised by representatives of governmental or independent electoral authorities (e.g. electronic voting machines located at polling stations), and remote e-voting, where voting is performed within the voter's sole influence and is not physically supervised (e.g. voting from one's personal computer, mobile phone or television, via the Internet).
o Security and privacy in V2x. Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) communication (V2x in general) represent the wireless exchange of data between vehicles and roadway infrastructure, intended to make services available to users. In V2x, users expect to access new on-board telematic services, connected to a remote infrastructure and to other vehicles, from almost anywhere.
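To make the AOP objective above more concrete, the following Python sketch mimics aspect weaving with decorators: the authorisation concern lives in one module and is woven around application functions instead of being scattered through them. Real aspect weavers such as AspectJ operate at the source or bytecode level; all names here (requires_role, open_polling_station, the caller structure) are hypothetical illustrations, not part of the Inter-Trust framework.

```python
import functools

# A security "aspect": crosscutting authorisation logic kept in one place,
# woven into application functions rather than duplicated inside each one.
def requires_role(role):
    def aspect(func):
        @functools.wraps(func)
        def woven(caller, *args, **kwargs):
            # "Advice" executed before the join point (the function call).
            if role not in caller.get("roles", []):
                raise PermissionError(f"{caller['name']} lacks role {role!r}")
            return func(caller, *args, **kwargs)
        return woven
    return aspect

# The application code stays free of security logic; the aspect is woven in.
@requires_role("election_official")
def open_polling_station(caller, station_id):
    return f"station {station_id} opened by {caller['name']}"

official = {"name": "alice", "roles": ["election_official"]}
print(open_polling_station(official, 7))  # allowed: role present
```

Because the aspect is a separate module, the enforced policy can be changed (or removed) without touching the application functions themselves, which is the adaptability property Inter-Trust relies on.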
In the following we present a comprehensive study of the state of the art related to the different objectives of the project. This analysis presents not only research work and standards related to each domain but also the problems needing attention from the research community and, in particular, from the Inter-Trust project. In this sense, we would like to clarify that, after conducting an exhaustive search in the different key areas related to Inter-Trust, this study mainly analyses research results from the academic community, since little standardisation work addressing the project objectives has been found.
2.1 Security in Heterogeneous and Pervasive Systems
Traditionally, user authentication and access control provide security in computer networks. However, the way in which these mechanisms are currently designed and implemented is inadequate for the new set of challenges and problems raised by pervasive computing systems. Problems encountered include heterogeneity, the lack of centralised control and a dynamic environment made up of changing mobile devices. Wireless technologies are playing a major role in enabling devices to spontaneously form short-range, short-term ad hoc networks. Some research works have focused on allowing applications built for wired networks to run in a wireless context. Such works have been developed by the research community in contexts such as web access from mobile platforms [27] and database access [28]. However, these approaches have not been conceived for use in pervasive environments and are unable to provide the required level of security in pervasive systems, where devices are peers and can be both providers and consumers of services. Given the dynamic nature of pervasive systems, the context in which communications
between devices take place is an important input for defining the security policy that applies [29]. This contextual information can be very rich and include data such as the device location. In [30], a Location Based Access Control (LBAC) model is used to tackle the problem of privacy and security in pervasive systems. However, it focuses mainly on the enforcement of policies depending on the spatial position of a user, to ensure the privacy of his or her personal information. Several research works have investigated the problem of security in pervasive environments for specific applications such as sensor networks [31], but the context of a pervasive environment with heterogeneous devices with, for example, processing power restrictions has not yet been tackled.
With respect to privacy, different methods and techniques have been proposed for preserving privacy. These methods may be classified into the following categories: perturbative masking methods [32] [33] [34] [35]; non-perturbative masking methods [34]; synthetic and hybrid data methods [36] [37]; and cryptographic methods. In this last category, some cryptographic techniques also apply to privacy-preserving data mining, for example encryption, secret sharing or multiparty computation [38]. These techniques have the advantage of offering a well-defined model for privacy that may be subject to proof and quantification, and of having an extensive set of tools and algorithms. Although cryptographic methods may ensure that the transformed data are exact, they are often not very efficient.
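A minimal sketch of the first category, perturbative masking, under the simplifying assumption of additive zero-mean Gaussian noise on a numeric attribute (the function name and parameters are illustrative, not taken from the cited works):

```python
import random

def perturb(values, scale=1.0, seed=0):
    """Perturbative masking: add zero-mean Gaussian noise to each record,
    masking individual values while keeping aggregate statistics
    approximately useful."""
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    return [v + rng.gauss(0.0, scale) for v in values]

ages = [23, 35, 41, 58, 62]
masked = perturb(ages, scale=2.0)

# Individual values are perturbed, but the mean survives approximately.
mean_error = abs(sum(masked) / len(masked) - sum(ages) / len(ages))
```

This illustrates the trade-off the text describes: unlike cryptographic methods, the masked data are no longer exact, but the transformation is cheap and the data remain directly usable for analysis.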
Summarizing, the INTER-TRUST project is required to solve these problems by covering the following gaps:
• Develop novel models that take into account generic contextual information to address heterogeneous pervasive environments. This information is necessary to ensure security requirements such as privacy, anonymity and availability of services.
• Develop a framework to support trustworthy interoperability in heterogeneous networks and devices, based on the enforcement of dynamic security policies. This framework will enable the use of different existing privacy and anonymity techniques by introducing security policies that regulate them, including delegation schemes to support devices with limited resources and data sharing control between interoperating systems.
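The first gap, taking contextual information into account, can be sketched as a context-aware policy check: rules match on contextual attributes of the interaction (here, network and location), not only on the caller's identity. The Context attributes, rule format and action names below are illustrative assumptions, not the Inter-Trust policy language.

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str   # e.g. zone reported by the device
    network: str    # e.g. "trusted-lan" or "public-wifi"

# Context-aware rules: the permission depends on the situation, not only
# on who the requester is.
RULES = [
    {"action": "read_record",   "require": {"network": "trusted-lan"}},
    {"action": "update_record", "require": {"network": "trusted-lan",
                                            "location": "office"}},
]

def is_permitted(action: str, ctx: Context) -> bool:
    for rule in RULES:
        if rule["action"] == action:
            return all(getattr(ctx, attr) == val
                       for attr, val in rule["require"].items())
    return False  # default deny for unknown actions

# Reading is allowed from any location on the trusted network,
# but updating additionally requires being in the office.
assert is_permitted("read_record", Context("cafe", "trusted-lan"))
assert not is_permitted("update_record", Context("cafe", "trusted-lan"))
```

Because the decision is re-evaluated against the current context, the same requester can gain or lose a permission as their environment changes, which is exactly the dynamic behaviour pervasive systems need.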
2.2 Modelling of Secure Interoperability Policies with Time Constraints
A security policy is a set of rules that defines the security objectives to be respected by a system. It includes defining permissions, prohibitions, obligations, delegations, etc. Permissions or authorisations can be capability-based. Scalable solutions, which can be more easily deployed on devices with different computing capabilities, can be obtained using capability-based security approaches [39] [40] [41] [42].
A security policy can describe local rules that are to be applied internally by a system, or rules that define interactions or communication with external systems. In the latter case we can refer to it as a security Service Level Agreement (SLA) (e.g. [43] proposes this concept for clouds and [44] for Web Services). SLAs are usually static, but in some cases they are dynamic as, for instance, in [45], where they are used for on-demand packaging of resources in the cloud.
Previous work on security policy specification has shown how each entity involved in a given interaction must be able to negotiate its interoperability security policy with the other parties. Several languages have been designed and extended to cover interoperability issues. The issue of modelling secure interoperability policies was addressed in [46]. This work defines an extension of the OrBAC (Organisation Based Access Control) model called O2O (Organisation to Organisation), which provides the means for different parties wanting to interoperate to define their secure interoperability policy [47].
However, although this work focused on access control, it does not address usage control requirements, as suggested in the UCON (Usage Control) model [49] and further formalised using obligations with deadlines in the NOMAD model [50]. Thus, the definition of interoperable security policies with both access control and usage control requirements is still an open issue, as is the deployment of interoperable security policies.
Furthermore, access control policies express conditions that must be met at the time of access to a resource. Policies with time constraints allow the expression of temporal conditions that must be met during the whole time a resource is used. This kind of policy belongs to the set of usage control policies. Existing models can be used to model such constraints, but frameworks enabling the implementation and integration of such policies in applications are still to be defined.
To solve the aforementioned problems, the INTER-‐TRUST project is expected to cover the following gaps:
• To provide secure interoperability policy formalisms and techniques for security experts and software developers and other parties involved in software development.
• To provide a new paradigm for interoperability security policy specification, based on O2O (Organisation to Organisation) but including usage and time constraints.
• To enable the use of capabilities to manage access control to device’s features and services, including the definition of contextual rights delegation, fine-‐grained access control rights, and mechanisms to preserve privacy and personal information.
• To provide secure interoperability policy formalisms with a formal semantics and syntax to support tool automation, that is scalable for large complex systems. This modelling formalism will be able to express a certain number of distinct interoperability security policies, security SLAs contracts and relationships beyond those expressed by existing formalisms (UCON, OrBAC, NOMAD, O2O, and XACML).
• To define new tools to express policies with time constraints (e.g. a security policy editor), together with APIs to interpret and integrate the policies written with them. These tools will be used to express the policies for the e-voting and V2X use cases; the latter, in particular, introduces safety-related applications with very strict time constraints.
• To allow security SLAs to be dynamically negotiated and deployed. To our knowledge, no existing research studies the use of such “dynamic security SLAs”.
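To make the notion of a usage control policy with a time constraint concrete, the following sketch models, in the spirit of UCON/NOMAD, a rule whose obligation carries a deadline: access is granted on a condition, and the usage is revoked if the obligation is not fulfilled in time. All class and attribute names are illustrative, not part of any existing formalism.

```python
import time

class UsageRule:
    """A usage-control rule: an access condition plus an obligation with a deadline."""
    def __init__(self, resource, condition, obligation, deadline_s):
        self.resource = resource          # protected resource
        self.condition = condition        # predicate checked at access time
        self.obligation = obligation      # action that must occur during usage
        self.deadline_s = deadline_s      # seconds allowed to fulfil it

class UsageSession:
    """Tracks one ongoing usage of a resource and its pending obligation."""
    def __init__(self, rule, now=None):
        self.rule = rule
        self.start = now if now is not None else time.monotonic()
        self.fulfilled = False

    def fulfil(self, action):
        if action == self.rule.obligation:
            self.fulfilled = True

    def still_permitted(self, now=None):
        """Usage is revoked once the deadline passes with the obligation unmet."""
        now = now if now is not None else time.monotonic()
        return self.fulfilled or (now - self.start) <= self.rule.deadline_s

def request_access(rule, ctx):
    """Access-time check (classic access control) opening a monitored usage session."""
    if rule.condition(ctx):
        return UsageSession(rule, now=0.0)
    return None

rule = UsageRule("vote-record", condition=lambda ctx: ctx["authenticated"],
                 obligation="log_access", deadline_s=30)
session = request_access(rule, {"authenticated": True})
print(session.still_permitted(now=10.0))   # True: still within the deadline
session.fulfil("log_access")
print(session.still_permitted(now=100.0))  # True: obligation was met in time
```

The point of the sketch is the gap itself: the access-time check and the ongoing usage check are separate mechanisms, and it is the latter that existing policy frameworks do not yet integrate into applications.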
2.3 Trust Negotiation
Different approaches have been proposed to consider trust in information systems, trust negotiation being one of the most widely used concepts. The best-known trust negotiation models are TrustBuilder [54] and Trust-χ. TrustBuilder is better adapted to negotiating trust in dynamic coalitions, while Trust-χ is, according to its authors, especially conceived for peer-to-peer environments. There are also approaches that combine access control mechanisms with trust. An access control (AC) model with trust establishment assumes that the user is known to the system in advance, but trust negotiation models can be used as part of access control models.
However, these approaches do not distinguish between trust management and access control management: the disclosure policies used during the trust negotiation process are access control policies, so satisfying a disclosure policy directly grants access to the resource. Besides, the disclosure policies are expressed in an attribute-based model, which limits the integration of trust negotiation to systems whose access control is attribute-based, as in the ABAC (Attribute-Based Access Control) model. A system that is not trust-aware would have to be drastically modified to integrate trust negotiation.
Consequently, the INTER-TRUST project is challenged to propose a generic system in which trust management is independent of the access control model. The trust negotiation mechanism must be integrated without major changes to the access control system, which nevertheless needs to become trust-aware. To accomplish this objective, the modules in charge of trust negotiation (collecting the attributes defined by attribute-based disclosure policies) will be kept separate from the access control policies.
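The decoupling just described can be sketched in a few lines: a negotiator that only collects and checks attributes against a disclosure policy, and a separately pluggable access control engine that consumes the result. Everything here (class names, the role-based engine, the attributes) is hypothetical and only illustrates the separation of concerns.

```python
class TrustNegotiator:
    """Collects credentials step by step until the disclosure policy is satisfied.
    It knows nothing about the access control model in use."""
    def __init__(self, required_attributes):
        self.required = set(required_attributes)
        self.collected = {}

    def disclose(self, name, value):
        if name in self.required:
            self.collected[name] = value

    def established(self):
        return self.required <= set(self.collected)

class RoleBasedEngine:
    """One possible access control model; any other engine could consume the
    negotiated attributes instead, without changing the negotiator."""
    def __init__(self, role_permissions):
        self.role_permissions = role_permissions

    def decide(self, attributes, resource):
        role = attributes.get("role")
        return resource in self.role_permissions.get(role, set())

negotiator = TrustNegotiator({"role", "org"})
negotiator.disclose("role", "auditor")
negotiator.disclose("org", "UMU")

engine = RoleBasedEngine({"auditor": {"audit-log"}})
if negotiator.established():
    print(engine.decide(negotiator.collected, "audit-log"))  # True
```

Because the negotiator exposes only a set of verified attributes, the same negotiation module could front a role-based, attribute-based or location-based engine, which is the independence the gap calls for.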
2.4 Use of AOP to Enforce Security Requirements
The central idea of AOP (Aspect Oriented Programming) is to adapt or augment the behaviour of a base program by adding “aspects” that specify additional requirements (e.g. security requirements). These requirements are then composed (“woven”, in AOP terminology) at certain execution points called join points (e.g. the reception of a message). The main novelty of AOP with respect to other modularisation techniques is that weaving follows the “obliviousness principle” introduced by Filman [55]: the base code affected by an aspect is oblivious to the fact that its behaviour is being modified by an external module (the aspect “advice”). AOP can now be considered a mature software technology, with several existing approaches, of which the AspectJ language [56] is the best known.
In the aspect-orientation community, security is a classic example of a concern modelled as an aspect. In [57], a thorough study analysing the feasibility and benefits of applying AOP to model security-related concerns is presented as one of the results of the NoE AOSD-Europe project. For each security sub-concern (e.g. availability, integrity, confidentiality), this document gives a coherent set of guidelines on how to apply aspects successfully. Based on these recommendations, we can claim that AOP can address security-related issues better than other software development technologies. In the security community, AOP has already been suggested as a technique for enforcing a security policy in a given application [58] [59]. At the design level, [60] presents an aspect-oriented methodology for incorporating security mechanisms into an application: the functionality of the application is described in a primary model, and the security mechanism, modelled as a security aspect, is composed with the primary model to obtain the security-treated model.
Although the benefits of using AOP to model and implement security separately from the systems that need to be secured have been extensively demonstrated by the aspect-oriented community, few efforts have been devoted to providing a generic, ready-to-use, aspect-oriented, policy-based security solution, which is the goal of the INTER-TRUST security framework. The first gap is the lack of a framework of security aspects that can be easily reused across different systems. The second gap is that few proposals provide support for dynamically enforcing security requirements, possibly at runtime, in order to adapt the security of a system to dynamic changes in its security policy (resulting, for instance, from a security policy negotiation).
Regarding the second gap, an AOP weaver capable of dynamically changing, at runtime, the aspects woven into a system must be used. Such weavers can be dynamic weavers (e.g. Spring AOP1 or JBoss AOP2) or static weavers capable of enabling/disabling the weaving of aspects at runtime (e.g. AspectJ). Recently, some works have explored the benefits of run-time addition and verification of security properties, but all of them propose formal languages rather than ready-to-use solutions. An expressive approach proposed in [65] uses AOP for run-time verification of security properties and addresses: i) the formal specification of properties; and ii) mechanisms to enforce the formally specified properties. Similarly to our approach, this solution supports the automatic generation of enforcement code from formal specifications. [66] and [67] use aspect-oriented extensions of coordination languages (i.e. AspectKE, AspectKE*) for mobile/distributed systems to enforce and reason about security policies.
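The runtime enabling/disabling of a security advice can be illustrated with a minimal AOP-flavoured sketch. The project targets Java weavers such as Spring AOP or AspectJ; the Python decorator below only mimics the idea of toggling an aspect at a join point without touching the base code, and every name in it is hypothetical.

```python
import functools

ACTIVE_ASPECTS = {"authz": True}   # toggled at runtime, e.g. after a policy change

def security_aspect(name, advice):
    """Weave `advice` around a join point; the advice runs only while enabled."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if ACTIVE_ASPECTS.get(name):
                advice(fn.__name__, args)      # before-advice at the join point
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def check_authorised(join_point, args):
    user = args[0]
    if user != "alice":
        raise PermissionError(f"{user} denied at {join_point}")

@security_aspect("authz", check_authorised)
def read_ballot(user, ballot_id):
    # Base code: oblivious to the security check woven around it.
    return f"ballot {ballot_id}"

print(read_ballot("alice", 7))      # advice passes, base behaviour runs
ACTIVE_ASPECTS["authz"] = False     # "unweave" the aspect at runtime
print(read_ballot("bob", 7))        # base behaviour only, no check
```

A real weaver performs this interception at the bytecode or proxy level rather than via decorators, but the policy-driven toggle is exactly the dynamic enforcement the second gap refers to.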
2.5 Monitoring Techniques
The goal of security monitoring is to obtain improved visibility of the communicating or inter-operating systems under study. Many techniques can be applied, but they all presuppose the definition of a monitoring architecture (including the selection of the observation points at which traces are collected) and the description of the system's security requirements using, for instance, formal specification languages. Two types of monitoring will be considered in Inter-Trust: i) monitoring of the network communication between inter-operating systems (which we will refer to as black-box monitoring); and ii) monitoring of the application execution (which we will refer to as white-box monitoring).
Regarding black-box monitoring, different network monitoring techniques exist in the literature, based for instance on SNMP [68] [69], Deep Packet Inspection (DPI) [70] and invariants [82]. DPI is a technique for completely analysing communication packets (both headers and payloads) and can be useful for the security analysis of network traffic and for detecting and preventing intrusions (IDPS, Intrusion Detection and Prevention Systems)3. Most of the techniques used depend on
1 http://static.springsource.org/spring/docs/3.0.x/reference/aop.html
2 http://docs.jboss.org/jbossaop/docs/2.0.0.GA/docs/aspect-‐framework/userguide/en/html/index.html
3 http://csrc.nist.gov/publications/nistpubs/800-‐94/SP800-‐94.pdf
pattern matching to detect intrusions or attacks (e.g. SNORT4), but very few use correlation of events to generate alarms (e.g. BRO5).
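The difference between pure pattern matching and event correlation can be made concrete with a small sketch: several individually benign events are combined, over a time window, into one alarm. The event fields, window and threshold below are illustrative, not taken from any particular IDS.

```python
from collections import defaultdict

WINDOW_S = 60    # sliding correlation window, in seconds (illustrative)
THRESHOLD = 3    # events from one source needed to raise an alarm

def correlate(events):
    """Raise an alarm when one source produces >= THRESHOLD login failures
    inside a sliding WINDOW_S window. Each event is (time, source, kind)."""
    by_src = defaultdict(list)
    alarms = []
    for t, src, kind in sorted(events):
        if kind != "login_failure":
            continue
        times = by_src[src]
        times.append(t)
        # drop events that fell out of the window
        by_src[src] = times = [x for x in times if t - x <= WINDOW_S]
        if len(times) >= THRESHOLD:
            alarms.append((t, src, "possible brute force"))
    return alarms

events = [(0, "10.0.0.5", "login_failure"),
          (20, "10.0.0.5", "login_failure"),
          (30, "10.0.0.9", "login_failure"),
          (45, "10.0.0.5", "login_failure")]
print(correlate(events))  # [(45, '10.0.0.5', 'possible brute force')]
```

A signature-matching tool would treat each failed login in isolation; correlation is what lets a monitor such as BRO see the pattern across events.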
White-box monitoring is similar to run-time verification, where the application is analysed during its execution. Several techniques exist, including code instrumentation using just-in-time compilation (e.g. Valgrind6), AOP [71] [72], debugging tools (e.g. GDB7), etc. [71] proposes the use of aspect-oriented monitoring approaches for validating and testing software against constraints specified in an associated UML design model. In Inter-Trust, we will use AOP techniques with well-determined pointcuts to obtain information on the system under study and its environment. This information will be analysed to detect security violations, generating alarms and triggering reaction strategies.
Summarising, the INTER-TRUST project will provide the following functionality to cover the aforementioned deficiencies associated with security monitoring:
• Methods and tools to validate secure interoperability based on fuzz testing techniques, as well as suitable models able to guide test vector generation and fuzz testing strategies, combined with monitoring and active testing techniques. Consequently, when a new secure interoperability property is introduced, the fuzz testing tools will be able to use its description automatically and test for the new property without the tool software itself needing to be updated.
• By complementing the monitoring and testing tools with fuzzing techniques, INTER-‐TRUST will create mechanisms and methods to assist software developers in eliminating security flaws during the pre-‐deployment/test phase of product development.
2.6 Active Testing, Fuzz Testing and Fault Removal Techniques
Active testing [73] is essentially focused on verifying the conformity of a given implementation to its specification: the implementation under test is executed with specific inputs, and the observed outputs are checked against the specification. Interoperability testing refers to finding out whether two different components of a heterogeneous system can work together [74]. While numerous fuzz testing techniques exist [75], fuzz testing for the purpose of ensuring secure interoperability is still an immature area. Conventional software fuzz testing can be used to evaluate a system's robustness against malformed inputs, which some consider an overall indicator of security; however, the technique is not specific to any particular security property and remains too coarse in general. The design of fault models for security vulnerabilities is also an area not yet tackled.
4 http://www.snort.org/
5 http://bro-‐ids.org/
6 http://valgrind.org/
7 http://www.gnu.org/software/gdb/
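Conventional mutation fuzzing, as discussed above, fits in a few lines. The target parser and seed input below are toy examples invented for illustration; real fuzzers add coverage feedback and far smarter mutation strategies.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip between one and four randomly chosen bytes of the input."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        i = rng.randrange(len(buf))
        buf[i] = rng.randrange(256)
    return bytes(buf)

def parse_length_prefixed(msg: bytes) -> bytes:
    """Toy target: the first byte declares the payload length."""
    n = msg[0]
    payload = msg[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return payload

def fuzz(seed: bytes, iterations: int = 1000) -> int:
    """Run mutated inputs against the target and count robustness failures."""
    rng = random.Random(42)              # fixed seed for reproducibility
    failures = 0
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            parse_length_prefixed(case)
        except ValueError:
            failures += 1                # error path surfaced by fuzzing
    return failures

print(fuzz(b"\x04abcd") > 0)  # True: mutations quickly hit the error path
```

This is exactly the generic robustness evaluation the text describes: it exercises error handling broadly but says nothing about any specific security property, which is why property-guided fuzzing remains a gap.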
2.7 Innovation Impact on Use-Cases
2.7.1 Remote Voting
One of the scenarios in which pervasive computing is attracting growing interest is remote voting [76], motivated by the requirement of providing all voters with universal and equal access to the voting process. This requirement has increased the interest in providing multiple remote electronic channels. Originally, remote electronic voting platforms were conceived for voting terminals with enough processing resources to implement cryptographic operations. These performance requirements were motivated by the strong security requirements of any voting process, which can only be achieved by means of advanced cryptographic protocols. Therefore, the main supported voting platform was initially a standard computer with an Internet connection. This restriction generated the so-called “digital divide” in the voting process: only those voters having the proper infrastructure (computer and Internet connectivity) could access the remote e-voting process.
To solve this issue, some voting systems were designed around cryptographic protocols that support devices without any processing capacity, such as landline phones. These cryptographic protocols, commonly known as pollsterless [77], deliver to voters some sort of pre-encrypted ballot that spares them from performing the encryption steps on their voting device. The approach consists of generating a unique code pair for each candidate (one code for casting a vote and another for verifying that the vote was properly cast) based on a unique identifier of the ballot. For each voter, a ballot with the candidates' code pairs and the ballot identifier is printed and delivered through a physical channel (e.g., the postal service). To cast a vote, the voter only needs to send the first code of the chosen candidate together with the ballot identifier to the vote collector server. The server stores the received candidate code and returns the second code of that candidate to the voter (using the received ballot identifier to find it). The voter verifies that the received return code matches the second candidate code, confirming that the voting transaction finished successfully.
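The return-code scheme just described can be sketched as follows. Real pollsterless systems derive the codes cryptographically from the ballot identifier; here an HMAC over the ballot id and candidate stands in for that derivation, and the key, code length and candidate names are all illustrative.

```python
import hashlib
import hmac

ELECTION_KEY = b"demo-key"   # hypothetical per-election secret

def code_pair(ballot_id: str, candidate: str):
    """Derive (cast_code, return_code) for one candidate on one ballot."""
    def code(tag):
        mac = hmac.new(ELECTION_KEY,
                       f"{ballot_id}|{candidate}|{tag}".encode(),
                       hashlib.sha256)
        return mac.hexdigest()[:6]       # short code printable on a paper ballot
    return code("cast"), code("return")

# Printed ballot delivered to the voter through a physical channel:
ballot_id = "B-1029"
printed = {cand: code_pair(ballot_id, cand) for cand in ("alice", "bob")}

def collector(ballot_id, cast_code):
    """Vote collector: finds the candidate matching the cast code and
    replies with the corresponding return code."""
    for cand in ("alice", "bob"):
        c, r = code_pair(ballot_id, cand)
        if c == cast_code:
            return r
    return None

# The voter sends the cast code and checks the returned code against the ballot.
cast, expected_return = printed["alice"]
print(collector(ballot_id, cast) == expected_return)  # True: vote confirmed
```

Because the voter only types short codes, any channel that can transmit a few digits (touch tones, SMS) suffices, which is both the strength and, as discussed next, the usability limitation of the scheme.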
The simplicity of this system makes it usable on any device: landline phones (by entering the codes using the touch tones), mobile phones (by sending the codes through SMS), etc. However, from the pervasive point of view, these systems have a significant limitation: they support multiple voting channels by forcing all of them to use the same code-based voting process. In other words, a voter who wants to cast a vote through a computer is required to enter the code manually instead of selecting a candidate through a graphical user interface.
The fast evolution of mobile phones in terms of processing capacity made it possible to use these devices in the same way as computers, i.e. to process cryptographic operations. However, these devices still have performance limitations when implementing complex cryptographic processes. A workaround is to delegate part of the cryptographic processing to an external server, so that the voting process is implemented with the support of an external facility that performs the cryptographic operations not supported by the device. Mori and Sako [78] made a proposal based on this delegation approach, in which a mobile phone executes a voting application using resources from a site located on a secure network whenever heavy cryptographic computations (e.g., several zero-knowledge proofs) are required. However, this proposal was focused on mobile terminals only and did not consider using the same application on computers (only the same protocol).
To solve this weakness, the INTER-TRUST project is required to develop a security framework allowing the same voting application to be used independently of the environment in which it is executed. The objective is to use the proposed negotiation of security policies and the adaptability of the code to decide whether cryptographic operations can be delegated to an external server. In this way, the same code adapts to the security constraints of the execution environment when implementing the cryptographic protocol.
This novel approach will facilitate the uptake of e-voting in pervasive environments without requiring different specialised platforms. Thus, the INTER-TRUST approach will open the way for new e-voting applications that facilitate participation and improve citizens' trust in the e-voting process.
2.7.2 Security and Privacy in V2X
ITS and cooperative mobility systems8 involve the exchange of information among vehicles and RSUs (Roadside Units), and ensuring both the security and the privacy of V2V and V2I communications is currently recognised as a key requirement for the successful deployment of such systems in Europe [79]. Accordingly, a large number of ITS-related projects have recently dealt with these issues or have ongoing activities in this field, such as NoW (Network on Wheels), SEVECOM (SEcure VEhicular COMmunications), EVITA (E-safety Vehicle Intrusion Protected Applications), PRECIOSA (Privacy Enabled Capability in Co-operative Systems and Safety Applications), OVERSEE (Open Vehicular Secure Platform) and PRESERVE (Preparing Secure Vehicle-to-X Communication Systems). All of these projects focus on preventing privacy violations and denial of service attacks against the system and on V2V authentication of broadcast messages with embedded cryptographic mechanisms; to preserve driver privacy, the use of changing and revocable pseudonyms is assumed. The IEEE P1609.2 standard, part of the DSRC (Dedicated Short Range Communications) standards for vehicular communications, also proposes the use of asymmetric cryptography to sign safety messages with frequently changing keys so that anonymity is preserved.
Applications implemented for wired networks that have to run in a wireless context, such as web access from mobile platforms and database access, are not directly suitable for pervasive and mobile environments: the very nature of wireless communications exposes several security issues in a V2X scenario [80], including jamming, forgery, in-transit traffic tampering, impersonation, privacy violation and on-board tampering. All of these vulnerabilities in vehicular networks expose the need to develop new secure frameworks that assure privacy, confidentiality, integrity, DoS defence and non-repudiation.
To satisfy all these security requirements and to manage the corresponding protocols, two main approaches exist. The first is based on a PKI (Public Key Infrastructure) that assesses the reliability of a node by means of message signatures, as described in [81]. The second is based on the concept of trust, a more decentralised model, as can be seen in [83]. However, these proposals also need to define security policies that grant access to infrastructure services and resources using new access control models such as LBAC (Location-Based Access Control), which tackle the problem of privacy and security in systems such as VANETs, where, depending on the spatial position of a user, it may be relevant to ensure the privacy of his or her personal information. Languages
8 Cooperative Mobility means the interconnection of vehicles and infrastructure, to create and share new kinds of information, leading to a better cooperation amongst drivers, vehicles and roadside systems.
like XACML (eXtensible Access Control Markup Language) are used to define security policies, and several models have been defined for usage control policies (NOMAD [50], UCON [49], etc.). However, a framework allowing the implementation and integration of such policies in applications is still to be defined and implemented.
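The pseudonym-rotation idea assumed by the projects above can be sketched briefly. IEEE 1609.2 signs messages with ECDSA under short-lived pseudonym certificates; to keep this self-contained, a keyed hash stands in for the signature, and the rotation period and message formats are illustrative.

```python
import hashlib
import hmac
import os

class PseudonymSigner:
    """Signs outgoing safety messages, switching pseudonym keys periodically
    so that a vehicle's messages cannot be linked to one long-term identity."""
    def __init__(self, rotation_period_s=300):
        self.rotation_period_s = rotation_period_s
        self._rotate(now=0.0)

    def _rotate(self, now):
        self.key = os.urandom(32)                      # fresh pseudonym key
        self.pseudonym_id = hashlib.sha256(self.key).hexdigest()[:8]
        self.valid_until = now + self.rotation_period_s

    def sign(self, message: bytes, now: float):
        if now >= self.valid_until:
            self._rotate(now)                          # unlinkable new identity
        tag = hmac.new(self.key, message, hashlib.sha256).hexdigest()
        return self.pseudonym_id, tag

signer = PseudonymSigner(rotation_period_s=300)
id1, _ = signer.sign(b"CAM: position update", now=10.0)
id2, _ = signer.sign(b"CAM: position update", now=400.0)  # after rotation
print(id1 != id2)  # True: consecutive periods use different pseudonyms
```

Revocation, certificate issuance and the choice of rotation period are exactly where the policy frameworks discussed above come in: whether and when to rotate is itself a security policy decision.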
3 General Threats to Network and Information Security
The technological evolution that computer networks are expected to experience in the forthcoming years will enable a pervasive environment in which every device (smartphones, vehicles, clothes, electrical appliances, etc.) is always connected to the Internet and constantly exchanging information with remote entities. Nevertheless, a fully operational environment will only be achieved if the research community proves able to solve the different challenges posed by future computer networks. In particular, the security threats facing computer networks will become more technically sophisticated, better organised and harder to detect, while the consequences of failing to thwart these attacks may be disastrous. Among others, we can mention the economic consequences of financial fraud, the impact on the reliability of critical infrastructure and the compromise of national security.
The objective of this section is to provide the reader with a general overview of the assets needing protection in computer networks as well as defining the threat space. This information will be used as input to outline not only relevant security attacks in computer networks but also how they can be mitigated through a well-‐defined set of computer security services.
3.1 Assets and Network Threats
Security is a critical issue affecting users, service providers, network operators and, in general, any network entity. For this reason, security mechanisms must be put into practice to protect the assets of businesses and electronic services. More precisely, we identify the following:
• The data belonging to companies, organizations and citizens using electronic services.
• Resources enabling the electronic business or service activity itself such as equipment, network infrastructure, information, etc.
• Data and information destined to control the correct operation of equipment and network systems.
• Information owned by users accessing electronic services. In this group, authentication credentials are among the most critical assets to protect; others worth considering are the user's safety, health, reputation and money.
Different security threats to the aforementioned assets can be identified. In the following, we provide a classification of the most important threats according to the affected system (application or network infrastructure). This classification is based on the security report [1] (see chapter 5) developed by the European Committee for Standardization (CEN); readers interested in a more complete description and analysis of these threats should refer to that specification.
System and Application Threats
[T1] Network communications can be intercepted and the electronic information copied or modified. This can cause damage through invasion of the privacy of individuals or through
the exploitation of the intercepted data. Additionally, the modification of intercepted data could also threaten the health and life of patients.
[T2] Attackers can also try to gain unauthorized access to both computers and computer networks with the malicious intent of copying, modifying or destroying data stored in network equipment and in mobile devices such as smartphones, tablet PCs, laptops, etc.
[T3] Attackers can also try to disable computers or mobile devices, delete or modify data, or reprogram equipment through malicious software, which is typically installed on the user's equipment without authorization. Some recent virus attacks have been extremely destructive and costly.
[T4] Impersonation of people or entities is another security threat causing substantial damage. For example, an attacker masquerading as a trusted network service may cause users to download malicious software. Similarly, network users may be subject to identity theft, giving attackers the opportunity not only to receive confidential information but also to take malicious actions such as repudiating contracts or sending confidential information to the wrong person.
[T5] Apart from the threats directly caused by malicious attackers trying to corrupt the integrity of the network communication system, there exist unforeseen and unintentional security incidents that can result in the loss of or damage to assets, due to hardware or software failures, human error, unexpected user behaviour or natural disasters (floods, storms and earthquakes).
[T6] Threats related to illegal content distribution (i.e. copying and/or forwarding information on the Internet to unauthorized parties) threaten copyrights and content distribution services available on the network.
Infrastructure Threats
[T7] Some infrastructure threats aim at disrupting the provision of services at the national or international infrastructure level. This includes supply of services such as those relating to telecoms and networks, medical and healthcare, financial, transport, utilities (e.g. water, electricity and gas), emergency facilities (e.g. police, fire fighting) and food supply chain. These activities can occur under the umbrella of natural disasters, acts of terrorism, strikes and other disruptive activities such as criminal incidents and epidemics.
[T8] The network technologies used to implement electronic communication over the Internet are a basic tool for attackers developing disruptive attacks. For example, the transition from the traditional telephone network to Voice over IP (VoIP) gives malicious entities the opportunity to conduct new kinds of attacks intended to collapse the network infrastructure, such as VoIP spamming or denial of service (DoS).
3.2 Common Network Attacks
The different threats identified in Section 3.1 can materialise as security attacks deliberately developed by malicious entities with the aim of compromising the integrity
of a communication system. In the following, we provide a list of the most relevant attacks that may occur in existing networks; further details about these and other attacks can be found in [2].
Identity Spoofing
Any device connected to the Internet necessarily exchanges information with remote entities by sending messages over the network. These messages carry the information needed to route them properly to their destination; for example, in current IP networks, data packets carry the sender's IP address as well as application-layer data. If attackers obtain control over the software running on a network device (e.g. a router), they can easily modify the information placed in the packets. A typical attack is to change the source address in the IP packet header. This attack, known as identity spoofing, makes a message appear to come from a source other than the real one.
Sniffing
Packet sniffing refers to the interception of data packets traversing a network. Nowadays, freely available sniffer programs can pick up communication packets travelling over a link. These programs work at the link layer in combination with a network interface card (NIC) able to listen to and capture the information transmitted over the link. An attacker performing this attack on a backbone device or network aggregation point will be able to monitor the communications of a considerable number of users.
Eavesdropping
Before selecting and implementing a specific attack, attackers want to know the characteristics of the target network, such as the IP addresses of network entities, the operating systems they use and the network services being offered. This information helps attackers design more sophisticated attacks specifically oriented to exploit the weaknesses detected in the network. This information-gathering process is referred to as eavesdropping.
Eavesdropping is a critical attack since the majority of network communications send information in the clear, which allows an attacker to capture and interpret the data traffic. How to avoid this attack is an important security problem that network administrators have to face.
Hijacking
This attack exploits a weakness of the TCP/IP protocol stack in the way headers are constructed and information is routed through the network. A hijacking attack happens when an attacker seizes control of a network communication. Depending on the target resource, different types of hijacking attack exist: session hijacking (seizing control of a session), page hijacking (seizing control of a web page on a server), etc.
Attackers are highly interested in taking control of communications by implementing the well-known man-in-the-middle attack. This attack is a form of active eavesdropping in which the attacker makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker.
Trojans
An alternative strategy employed by attackers relies on special programs called Trojans. These programs look like ordinary software but actually perform unintended or malicious actions behind the scenes when launched. Trojan-based attacks are very effective because attackers craft these programs so that they look, operate and appear the same as the programs they compromise.
DoS
A denial of service attack aims at decreasing the availability of network services by interrupting the activity of network servers, typically by flooding the network with useless traffic. For example, a common DoS attack consists of issuing illegitimate requests to a server in order to cause a work overload and prevent the server from responding to legitimate requests. The consequences of a DoS attack include slowed network performance, the unavailability of a particular web site, or users being prevented from accessing network services. A DoS attack can be performed in a number of ways; according to the target resource, we distinguish three basic types of attack:
• Consumption of computational resources, such as bandwidth, disk space or CPU time.
• Disruption of configuration information, such as routing information.
• Disruption of physical network components.
In recent years, an evolution of the typical DoS attack, called a Distributed Denial of Service (DDoS) attack, has come into prominence due to its much more damaging effects. While a standard DoS attack originates from a single machine or small cluster, a DDoS attack originates from a very large number of machines. The owners of the machines involved in a DDoS attack are often unaware that their machines are being used for malicious purposes, having been previously hacked and hijacked (through a Trojan or other backdoor attack) and made part of what is known as a “botnet”. As a black market industry of botnet leasing proliferates, these attacks are expected to grow in number and impact.
Social Engineering
The human element has been called the weakest link in network security. Social engineering attacks exploit this vulnerability by placing the human element in the network-breaching loop and using it as a weapon: attackers gain access to information systems through persuasion. For example, an attacker can impersonate a company director or manager who, alleging some problem caused by business travel, pressures the help desk into resetting his or her password.
3.3 Computer Security Advisory Services
The threats T1 to T8 described in Section 3.1 can be mitigated by applying a set of computer security advisory and support services. Each of these security services comprises a number of technical, procedural and policy security controls. For the purposes of this deliverable, the security services are defined as follows; it is worth mentioning that they are adapted from the security report developed by the European Committee for Standardization (CEN) [1].
Inter-Trust – ICT FP7 – G.A. 317731 INTER-‐TRUST-‐T2.2-‐UMU-‐DELV-‐D2.2.1-‐GapStandardsAn-‐First-‐V1.00
© INTER-TRUST Consortium 25 / 69
• Registration, Authentication and Authorization Services. These services provide the means to ensure that users are uniquely and unambiguously identified and granted access only to those assets for which they have been authorized. The overall security of the business and electronic services and their assets (described in section 3.1) rely ultimately on the capability to authenticate users of the service. The service also includes the authentication of all entities other than a person, such as organizations, systems, devices, applications/services, or components.
• Confidentiality and Privacy Services. These services provide the means whereby business information is stored and transferred securely (including possibly the identities of participants). They also ensure that private information (such as an individual's medical information) is protected in accordance with legislation such as data protection.
• Trust Services. These services are required to ensure that communication transactions are properly traceable and accountable to authenticated individuals and cannot be subsequently disavowed. They are the services that enable business service providers and business clients to make commitments in electronic form. These services might also provide anonymization and pseudonymization, as well as directory services.
• Network and Information Security Management Services. These services are required to ensure that appropriate management controls, processes and procedures are in place in addition to the technical security measures to protect the system and network infrastructure. The security controls in this section include policies, organizational controls, asset management, human resources security, physical security, operational and communications controls, controls against malicious code, the secure design and configuration of applications, incident management and business continuity.
• Assurance Services. These services provide network users with confidence that all technical (hardware and software applications) and non-technical (physical, personal and procedural) security measures have been designed, configured and are being operated in a secure manner in accordance with the relevant standards, and provide protection against the assessed risk to the services. Following a process of independent audit or evaluation, the result can be an improved security management system or a more secure product; this might also be indicated by a certificate (note that 'certificate' in this context is not the same as a 'digital certificate' used to prove ownership of a public key).
Table 1 shows how these security services can be used to address the security threats T1 to T8 previously identified. The symbol ✔ denotes that a given threat is mitigated by a security service, while the symbol ✖ denotes that it is not. As can be observed, while some threats (e.g. T1) can be addressed by several different security services, others (e.g. T3) can only be defeated by applying one specific security service. Note that assurance services are not included in the table, since their purpose is to establish what confidence can be placed in the security measures provided by the other services.
Security Services

| Threats | Registration, Authentication and Authorization | Confidentiality and Privacy | Trust | Network and Information Security Management |
|---------|-----------------------------------------------|-----------------------------|-------|---------------------------------------------|
| T1      | ✖                                             | ✔                           | ✖     | ✔                                           |
| T2      | ✔                                             | ✔                           | ✖     | ✔                                           |
| T3      | ✖                                             | ✖                           | ✖     | ✔                                           |
| T4      | ✔                                             | ✖                           | ✔     | ✖                                           |
| T5      | ✖                                             | ✖                           | ✖     | ✔                                           |
| T6      | ✖                                             | ✖                           | ✖     | ✔                                           |
| T7      | ✖                                             | ✖                           | ✖     | ✔                                           |
| T8      | ✖                                             | ✖                           | ✖     | ✔                                           |

Table 1. Relation between threats and security services
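The threat-to-service mapping of Table 1 can be captured directly in code. The sketch below (Python; the threat identifiers and service labels are shorthand for the entries in the table, not part of any standard) shows how a framework might look up which services mitigate a given threat:

```python
# Mapping of threats T1-T8 to the security services that mitigate them,
# transcribed from Table 1. Service names are shorthand labels.
MITIGATIONS = {
    "T1": {"Confidentiality and Privacy",
           "Network and Information Security Management"},
    "T2": {"Registration, Authentication and Authorization",
           "Confidentiality and Privacy",
           "Network and Information Security Management"},
    "T3": {"Network and Information Security Management"},
    "T4": {"Registration, Authentication and Authorization", "Trust"},
    "T5": {"Network and Information Security Management"},
    "T6": {"Network and Information Security Management"},
    "T7": {"Network and Information Security Management"},
    "T8": {"Network and Information Security Management"},
}

def services_for(threat: str) -> set:
    """Return the set of security services that mitigate the given threat."""
    return MITIGATIONS.get(threat, set())
```

A lookup such as `services_for("T4")` returns both the Registration, Authentication and Authorization services and the Trust services, matching the ✔ marks in the T4 row.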
4 Gap and Standards Analysis for Cooperative ITS
This section develops a gap and standards analysis of the V2X/ITS scenario, which is one of the use cases where the results achieved by the INTER-TRUST project will be validated. This study takes as input the INTER-TRUST requirements previously described in section 2 and identifies the concrete security needs that must be satisfied by the INTER-TRUST framework.
This section is structured as follows. First, we provide a brief overview of cooperative ITS systems and present the different entities and communications that may appear in ITS scenarios. Next, we identify the ITS application classes and the security requirements demanded by these applications to achieve a secure driving experience. Finally, these requirements are analysed to identify the challenges posed by ITS applications in relation to INTER-TRUST.
4.1 Overview of Cooperative ITS
Cooperative ITS constitutes a research area that has been extensively explored. Research efforts have been driven by different bodies such as academic research forums, standardization organizations and research projects led by world-leading research powers located in Europe, the USA and Japan [7]. This enormous interest demonstrates that we are not facing the far-fetched goals of a small group of researchers or companies. Instead, C-ITS represents a relevant research topic expected to provide numerous advantages, such as assistance to vehicle operation, vehicle traffic management and road safety provisioning.
Cooperative ITS improves the traditional standalone ITS system by developing a new vehicular environment where ITS stations share information and provide advice in order to improve safety, traffic efficiency and comfort. As described in [8], cooperative ITS joins together the following features:
• A common reference architecture.
• Sharing of information not only between any ITS stations but also between applications within a single ITS station.
• Sharing of resources (communication, positioning, security, ...) by applications within an ITS station.
• The authorized use of information for purposes other than the original intent.
• The support of multiple applications running simultaneously.
In order to define a unified framework and harmonize concepts related to ITS, the European ITS community has specified a common communication architecture for C-ITS under the action of research projects funded by the European FP6/FP7 programmes, the Car-to-Car Communication Consortium (C2CC) and international standardization bodies (mainly ISO TC204 WG16 and ETSI TC ITS). The developed ITS communication architecture is suitable for a variety of communication scenarios (vehicle-based, roadside-based and Internet-based) and can be applied over a diversity of access technologies such as 802.11p, infra-red, 2G/3G and satellite. Furthermore, the architecture has been specially conceived to support a variety of ITS application types, such as road safety, traffic efficiency and comfort/infotainment, deployed in various continents or countries and ruled by distinct policies. The communication
architecture relies on the concept of ITS station reference architecture [9] [10]. The ITS Station (ITS-‐S) represents a generic entity that can implement different functionalities on the ITS network.
Figure 1. ITS station reference architecture (extracted from [9])
As depicted in Figure 1, the ITS station follows a layered communication architecture comprising modules analogous to those of the traditional OSI model: access, networking & transport, facilities and applications. Additionally, two cross-layer entities are defined: the Management Entity (ME) and the Security Entity (SE). While the former is in charge of managing communications in the ITS station, the latter provides security services to the different modules. These functional blocks are interconnected through Service Access Points (SAPs) that allow the exchange of information between layers and entities.
Depending on the role played, there exist different types of ITS stations [11] [12] (see Figure 2):
• Personal ITS-S: ITS station located in a hand-held device (e.g. a smartphone).
• Vehicle ITS-S: ITS station located in a vehicle.
• Roadside ITS-S: ITS station located in a roadside device (e.g. an electric charging station).
• Central ITS-S: ITS station located in the road operator's infrastructure (e.g. a road operator control centre).
Figure 2. ITS station types (extracted from [9])
4.1.1 Cooperative ITS Applications
Cooperative ITS applications can be classified into three different categories:
• Road safety: these applications are primarily employed to decrease the probability of traffic accidents and the loss of life of the vehicle's passengers. They primarily provide information and assistance to drivers to avoid collisions with other vehicles and obstacles present on the road. This category includes both applications such as emergency braking or lane departure notification, which require short-range, time-critical communications for immediate action by the vehicles, and longer-range applications such as road hazard events (black ice, vehicle in the wrong direction) or road works, which require non-time-critical communications.
• Traffic efficiency: these applications are focused on improving the vehicle traffic flow, traffic coordination and traffic assistance. They typically provide updated local information to drivers, as well as maps to optimize the journey. Examples include road itinerary planning, green wave and road diversion, which require a constant exchange of information between vehicles, the roadside infrastructure and traffic information servers.
• Other applications: these are not necessarily ITS-specific applications, but must be supported in order to provide a better transportation experience to road users. For example, this category includes applications intended to improve the comfort of vehicle occupants, and is often referred to as value added applications. Infotainment services like Internet access or media downloading are widely used examples.
The main difference between Cooperative ITS applications and conventional ITS applications is that Cooperative ITS applications rely on a common communication architecture between all connected entities allowing them to exchange all types of information.
4.1.2 Cooperative ITS Facilities
The ITS station facilities layer is an intermediate layer between the ITS station networking & transport layer and the applications. It offers applications access to information collected from other ITS stations (vehicles, roadside) and frees them from the message signalling necessary to transmit and process data exchanged between ITS stations in a broadcast fashion. The immediate benefit of the facilities layer is the sharing of data between various applications. Without it, ITS applications would broadcast potentially similar information, thereby increasing the consumption of network resources and processing power.
The current facilities have been defined by the ETSI standardization body and include two different protocols: (1) Co-operative Awareness Messages (CAM) [20], 1-hop broadcast messages transmitted in the immediate vicinity, mainly for time-critical road safety purposes; and (2) Decentralized Environmental Notification Messages (DENM) [21], multi-hop broadcast messages transmitted in a given geographical area. Recently, the CEN/TC278/WG16 and ISO/TC204/WG18 working groups have declared their intention of extending the facilities layer with the definition of seven additional message sets intended to facilitate the exchange of information between the vehicle and the infrastructure:
• Map data (MAP): used to provide vehicles with the full geometric layout of items like complex intersections, high-speed curve outlines and segments of roadways.
• Probe Data Management (PDM): used to control the type of data collected and sent by the vehicle ITS station to the local roadside ITS station or the central ITS station.
• Probe Vehicle Data (PVD): used to communicate status about a vehicle to a roadside ITS-S or central ITS-S, allowing the collection of information about typical vehicle travelling behaviours along a segment of road.
• Signal Phase and Timing (SPaT): used to convey the current status of one or more signalized intersections. The receiver of this message (a vehicle ITS-S) can determine the state of the signal phasing and when the next phase is expected to occur.
• Signal Request Message (SRM): sent by a vehicle ITS-S to the roadside ITS-S at a signalized intersection, or to a central ITS-S. It is used for either a priority signal request or a preemption signal request.
• Signal Status Message (SSM): sent by a roadside ITS-S at a signalized intersection or by a central ITS-S. It is used to relate the current status of the signal and any collection of pending or active preemption or priority events acknowledged by the controller. This message allows other users to determine their ranking for any request they have made.
• In-Vehicle Information (IVI): sent by a roadside ITS-S or central ITS-S to a vehicle ITS-S. It is used to inform drivers about road and traffic conditions, consistently with road authorities'/operators' requirements and with the information that would be displayed on a road sign or variable message sign. Additionally, this protocol is intended to deliver contextual speed information to road users to improve safety, support traffic management and reduce greenhouse gas emissions.
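To illustrate the distinction between the two existing facilities protocols, the sketch below models CAM and DENM as simple data types. The field names are a simplified assumption for illustration only; the real messages are defined in ASN.1 by the ETSI standards cited above and carry many more fields:

```python
from dataclasses import dataclass

# Simplified sketches of the two ETSI facilities-layer messages.
# Field names are illustrative, not the ASN.1 definitions from [20]/[21].

@dataclass
class CAM:
    """Co-operative Awareness Message: 1-hop broadcast of the sender's state."""
    station_id: int
    timestamp_ms: int   # generation time
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float

@dataclass
class DENM:
    """Decentralized Environmental Notification Message: event-driven,
    multi-hop warning valid inside a geographical area."""
    station_id: int
    timestamp_ms: int
    event_type: str                 # e.g. "road_works", "black_ice"
    area_center: tuple              # (latitude, longitude)
    area_radius_m: float

def is_single_hop(msg) -> bool:
    # CAMs stay in the immediate vicinity; DENMs are forwarded multi-hop.
    return isinstance(msg, CAM)
```

The key design difference captured here is that a CAM describes the sending station itself, while a DENM describes an event bound to a geographical area and is therefore forwarded beyond the sender's direct radio range.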
4.2 ITS Architecture
Before analysing the security needs of ITS applications, we describe the vehicular communication scenario, specifying the participating entities, the role played by each entity and the relationships that may exist among them in terms of communication interfaces.
Figure 3 depicts a typical C-‐ITS network. According to the ISO/ETSI station reference architecture, three types of ITS stations (ITS-‐S) could exist: vehicle ITS-‐S, roadside ITS-‐S and central ITS-‐S. Without loss of generality, to simplify the security analysis carried out in subsequent sections, we simplify the number of nodes integrating each ITS-‐S type. In particular, we assume the following scenario:
• In the vehicle ITS station, there exists a node executing the ITS-S router functionality, called the Mobile Router (MR), which has historically been known as the On-Board Unit (OBU). The MR is in charge of providing network connectivity to internal ITS-S hosts or in-vehicle hosts (VH). Through the MR, in-vehicle hosts access the ITS services offered by the network operator, such as route planning or infotainment applications.
• The roadside ITS station consists only of an ITS-S router called the Access Router (AR), which has historically been known as the Roadside Unit (RSU). The AR plays an essential role in the C-ITS communication scenario since it is responsible for providing the vehicles with connectivity to the road operator network. In case the roadside ITS-S needs to implement some
functionality at either facilities or application layer, we assume an ITS-‐S host collocated within the AR (i.e. the AR is able to provide services beyond networking layer).
• In the central ITS station, we consider the existence of an ITS-S host called the Application Server (APPS), hosting all the services offered by the road operator to provide vehicles with an improved driving experience. These services are intended to improve road safety and traffic efficiency, as well as to offer services such as Internet access or multimedia downloading. In the central ITS-S we omit the remaining ITS station entities providing routing capabilities (i.e. ITS-S router, ITS-S border router and ITS-S gateway), since this functionality is not relevant from a security standpoint.
Figure 3. ITS architecture. Involved entities
The aforementioned entities establish different communication relationships to enable the exchange of information in a secure manner. This is depicted in Figure 4, where a simplified architecture is represented in functional terms by the overlay. As observed, there exist multiple communication interfaces, which are identified through reference points. Considering the type of service supported, the communication interfaces can be classified into two groups:
• Deployment of C-‐ITS services. In this group we find reference points that are necessary to implement specific services in vehicular networks.
o Reference point “A” allows the communication between MRs/OBUs and the APPS server located in the central ITS station.
o Reference point “B” allows the communication between in-‐vehicle hosts (VH) and the APPS server located in the central ITS station.
• Communication services. This group brings together those reference points not directly used to implement C-ITS services but necessary to achieve fully operational vehicular communications.
o Reference point “C” refers to the communication relationship between the in-‐vehicle host and the OBU. This communication link is typically based on a wireless communication technology like IEEE 802.11b/g/n.
o Reference point “D” refers to the communication relationship between vehicles. This communication link is implemented through a short/medium range wireless communication technology like IEEE 802.11p.
o Reference point “E” refers to the communication relationship between a vehicle and roadside stations which is realized by the MR/OBU and AR/RSU, respectively. Similarly to reference point “D”, this communication link is implemented through a short/medium range wireless communication technology like IEEE 802.11p.
o Reference point "F" refers to the communication relationship between the AR/RSU and the APPS server located in the central ITS station. This communication link is typically based on a wired technology providing high data rates (e.g. Gigabit Ethernet).
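The reference points described above can be summarized in a small data model. The sketch below (Python; the endpoint labels and group names are shorthand for the entities in the text, not identifiers from any standard) records each reference point's endpoints and group:

```python
# Reference points A-F of the simplified ITS architecture (Figure 4).
# Each entry: (endpoint 1, endpoint 2, group).
REFERENCE_POINTS = {
    "A": ("MR/OBU", "APPS",   "C-ITS service"),
    "B": ("VH",     "APPS",   "C-ITS service"),
    "C": ("VH",     "MR/OBU", "communication"),   # e.g. IEEE 802.11b/g/n
    "D": ("MR/OBU", "MR/OBU", "communication"),   # e.g. IEEE 802.11p (V2V)
    "E": ("MR/OBU", "AR/RSU", "communication"),   # e.g. IEEE 802.11p (V2I)
    "F": ("AR/RSU", "APPS",   "communication"),   # e.g. Gigabit Ethernet
}

def endpoints(ref_point: str) -> tuple:
    """Return the pair of entities connected by a reference point."""
    a, b, _ = REFERENCE_POINTS[ref_point]
    return a, b

def service_reference_points() -> list:
    """Return the reference points used to deploy C-ITS services."""
    return sorted(rp for rp, (_, _, group) in REFERENCE_POINTS.items()
                  if group == "C-ITS service")
```

Such a table is a convenient starting point when assigning security requirements per interface, since each reference point groups links with the same endpoints and threat exposure.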
Figure 4. ITS architecture. Communication interfaces with reference points.
4.3 ITS Application Classes and Use Cases
As described in section 4.1.1, the application layer of the ITS station reference architecture is in charge of implementing ITS services with the assistance of the facilities layer. Although Cooperative ITS potentially allows the instantiation of multiple applications aimed at providing safer, greener and more comfortable transportation systems, some priority must be established among the ITS services. In other words, it is necessary to identify those applications needing urgent attention from a deployment viewpoint. In this sense, the ETSI standardization body has taken the initiative and identified the so-called Basic Set of Applications (BSA), using either V2V or V2I communications. The BSA, specified in the ETSI TR 102 638 standard [13], is intended to guide research efforts in the development of future vehicular communication systems.
Table 2 describes the BSA. As we can observe, the BSA includes 7 relevant applications and covers 32 use cases dealing with road safety, traffic efficiency and infotainment services. For a detailed
description of each use case, please refer to Annex C of ETSI TR 102 638 standard [13]. These use cases will be considered in the development of cooperative ITS systems.
From the information depicted in Table 2, and according to the communication model, we can identify the following ITS application categories:
1) Cooperative awareness. The purpose of cooperative awareness messages is to allow ITS users to provide other users with information regarding their status and environment in order to improve road safety.
2) Static road hazard warning. Static local hazard warning messages are broadcast by fixed roadside ITS stations, usually to provide continuous information regarding a specific static condition which is relevant to road users.
3) Interactive local hazard warnings. Interactive local hazard warning messages support direct cooperation in specific hazardous situations. The basic model for these applications is that station A receives a cooperative awareness message from station B and then returns a message to station B requesting that it take a particular action. Based on this, there may be additional data exchanges.
4) Area hazard warnings. Area hazard warning messages are broadcast and then forwarded by the receiving stations using a position-based routing protocol (e.g. GeoNetworking). They are sent in an event-driven manner to inform about a specific event or condition in order to improve road safety.
5) Advertised services. Advertised services refer to services where a provider unit sends out a message of a particular type advertising that the service is being offered, and an ITS-S with the corresponding user application connects to the service. Advertisements are not application messages themselves, though they may contain information allowing the user application to decide whether to connect. For example, a service advertisement for entertainment services might contain an identifier for the media provider.
6) Local high-speed unicast services. Local high-speed unicast services are provided directly to vehicles that may be moving at high speed.
7) Local multicast services. Local multicast services are similar to local unicast services but use multicast communication.
8) Low-speed unicast services. Low-speed unicast services are non-time-critical services consumed at low (vehicle) speeds.
9) Distributed (networked) services. Distributed services are non-time-critical subscription services that are intended to be consumed by the user over long periods, such as the duration of a journey or even the lifetime of a vehicle.
In the remainder of the document we will refer to this classification, since analysing groups of applications that use the same communication model facilitates the identification of common security requirements.
Class: Active road safety
• Co-operative awareness
o Emergency vehicle warning: an emergency vehicle periodically broadcasts its position, speed and heading, as well as whether it has its siren on and/or its blue light (or equivalent) in use.
o Slow vehicle indication: a slow vehicle periodically broadcasts its presence and thus encourages other vehicles to overtake. The broadcast information contains an indication that this is a special type of vehicle, called "slow vehicle".
o Motorcycle approaching indication: a motorcycle periodically broadcasts its presence. If other vehicles detect an imminent danger of collision, a warning is issued. The broadcast information contains an indication that this is a special type of vehicle, called "motorcycle".
• Road hazard warning
o Emergency electronic brake lights: warn vehicles behind of a sudden slowdown.
o Wrong way driving warning: warn vehicles ahead of a detected violation of a one-way road.
o Stationary vehicle – accident / vehicle problem: a stationary vehicle at a potentially dangerous location periodically sends out a warning to other vehicles. The information may also be forwarded by available RSUs to a traffic management centre.
o Traffic condition warning: warn other vehicles about a detected, potentially dangerous traffic condition.
o Signal violation warning: when a signal violation is detected, all potentially affected vehicles are warned. The detection is done by the RSU.
o Roadwork warning: a mobile road infrastructure component distributes messages to warn affected vehicles about road works.
o Collision risk warning from RSU: detect potential collisions between vehicles that cannot directly communicate and warn the drivers.
o Decentralized floating car data: detect potential local dangers and send out warning messages to potentially affected vehicles. Possible warning reasons include precipitation, road adhesion, visibility or wind.
Class: Cooperative traffic efficiency
• Speed management
o Regulatory / contextual speed limits notification: roadside infrastructure broadcasts speed limits that can be regulatory (i.e. set by an authority) or contextual (e.g. a reduced limit due to weather conditions).
o Traffic light optimal speed advisory: a traffic light broadcasts timing data associated with its current state (e.g. the time remaining before switching between green, amber and red).
• Co-operative navigation
o Traffic information and recommended itinerary: broadcast traffic conditions. This action may cause vehicles to download the recommended itinerary.
o Enhanced route guidance and navigation: the RSU periodically sends service announcements containing links to "known navigation support servers".
o Limited access warning and detour notification: broadcast access restrictions, e.g. due to road works.
o In-vehicle signage: traffic sign information is broadcast to be displayed in the vehicle.
Class: Other applications
• Location based services
o Point of Interest notification: the RSU periodically sends information about local services. The vehicle may establish a unicast connection to request more information.
o Automatic access control and parking management: the RSU periodically broadcasts the presence of an access-controlled area. Vehicles requiring access provide credentials authorizing the access over unicast communications.
o Local electronic commerce: the RSU periodically broadcasts the presence of local electronic commerce. Vehicles requiring access provide credentials authorizing the access over unicast communications.
o Media downloading: the RSU periodically broadcasts the possibility to download media files. Vehicles establish a unicast connection and download the required media data.
• Communities services
o Insurance and financial services: the RSU periodically broadcasts the presence of insurance and financial services. Vehicles establish a unicast connection and use the services.
o Fleet management: the RSU periodically broadcasts the presence of fleet management service access. Vehicles establish a unicast connection and use the services.
• ITS station life cycle management
o Vehicle software / data provisioning and update: the RSU periodically broadcasts the presence of vehicle software access. Vehicles establish a unicast connection and use the services.
o Vehicle and RSU data calibration: the vehicle and RSU synchronize to enable optimal operation (e.g. local time).
Table 2. ITS application classes and use cases
4.4 Security Requirements of ITS Applications
4.4.1 ITS Security Objectives
Over the centuries, security in communications has been an issue of paramount importance. From traditional postal letters to today's communication networks, the parties involved in a transaction require a certain level of confidence and demand the accomplishment of certain security objectives to ensure, for example, that the transmitted information has not been changed and comes from the legitimate party.
Cooperative transportation systems are no exception to this need, and researchers are encouraged to ensure the complete security of vehicular communications. In fact, security is a critical aspect of vehicular applications to ensure, for example, the trustworthiness of the nodes alerting of an accident or providing traffic information. An adequate security level in vehicular communications is essential to protect against tampering or impersonation attacks, which may have disastrous effects in vehicular environments.
The intention of this section is to identify the security objectives that must be satisfied for each ITS application in order to achieve a secure vehicular communication environment. In the following we indicate the different security objectives that have been considered in our analysis. It is worth mentioning that we present a list covering a set of security objectives initially identified by ETSI in [15] that has been extended with other well-‐known security objectives in information systems [16].
• Confidentiality: this security objective ensures that information is kept secret from all but those who are authorized to see or to access it. In vehicular communications, this security objective assures that the information sent or received by an ITS station is not revealed to unauthorized parties.
• Integrity: this security objective ensures that information has not been altered by unauthorized or unknown means. In vehicular communications, it assures that the information sent or received by an ITS station is protected from unauthorized modification or manipulation during transmission.
• Message authenticity: this security objective corroborates the source of the information. In vehicular communications, thanks to this security objective an ITS station is unable to impersonate another in a communication. Consequently, ITS stations only receive and process information from authenticated entities.
• Access control: this security objective is defined as the process of limiting access to resources in such a manner that only authorized users can access them. In vehicular communications, access control ensures that application services will be accessed only by authorized ITS stations. The application of access control implies the following processes:
o Entity authentication: intended to ensure that a certain entity is really who she claims to be.
o Entity authorization: intended to determine what resources can be accessed by a certain entity and under what conditions. This process consists of the following actions:
§ Obtain information about the entity requesting access to the resource. Typically, this information is stored in the form of attributes by authorization centres.
§ Determine the access rights of the entity. This step relies on so-called security policies which, taking as input the user information previously gathered, determine whether the user is granted or denied access to the resource.
§ Enforce the application of access rights. This final action ensures that the specific conditions that limit the user’s access are applied.
• Timeliness: this security objective certifies that the information is not obsolete. In vehicular communications, timeliness is useful to ensure that the information received by an ITS station (e.g. a collision warning) is meaningful at the current time.
• Privacy: this security objective is concerned with the users’ right to protect personal information from unauthorized access. In vehicular communications, this is an essential security requirement to protect the private sphere of vehicle drivers and occupants. Privacy is a complex security objective including:
o Anonymity: a basic requirement when dealing with privacy, allowing an entity to remain unidentified. This is typically achieved by concealing any identification-related data, such as identities.
o Unlinkability: even when the user remains anonymous, eavesdroppers may be able to correlate the different messages that make up a network transaction. Unlinkability addresses this problem by preventing attackers from tracing the user's activity and, ultimately, from distinguishing the presence of (anonymous) users.
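As an illustration of the entity-authorization steps described above (attribute retrieval, policy evaluation and enforcement), the following Python sketch models them with a toy attribute store and policy table. All identifiers and policies are hypothetical and do not come from any ITS standard.

```python
# Toy sketch of the three authorization steps: (1) obtain the requester's
# attributes, (2) evaluate a security policy over them, (3) let the
# enforcement point act on the decision. Names are purely illustrative.

ATTRIBUTE_STORE = {  # attributes as held by a hypothetical authorization centre
    "its-station-42": {"role": "emergency_vehicle", "region": "EU"},
    "its-station-99": {"role": "private_vehicle", "region": "EU"},
}

# Security policy: resource -> predicate over the requester's attributes.
POLICIES = {
    "send_emergency_warning": lambda attrs: attrs.get("role") == "emergency_vehicle",
}

def authorize(station_id, resource):
    attrs = ATTRIBUTE_STORE.get(station_id, {})       # step 1: gather attributes
    policy = POLICIES.get(resource, lambda a: False)  # unknown resource: deny
    granted = policy(attrs)                           # step 2: evaluate policy
    return granted                                    # step 3: enforcement acts on this

print(authorize("its-station-42", "send_emergency_warning"))  # True
print(authorize("its-station-99", "send_emergency_warning"))  # False
```

Only an authorized emergency vehicle passes the policy check; any unknown station or resource is denied by default, which mirrors the enforcement step described above.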
4.4.2 Privacy Commitment
Privacy is a security objective that does not have direct consequences on road safety or traffic efficiency. However, user concerns about the privacy of their data are becoming increasingly important to service providers, making privacy a crucial objective that has to be fulfilled in order to gain the confidence of users and prevent potential invasions of their private lives.
As detailed above, in order to achieve security the cryptographic mechanisms of the ITS are required to guarantee message authenticity, access control and timeliness. However, in principle these objectives could seriously compromise the privacy of the user. Hence, solving the inherent conflict between authentication and privacy poses a significant cryptographic challenge. Most of the solutions proposed so far are based on anonymous certificates, although it is worth mentioning that some more elaborate solutions are based on group signatures:
• Anonymous certificates [84] [85]. Anonymous certificates provide pseudonyms to hide the real identities of users. Even though anonymous certificates do not contain any publicly known relationship to the true identities of the key holders, privacy can still be compromised by logging the messages containing a given key and tracking the sender until her identity is discovered. Therefore, anonymous certificates have to be replaced and managed in a proper way.
• Group signatures [86] [87] [88]. In a group signature scheme, there is a group manager (whose role can be separated into two parts: issuer and opener) who administers the group, and members may join or leave the group dynamically. After registering with the group, a member can anonymously sign any message on behalf of the group. A verifier can check the group signature with only the group public key but cannot determine which registered user generated the message.
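To illustrate why anonymous certificates have to be replaced and managed properly, the following toy sketch rotates a pseudonym after a fixed number of messages, so that long message logs cannot all be linked to a single identifier. The class name, the rotation threshold and the pseudonym format are all illustrative assumptions; real schemes draw pseudonyms from a PKI-issued pool.

```python
# Toy pseudonym rotation: a vehicle signs messages under short-lived
# pseudonyms; rotating them limits how long an eavesdropper can link
# messages to one identifier. Purely illustrative, not a standard scheme.
import secrets

class PseudonymManager:
    def __init__(self, messages_per_pseudonym=3):
        self.limit = messages_per_pseudonym
        self.used = 0
        self.current = self._new_pseudonym()

    def _new_pseudonym(self):
        self.used = 0
        return "pseudo-" + secrets.token_hex(4)  # stand-in for a PKI pseudonym

    def sender_id(self):
        # Rotate once the current pseudonym has signed `limit` messages.
        if self.used >= self.limit:
            self.current = self._new_pseudonym()
        self.used += 1
        return self.current

mgr = PseudonymManager(messages_per_pseudonym=3)
ids = [mgr.sender_id() for _ in range(6)]
print(len(set(ids)))  # 2: six messages are spread over two pseudonyms
```

An observer of all six messages can link at most three of them to any one pseudonym, which is exactly the linkability bound the rotation policy enforces.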
Privacy objectives in V2I and V2V systems differ somewhat. If the communication is V2I, the system must guarantee that an eavesdropper cannot decide whether two different messages come from the same vehicle. If the communication is V2V, the system must guarantee that deciding whether two different valid messages were generated by the same vehicle is computationally hard for everyone except the Central ITS. Moreover, privacy in V2V and V2I communication systems depends not only on the explicit messages exchanged between the entities during a communication, but also on how the Central ITS treats the information generated by the ITS and the kind of data analysis it performs. Specific privacy-preserving technologies, such as privacy-preserving data mining methods, would be required in the ITS in order to guarantee the privacy of users.
The liability of the ITS relies on the ability to revoke the anonymity of messages; hence, privacy should be conditional. That is, user-related information such as the license plate, current speed, current position, identification number and the like should be kept private from other users in the system, while authorized users (e.g., police officers) should have access to it. These privacy concerns are similar to those related to location-based services.
4.4.3 Analysis of Security Requirements for ITS Applications
Once the relevant ITS applications and the security objectives to be considered have been identified, in this section we analyse the security needs of the different ITS applications in order to ensure a secure operation in future Cooperative ITS. It is worth mentioning that the following analysis is based on the conclusions reached by ETSI Working Group 5 in the field of Cooperative ITS security. In particular, among the different specifications developed by this working group, we can highlight the ETSI TR 102 893 [14] and ETSI TS 102 940 [15] standards. The ETSI TR 102 893 document analyses vulnerabilities and threats of ITS applications, considering the conditions particular to vehicular communications, and outlines countermeasures to address them. This information is used as input by ETSI TS 102 940 to conduct a basic security analysis.
Table 3 presents a security analysis of the different application classes described in section 4.3. More precisely, for each of the security objectives introduced in section 4.4.1, we indicate whether the objective is mandatory (M), optional (O) or not required (N).
Cooperative awareness applications are intended to improve the safety of transportation systems. The information exchanged by this kind of application must be authentic, integrity protected and recent. Additionally, access control is required in order to restrict access to legitimate ITS stations (e.g. only
authorized emergency vehicles are allowed to send warnings to inform neighbouring vehicles about their presence). Since these applications typically send broadcast messages, there are no confidentiality requirements. Finally, privacy is a concern to be considered in cooperative awareness applications, since they exchange status information about vehicles (e.g. location, speed, direction) that can reveal personal information if it is systematically collected by eavesdroppers.
Static road hazard warnings are similar to cooperative awareness applications, with the difference that messages are produced by RSUs. In terms of integrity, message authenticity, timeliness and access control, the same requirements apply. Nevertheless, given the nature of these applications, where an RSU continuously sends warnings to neighbouring vehicles, neither confidentiality nor privacy requirements apply.
Interactive local hazard warning applications bring together those ITS applications providing direct cooperation in hazardous situations. For example, in pre-crash sensing, a vehicle A that predicts a high probability of crashing with a nearby vehicle B will ask vehicle B to take some action to avoid the impact or mitigate its consequences. As in the previously analysed applications, integrity, authenticity and timeliness of the exchanged information are essential requirements. The requirements on confidentiality and privacy of information, as well as on access control, are optional, since they depend on the specific type of application and the nature of the information exchanged.
Area hazard warning applications cover those use cases where the preservation of road safety requires the transmission of certain information in a specific geographical area. Like other road safety applications, integrity, authenticity and timeliness of the exchanged information are unavoidable security objectives. Since these applications transmit information about road events (e.g. accidents or roadworks), confidentiality, privacy and access control are not required.
ITS applications advertising services (e.g. public transport information, traffic information, parking management, etc.) have low security requirements. Information must be integrity protected and its source validated. Timeliness is also an appreciable requirement, so that the ITS station receiving the service announcement can be sure the service is active. Nevertheless, this kind of ITS application does not require strict timeliness control, unlike safety applications, where the significance of the information heavily depends on the delivery time. The remaining security requirements are not necessary when announcing services.
Finally, the remaining ITS application classes cover the different services offered to vehicles that are not related to the provision of road safety and traffic efficiency, such as electronic commerce, vehicle software updates or financial services. Depending on the communication model and the speed of the vehicle, we distinguish local high-speed unicast services, local multicast services, low-speed unicast services and distributed services. In general, these types of application will require integrity and authenticity of information as basic security objectives. The enforcement of other security objectives is optional and will depend on the security needs of the specific service. For example, while media downloading will require access control of the ITS user, financial services will additionally demand confidentiality.
| Application class | Confidentiality | Integrity | Message authenticity | Timeliness | Access control | Privacy |
|---|---|---|---|---|---|---|
| Cooperative awareness | N | M | M | M | M | M |
| Static road hazard warning | N | M | M | M | M | N |
| Interactive local hazard warnings | O | M | M | M | O | O |
| Area hazard warnings | N | M | M | M | N | N |
| Advertised services | N | M | M | M | N | N |
| Local high-speed unicast services | O | M | M | O | O | O |
| Local multicast services | O | M | M | O | O | O |
| Low-speed unicast services | O | M | M | O | O | O |
| Distributed services | O | M | M | O | O | O |

Table 3. Security requirements for ITS applications
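For readers who want to work with the requirement matrix programmatically, Table 3 can be expressed as data. This encoding is our own convenience representation, not part of any ETSI specification.

```python
# Table 3 as data: M = mandatory, O = optional, N = not required.
# The value string for each application class lists the levels in the
# same column order as the table header.

OBJECTIVES = ["confidentiality", "integrity", "authenticity",
              "timeliness", "access_control", "privacy"]

REQUIREMENTS = {
    "cooperative_awareness":      "N M M M M M",
    "static_road_hazard_warning": "N M M M M N",
    "interactive_local_hazard":   "O M M M O O",
    "area_hazard_warnings":       "N M M M N N",
    "advertised_services":        "N M M M N N",
    "local_high_speed_unicast":   "O M M O O O",
    "local_multicast":            "O M M O O O",
    "low_speed_unicast":          "O M M O O O",
    "distributed_services":       "O M M O O O",
}

def requires(app, objective, level="M"):
    """Return True if `app` demands `objective` at the given level."""
    levels = dict(zip(OBJECTIVES, REQUIREMENTS[app].split()))
    return levels[objective] == level

print(requires("cooperative_awareness", "privacy"))        # True: mandatory
print(requires("advertised_services", "confidentiality"))  # False: not required
```

Such an encoding makes it straightforward, for instance, to list all application classes that mandate a given security objective.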
4.5 Security Services
Using as input the security requirements demanded by the ITS applications to achieve a secure operation (see section 4.4), this section identifies the specific security services to effect the desired security protection. This analysis is based on the ETSI TS 102 731 specification [17] that sheds some light on the security architecture to be adopted for future vehicular communication systems.
Table 4 details the necessary security services to implement the different security objectives that may be necessary when securing ITS applications. These security services are of paramount importance and will be considered when developing the Inter-‐Trust security architecture. As we can observe, security services deal with basic security operations during the communication. For example, when providing confidentiality to an ITS application, the encryption and decryption of application messages are basic security services. Similarly, the preservation of application message integrity is based on the manipulation of an integrity check value (e.g. hash computed over the message).
Regarding the different security services, we would like to clarify the following aspects:
• The reader may notice that Table 4 includes a new security objective for security association management. Although this is not a security objective explicitly demanded by ITS applications, it is a basic objective necessary to implement others. For example, confidentiality and message authenticity require the existence of a security association to establish the keying material necessary to encrypt/decrypt and authenticate messages, respectively.
• The timeliness of messages has traditionally been controlled by means of sequence numbers or timestamps. For this reason, the associated security services consider both methods.
• Entity authentication is based on the acquisition of authentication credentials used by an ITS station to authenticate itself to another ITS station. Authorization follows a similar approach, based on authorization credentials that, once obtained, authorize an ITS station to access a specific service.
• The security services associated with privacy only consider a basic form of anonymity provision based on the use of pseudonyms. Pseudonyms are a technique widely explored in the field of privacy provision and are also being considered for the provision of anonymity to ITS users. In fact, several standard specifications currently propose the use of pseudonyms in vehicular communications [18] [19].
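The timestamp and sequence-number checks mentioned above can be sketched as follows. The freshness window, field names and per-sender bookkeeping are illustrative assumptions, not values taken from the ETSI specifications.

```python
# Sketch of the two timeliness checks: reject a message whose timestamp is
# too old, or whose sequence number does not advance past the last one seen
# from that sender (replay). Threshold and structure are illustrative.
import time

MAX_AGE_S = 2.0  # hypothetical freshness window in seconds

class TimelinessChecker:
    def __init__(self):
        self.last_seq = {}  # per-sender highest sequence number accepted

    def accept(self, sender, seq, timestamp, now=None):
        now = time.time() if now is None else now
        if now - timestamp > MAX_AGE_S:           # timestamp validation
            return False
        if seq <= self.last_seq.get(sender, -1):  # sequence-number validation
            return False
        self.last_seq[sender] = seq
        return True

chk = TimelinessChecker()
print(chk.accept("veh-1", seq=1, timestamp=100.0, now=100.5))  # True: fresh
print(chk.accept("veh-1", seq=1, timestamp=100.6, now=100.7))  # False: replayed seq
print(chk.accept("veh-1", seq=2, timestamp=90.0,  now=100.8))  # False: stale
```

Both methods appear as separate services in Table 4 because a receiver may rely on either or both, depending on the application.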
| Security Objective | Security Services |
|---|---|
| Security association management | Establish security association; Update security association; Remove security association |
| Confidentiality | Encrypt outgoing message; Decrypt incoming message |
| Integrity | Calculate integrity check value; Insert integrity check value; Validate integrity check value |
| Message authenticity | Calculate authenticity check value; Insert authenticity check value; Validate authenticity check value |
| Timeliness | Insert message timestamp; Validate message timestamp; Insert message sequence number; Validate message sequence number |
| Access control: entity authentication | Obtain authentication credential; Update authentication credential; Authenticate ITS station |
| Access control: authorization | Obtain authorization credential; Update authorization credential; Insert authorization credential into message; Validate authorization credential |
| Privacy | Obtain pseudonym; Update pseudonym |

Table 4. Security requirements and associated security services (extracted from ETSI TS 102 731 [17])
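As a minimal illustration of the integrity services in Table 4 (calculate, insert and validate an integrity check value), the following sketch computes an HMAC over the payload under a key assumed to come from a previously established security association. The message layout and key are illustrative; real ITS messages use the formats defined by the standards.

```python
# Sketch of the integrity check value (ICV) services: calculate an HMAC over
# the payload, append it to the outgoing message, and validate it on receipt.
import hmac, hashlib

KEY = b"shared-key-from-security-association"  # hypothetical SA keying material

def insert_icv(payload: bytes) -> bytes:
    icv = hmac.new(KEY, payload, hashlib.sha256).digest()  # calculate ICV
    return payload + icv                                   # insert ICV

def validate_icv(message: bytes) -> bool:
    payload, icv = message[:-32], message[-32:]            # SHA-256 ICV is 32 bytes
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(icv, expected)              # validate ICV

msg = insert_icv(b"collision warning at km 12")
print(validate_icv(msg))                 # True: message intact
print(validate_icv(b"X" + msg[1:]))      # False: payload tampered in transit
```

The constant-time `compare_digest` check is the standard way to validate such a value without leaking timing information.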
The European standardization organizations are aware of the importance of clarifying how these security services will be implemented in vehicular networks. In this sense, the most active work has been carried out by the members of ETSI Working Group 5, who have developed a security architecture that facilitates the implementation of the different security services. This security architecture, specified in ETSI TS 102 940 [22], is based on two pillars:
• Two new entities, called the Enrolment Authority and the Authorization Authority, are integrated into the ITS communication model. While the former is in charge of authenticating ITS stations, the latter provides an ITS-S with authoritative proof of its right to access ITS services.
• Security services are implemented using security mechanisms defined in the IEEE 1609.2 standard [23]. This specification assumes the existence of a Public Key Infrastructure (PKI) that provides vehicles with the digital certificates used to protect ITS application messages (e.g. signing a message with the ITS-S private key).
This security architecture has already been used to define the implementation of some basic security services, such as privacy [24] and access control [25]. Nevertheless, as recognized in a recently published standard [26], the IEEE 1609.2 security mechanisms are not able to implement all the security services demanded by Cooperative ITS applications.
5 Gaps and Standards Analysis for Electronic Voting
This section develops a gap and standards analysis of the e-voting scenario, which is one of the use cases where the results achieved by the INTER-TRUST project will be validated. This study takes as input the INTER-TRUST requirements previously described in section 2 and tries to identify the concrete security needs that must be satisfied by the INTER-TRUST framework.
This section is structured as follows. First, we provide a brief overview of e-voting systems and present their typical communication architectures. Next, we identify the security requirements demanded by these applications to achieve a secure voting process. Finally, these requirements are analysed to identify the challenges posed by e-voting applications in relation to INTER-TRUST.
5.1 Overview of Electronic Voting
E-voting refers to an election or referendum that involves the use of electronic means in at least the casting of the vote. The introduction of e-voting raises some of the same challenges faced when applying electronics to any other domain, for example e-government. Politicians or administrators may expect that a paper version of a certain service or process can simply be taken and put on the Internet. Unfortunately, the reality is more complex, and nowhere more so than with e-voting.
There have been many developments in the application of e-voting since the Council of Europe Recommendation on legal, operational and technical standards for e-voting (Rec(2004)11) was adopted by the Committee of Ministers in 2004.
For this reason, when choosing a specific scheme for remote voting, it is important to evaluate the security of the system by taking into account its security risks. The security measures implemented by the system must be identified and their effectiveness in mitigating these risks evaluated. Moreover, it must be ensured that these security measures are designed and implemented properly, evaluating whether they adequately address the security issues. If they are not implemented properly, the security level provided drops dramatically. For instance, the fact that a voting platform uses a cryptographic mechanism does not ensure that it is properly implemented.
5.1.1 Electronic Voting Systems
There are several ways of classifying e-voting systems, but the usual classification is based on how the vote is cast and how the vote is stored.
Based on how the vote is cast, it is possible to distinguish between:
• Supervised voting. The voter is required to physically attend a specific voting place supervised by the election officials. This is the usual method in traditional voting, where voters need to approach a polling place and be authenticated by election officers before putting the vote in a ballot box.
• Unsupervised voting. Voters do not need to attend a specific voting place to cast a vote. This scenario is usual in electronic voting, since the voter can be authenticated by electronic means before casting a vote (e.g., using a voter credential or an electronic ID). Therefore, no supervision by election officers is required for the voter to cast a vote.
When considering whether the cast votes are stored in the casting device itself or in a remote location, we can distinguish between:
• Pollsite voting. Votes are stored locally in the same polling place where voters cast them. Usually, votes are also counted in the same place and the results are transmitted to the central offices of the election managers.
• Remote voting. In this case, cast votes are not stored in the place where they are cast but in a remote ballot box. Votes therefore need to be transmitted through a communication channel (the postal service in traditional voting, or a communication network in the case of e-voting) to a central office of the election managers, where they are counted.
The following table summarizes the usual voting systems based on the previous classification:

| | Supervised | Unsupervised |
|---|---|---|
| Pollsite | DRE (Direct Recording Electronic) | Voting Kiosk |
| Remote | N/A | Internet voting; Phone voting |
In this project, we will consider systems implemented in unsupervised environments, since these are the main ones where pervasive environments are deployed. To understand the constraints of supporting different voting systems at the same time, we will analyse the general architecture of a voting system.
5.2 Electronic Voting Architecture
The architecture of an electronic voting application is composed of a set of software components that can be distributed across a wide range of server systems and client devices with different architectural requirements. These components can be replicated as many times as necessary in order to manage the scalability of the census. The principal components of an electronic voting application are shown in Figure 5. Each component is described below.
• Voting Client: voters interact with the Voting Application through appropriate software (e.g., a browser or front-end client) executed on the voter's client device (typically a PC, but it can also be a smartphone, a tablet, etc.). The Voting Client is an API that implements the security protocol on the voter's side. The content of the vote is encrypted at the moment it is cast in the Voting Client, making it impossible for attackers to read the vote content.
• Voting Application: this component is the front end of the election, the web portal that provides information about the election to the voters and that, once the voter has been
authenticated through the Voting Client, shows the proper ballot to him/her. The ballots with the candidates are typically web pages generated dynamically depending on the circumscription to which the voter belongs. For this reason, the voter will only be able to see (and vote for) the corresponding candidates.
• Voting Proxy: this component is the gateway that isolates the Voting Service from the Voting Application. All the messages exchanged between the API of the Voting Client and the Voting Service are sent through the Voting Proxy. The Voting Proxy receives simple HTTP or HTTPS requests and also supports web service technologies. In a typical election, every ballot cast by the voter arrives at the Voting Proxy through the Voting Application, which only relays the requests between the Voting Proxy and the Voting Client.
• Voting Service: these components are implemented as native distributed applications. A single Voting Service can play one or more roles. Note that the role functionality is provided by two components: the Voting Proxy and the corresponding Voting Service. In the simplest election, a single Voting Proxy and Voting Service pair can play the roles of voter authentication, ballot management and encrypted vote storage. Bigger elections need more than two instances to cover each role.
o Authentication component: this component manages voter authentication. The voter's identity, and the corresponding evaluation of the permission to vote in a given election, must be determined by some mechanism, such as digital certificates or login/password authentication.
o Ballot managing component: this component checks that the vote has been properly cast before sending it to the vote storage component, and sends a receipt to the voter.
o Vote storage component: this component is in charge of receiving and storing the votes encrypted by the Voting Client until the election closing time, after which the mixing process is carried out.
• Mixing Service: once the vote casting period is closed, each Voting Service will have stored several electronic votes in a digital ballot box, whose contents need to be tallied after the data is decrypted. The Mixing Service is needed to break the correlation between the votes and the voters, in order to obtain the decrypted votes and transfer them to the Tallying Application. The result of the mixing process is an XML file containing the decrypted votes, which is either tallied by the Tallying Application or exported to third-party results consolidation systems (i.e. systems used by an electoral authority to sum all the received votes, whether cast through the paper or the electronic channel). The Mixing Service also generates another XML file containing the vote receipts associated with the decrypted votes. Typically, the Mixing Server is installed on an isolated server (with no network connection) to increase security.
• Tallying Application: once the votes have been decrypted and mixed by the Mixing Service, they can be tallied by means of the appropriate election algorithm (e.g. the D'Hondt method).
• Mixing and Voting Service Manager: in order to consolidate the Voting and Mixing services, there is a component called the Services Manager, which allows each service to be configured and managed.
• Log Viewer: the Voting and Mixing services store (in a database) and protect (by means of cryptographic protocols) all the actions carried out during the election in what are called immutable logs. The Log Viewer component is the graphical interface used by the system administrators to review the logs and check their integrity during and after the election.
• Admin Console: this component is responsible for generating the electoral key and splitting it into shares, one for each member of the electoral board. It can import different data types (such as the census) in any of the following formats: EML, XML or CSV.
• External components: besides the voting platform components explained above, some external elements are required to ensure the correct operation of the system:
o Two relational databases. One of them can be shared by all the Voting Services and the other one must be placed in the server where the Mixing process is carried out.
o A PKI to generate and manage all the cryptographic keys (and the associated digital certificates) needed to configure and carry out an election. An OpenSSL-based PKI can be used.
o Informative web portal containing the list of vote receipts. Once these receipts have been published, voters will be able to check that their vote has been counted in the final tally.
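The role of the Mixing Service described above can be illustrated with a toy shuffle that discards voter identities and randomizes the output order. A real mix-net additionally re-encrypts or decrypts the ballots in stages across several mix nodes; this sketch deliberately models only the unlinking step, and all names are illustrative.

```python
# Toy model of what the Mixing Service achieves: after the election closes,
# the stored (voter, encrypted vote) pairs are stripped of identities and
# shuffled, so the output order no longer reveals who cast which vote.
import random

ballot_box = [("voter-1", "enc(A)"), ("voter-2", "enc(B)"), ("voter-3", "enc(A)")]

def mix(box, seed=None):
    """Drop voter identities and shuffle, breaking the voter-vote correlation."""
    rng = random.Random(seed)
    votes = [vote for _voter, vote in box]  # remove the link to the voter
    rng.shuffle(votes)                      # randomize positional order
    return votes

mixed = mix(ballot_box, seed=42)
print(sorted(mixed) == ["enc(A)", "enc(A)", "enc(B)"])  # True: same votes, unlinked
```

Note that the multiset of votes is preserved, which is what allows the tally to remain accurate even though the voter linkage is destroyed.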
Figure 5. Principal components of an Electronic Voting Application (diagram labels: Voters, Voting Client, Voting Application, Voting Proxy, Voting Service, Voting Svc Manager, Publishing App, Log Viewer, Tallying App, Mixing Service, Mixing Svc Manager, Admin Console, DB, PKI, Ballot Box Servers, Mixing Server, Electoral Board)
5.3 Security Requirements of Electronic Voting
5.3.1 Electronic Voting Security Objectives
In recent years it has become clear that an e-voting system can only be introduced if voters have confidence in their current electoral system. If it is trusted, voters are very likely to have confidence in new e-enabled elections. However, confidence should not be taken for granted and states need to do their utmost to ensure that it is preserved, all the more so as once trust and public confidence are eroded, they are exceedingly hard to restore. A trusted system gives scope for citizens and other stakeholders to ask critical questions.
Fostering transparent practices in member states is a key element in building public trust and confidence. Transparency about the e-voting system, the details of different electoral procedures and the reasons for introducing e-voting will contribute to voters' knowledge and understanding, thereby generating trust and confidence among the general public.
From a security point of view, the following requirements must be fulfilled.
• Eligibility. Only authorized voters should be able to vote. This means that the channel must provide a robust way to remotely identify voters and detect any impersonation attempt. One of the main issues of remote voting is that voters cannot be identified in person. We can distinguish two different levels of impersonation: voluntary and involuntary. Involuntary impersonation refers to the impersonation of a voter without his/her knowledge (e.g., through the theft of the voter credentials required to cast a vote). Voluntary impersonation requires the participation of the voter, who cooperates with the person that will impersonate him/her by providing his/her voting credentials. To simplify the comparison, we consider the risks of voluntary impersonation under the coercion and vote buying resistance security requirement.
• Privacy. The voting system has to protect voter privacy, concealing the relation between the voter and his/her cast vote and ensuring that the voter's choice remains anonymous. This requirement must be fulfilled once the voter has cast his/her vote and must be preserved during the counting process.
• Integrity. A voting system has to protect the vote against manipulation from the moment it is cast until it is counted. Therefore, the channel must provide measures to prevent and/or detect any attempt to change the voter's intent once the vote has been cast.
• Voter verifiability – cast as intended. According to Neff and Adler [89], voter verifiability can be divided into "cast as intended" and "counted as cast" verification. In cast as intended verification, the voter must be able to check that his/her vote has been accurately recorded. In the case of remote voting, this implies the ability to check whether the vote received by the Election Officials and stored in the remote ballot box (in a physical or electronic manner) is the same as the one cast by the voter. It is important to note that this requirement cannot conflict with the other ones (i.e., coercion and vote buying resistance).
• Voter verifiability – counted as cast. In counted as cast verification, voters must be able to verify the inclusion of their vote in the final tally. This is not a requirement
currently demanded by traditional voting methods; however, we consider it a security improvement.
• Prevention of intermediate results. It is important to prevent the disclosure of intermediate results before the election is closed. This way, all the voters have the same information during the voting stage. This implies that the secrecy of the vote must be preserved until the tally process.
• Ballot box accuracy. The ballot box must be protected against the addition of bogus ballots and the elimination of valid ones. In case multiple voting is allowed, this measure must guarantee that only one vote per voter will be counted.
• Coercion and vote buying resistance. As introduced before, one of the main concerns about remote voting channels is that they facilitate coercion or vote buying. Therefore, it is important to verify whether the channel facilitates these practices or includes countermeasures to prevent them. The voting channel must mitigate voluntary impersonation, in which an eligible voter cooperates with the coercer or buyer to give them access to the voting system.
• Channel reliability. Most government efforts focus on increasing the reliability of their current remote voting channels. Voters wish to know whether their vote has been received by the electoral authority in time to be tallied. As introduced above, channel availability is not only related to the delivery speed of the channel; other factors, such as the risk of denial-of-service attacks, also influence it. Therefore, under this criterion we balance the ability to detect such delays in an appropriate timeframe (e.g., detecting a denial of service) and the ability to react to them (e.g., using a contingency channel to cast the vote).
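The ballot box accuracy rule above ("only one vote per voter will be counted") can be sketched as a filtering step over the stored votes. The keep-the-last-vote policy shown here is one common convention when multiple voting is allowed, and all names are illustrative.

```python
# Sketch of the "one vote per voter" rule: reject votes from non-eligible
# voters (guarding against ballot stuffing) and, when multiple voting is
# allowed, keep only the last vote cast by each eligible voter.

def filter_ballot_box(cast_votes, eligible_voters):
    kept = {}
    for voter, encrypted_vote in cast_votes:  # votes in casting order
        if voter in eligible_voters:          # eligibility check
            kept[voter] = encrypted_vote      # a later vote replaces an earlier one
    return list(kept.values())

votes = [("alice", "v1"), ("mallory", "v2"), ("alice", "v3"), ("bob", "v4")]
print(filter_ballot_box(votes, {"alice", "bob"}))  # ['v3', 'v4']
```

Mallory's stuffed ballot is dropped, and only Alice's most recent vote survives into the tally.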
5.3.2 Electronic voting security risks
Once all the security objectives have been identified, this section defines the general security risks of electronic voting, without considering a specific voting channel. The intention is to identify the risks that must be considered to build a secure electronic voting application independently of the technology used by the channel. These risks are explained below.
• Unauthorized voters casting votes: non-eligible voters could try to cast a vote in a specific election. The voting channel must provide a robust way to remotely identify voters.
• Voter impersonation: a voter or an attacker could try to cast a vote on behalf of another person. The voting channel must provide a robust way to detect any impersonation attempt.
• Ballot stuffing: an attacker could try to add to the ballot box votes from voters that did not participate in the voting process. The voting channel must prevent the acceptance of votes that have not been cast by their intended voters.
• Voter privacy compromise: an attacker could break voter privacy by linking the voter to her voting options and, thereby, breaking vote secrecy. The voting system must ensure that the voter’s intent remains secret during the voting and counting phases.
• Voter coercion and vote buying: one person or organization could buy or force a voter to vote for specific voting options. The voting channel must prevent a voter from proving to a third party in an irrefutable way her voting intent.
• Vote modification: vote contents could be modified to change the election results. The voting system must detect any manipulation of valid cast votes.
• Vote deletion: an attacker could try to delete valid votes from the ballot box. The ballot box must be protected against unauthorized changes.
• Publication of non-authorized intermediate results: intermediate results could be disclosed before the election is closed, influencing those voters that have not yet exercised their right to vote. The voting system has to preserve the secrecy of the cast votes until the tally process to prevent any disclosure of partial results.
• Voter distrust: a voter has no means of verifying the correct reception and counting of her vote, and could therefore develop a negative feeling about the voting process. The voting platform must allow the voter to check whether the vote has been correctly received at its destination, and whether it was present in the tallying process.
• Election boycott / denial of service: an attacker could disrupt the availability of the voting channel by performing a denial-of-service attack. The voting platform must detect the eventual congestion of the election services in order to react against it as soon as possible, e.g. by using contingency channels.
• Inaccurate auditability: insufficient election traceability, or audit data that is easy to tamper with, may allow attackers to hide unauthorized behaviour. The voting channel should provide means to implement an accurate audit process and to detect any manipulation of the audit data.
5.3.3 Analysis of Security Requirements for Electronic Voting Applications
Secure electronic voting technology is often based on advanced cryptography to achieve the unique security requirements of voting from a remote location. Many of the technologies employ proven cryptographic primitives such as hash functions, digital signatures, and public key cryptography. The most recent techniques exploit homomorphic properties present in certain cryptosystems to achieve end-to-end verifiability. This concept provides both voters and universal auditors with the ability to verify the accuracy of an electronic voting system without violating any other requirement, such as voter privacy.
When evaluating an electronic voting platform, it is important to assess the effectiveness of the measures implemented to manage the security risks. In this section we introduce some security methods implemented in voting platforms and evaluate their effectiveness in achieving the security objectives demanded of a secure election.
5.3.3.1 Authentication methods
One important issue in Internet voting is how voter identity can be proved remotely. A usual approach consists of providing a username and a password to the voter at registration time, and requesting them at vote casting time, to ensure the identity of the voter. Following this approach, the username/password values have to be stored in the voting server in order to verify the identity of the voter. Therefore, in case an external attacker gains access to this server, these credentials could be stolen or modified in order to impersonate valid voters. Moreover, these credentials are vulnerable to eavesdropping attacks that intercept the passwords when submitted. Alternative proposals consist of using strong authentication methods, such as one-time passwords or digital certificates. One-time passwords prevent the re-use of intercepted credentials, since the authentication information sent (the password) changes each time the voter is authenticated. The most robust solution for voter authentication is the use of digital certificates, since they provide, in addition to access authentication, data authentication: by digitally signing her vote, the voter can demonstrate that she is the owner of a specific vote. When this approach is used, the vote is encrypted before being signed; otherwise, the digital signature could be used to correlate voters with votes. In case voters do not have digital certificates (e.g. an electronic ID card), a key roaming mechanism can be used to provide digital certificates to voters when casting their votes. The digital certificate would be protected by a PIN or password known by the voter. This password is not stored in a remote database and therefore cannot be accessed to impersonate the voter.
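The one-time-password idea mentioned above can be sketched with the standard HOTP construction of RFC 4226, using only Python's standard library. The secret value below is purely illustrative; a real system would provision a per-voter secret at registration time.

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and a counter
    (HOTP, RFC 4226). Since the password changes with every counter
    value, an eavesdropped password cannot be replayed later."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at the offset given by
    # the low nibble of the last byte, mask the sign bit, keep `digits`.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Each authentication uses a fresh counter, so successive passwords differ.
secret = b"per-voter shared secret (illustrative)"
assert hotp(secret, 0) != hotp(secret, 1)
```

The server stores the same secret and counter and accepts a password only once, which is what defeats the credential-interception attack described above.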
5.3.3.2 Vote encryption
In an e-voting platform, votes are vulnerable to eavesdropping during their transmission and storage. Therefore, vote encryption at vote casting time is of paramount importance to preserve vote secrecy. Some voting platforms implement vote encryption at the network transmission level, using SSL connections between the voter PC and the voting server. However, SSL encryption falls short of protecting end-to-end voter privacy, since the vote is not encrypted when it leaves the transmission channel: the vote is received at the voting server in clear text. Therefore, any attacker that gains access to the server system could access the clear-text vote information and break voter privacy. To solve this issue, it is strongly recommended to use data-level encryption of votes, such as encrypting the votes with an election public key. That way, an attack at the voting server level will not compromise voter privacy, since votes leaving the voting channel are still encrypted. The protection of the election private key is further discussed in a later section.
5.3.3.3 Vote integrity
Cast votes are vulnerable to tampering by attackers that gain access to the voting system. As mentioned in previous sections, an efficient approach to prevent vote manipulation after casting is to digitally sign the vote after encryption. Alternatively, votes can be protected by applying a cryptographic MAC function (e.g., an HMAC function) and sending this value as an integrity proof of the vote. However, this measure has some security risks, since the key used to calculate the MAC function must also be known by the voting server to validate the vote integrity. Therefore, an attacker who gained access to the voting server could generate valid integrity proofs for modified votes. Digital signatures issued by voters do not have this problem; moreover, they can be used for both integrity verification and identification purposes. In addition to digital signatures, advanced cryptographic techniques, such as zero-knowledge proofs of origin [3], can be used to ensure that the encrypted vote has been recorded as cast by the voter. The digital signatures and zero-knowledge proofs can be stored together with the votes in the digital ballot box, in order to ensure their integrity until the moment of vote decryption.
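The MAC-based alternative, and its weakness, can be illustrated with a short sketch (key and message values are illustrative): whoever knows the MAC key, including a compromised voting server, can forge a valid proof, which is exactly why voter-issued digital signatures are preferred.

```python
import hmac
import hashlib

def mac_vote(key: bytes, encrypted_vote: bytes) -> bytes:
    """Integrity proof over an (already encrypted) vote using HMAC-SHA256.
    Weakness discussed above: whoever knows `key` -- the voting server,
    or an attacker who compromises it -- can forge a valid proof for a
    modified vote."""
    return hmac.new(key, encrypted_vote, hashlib.sha256).digest()

def verify_vote(key: bytes, encrypted_vote: bytes, proof: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(mac_vote(key, encrypted_vote), proof)

key = b"key shared by client and voting server"  # illustrative only
vote = b"<ciphertext of the cast vote>"
proof = mac_vote(key, vote)

assert verify_vote(key, vote, proof)             # intact vote accepted
assert not verify_vote(key, vote + b"x", proof)  # tampered vote rejected
```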
5.3.3.4 Protection of the election private key
As mentioned before, the election private key is intended to protect voters’ privacy and the secrecy of intermediate results. Usually, asymmetric encryption algorithms are used: votes are encrypted with a public key and can only be decrypted with the corresponding private key. To prevent any individual from decrypting the votes, this key must be protected using a separation-of-duties approach. A recommended practice consists of splitting the key into several shares using threshold cryptography algorithms, and giving one share to each Electoral Board member. That way, a minimum number of Electoral Board members must collaborate to recover the election private key and decrypt the votes. It is of paramount importance to use a threshold scheme so that the loss of one share does not prevent the decryption of the votes.
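The share-splitting idea can be sketched with a toy Shamir (k, n) threshold scheme in pure Python. The prime, key value and board size below are illustrative only, not production parameters.

```python
import secrets

# Toy Shamir (k, n) threshold sharing of an election private key,
# represented here as an integer below the prime P. Any k of the n
# Electoral Board shares recover the key; fewer reveal nothing.
P = 2**127 - 1  # a Mersenne prime, large enough for this sketch

def split(secret: int, k: int, n: int):
    """Embed the secret as f(0) of a random degree-(k-1) polynomial
    and hand out the points (1, f(1)) ... (n, f(n)) as shares."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = 123456789                 # stand-in for the election private key
shares = split(key, k=3, n=5)   # 5 board members, any 3 suffice
assert recover(shares[:3]) == key
assert recover(shares[2:5]) == key   # losing two shares is tolerated
```

Note the property the text demands: losing up to n−k shares does not prevent decryption, while fewer than k shares give no information about the key.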
5.3.3.5 Anonymizing votes before decryption
Most voting platforms directly decrypt the votes at the end of the election. However, if the decryption is done directly, it could be possible to correlate clear-text votes with encrypted ones and, therefore, with the original voters. It is critical to break the correlation between clear-text votes and the original casting order. The most efficient methods are based on mixnets, where votes are shuffled and decrypted/re-encrypted several times before obtaining the vote contents, and on homomorphic tallying, where the election result is obtained without decrypting the individual votes, by decrypting instead the result of operating on the encrypted votes. Other methods (such as randomizing votes while stored) cannot fully guarantee that there is no link between votes and voting order.
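A minimal sketch of the homomorphic-tally idea, using a toy additive ("exponential") ElGamal in pure Python: multiplying ciphertexts yields an encryption of the sum of the votes, so only the aggregate is ever decrypted and no individual vote is ever exposed. The parameters are far too small for real use and are purely illustrative.

```python
import secrets

# Toy additive ElGamal: a vote m in {0, 1} is encrypted as
# (g^r, g^m * y^r). Multiplying ciphertexts componentwise yields an
# encryption of the SUM of the plaintexts.
P = 2**61 - 1          # group modulus (toy size, not secure)
G = 3                  # group element used as base

x = secrets.randbelow(P - 2) + 1   # election private key
y = pow(G, x, P)                   # election public key

def encrypt(vote: int):
    r = secrets.randbelow(P - 2) + 1
    return (pow(G, r, P), pow(G, vote, P) * pow(y, r, P) % P)

def tally(ciphertexts):
    # Homomorphic aggregation: multiply componentwise.
    a, b = 1, 1
    for c1, c2 in ciphertexts:
        a, b = a * c1 % P, b * c2 % P
    # Decrypt only the aggregate: recover g^(sum of votes).
    gm = b * pow(a, P - 1 - x, P) % P   # a^(-x) via Fermat, P prime
    total = 0
    while pow(G, total, P) != gm:       # small discrete log = yes-count
        total += 1
    return total

votes = [1, 0, 1, 1, 0]
ballots = [encrypt(v) for v in votes]
assert tally(ballots) == sum(votes)     # result without per-vote decryption
```

In a real deployment the private exponent `x` would itself be threshold-shared among the Electoral Board, combining this technique with the key-splitting scheme of the previous section.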
5.3.3.6 Individual and Universal verification methods
One of the major concerns of remote voting in general is the lack of means for the voter to verify the correct reception and counting of her vote. The introduction of remote electronic voting can provide voters with means to individually verify the voting process, providing more confidence and helping to detect possible attacks. The verification process can be split into two methods: cast-as-intended and counted-as-cast verification.
Cast-as-intended verification consists of ensuring that the vote received by the voting server contains the voting options originally selected by the voter. For instance, it can be used to detect whether the voter's computer hosts malware that changes her voting options before encryption. One way to perform this verification consists of calculating special codes (commonly called Return Codes) from the encrypted vote received at the voting server and returning them to the voter. The voter in turn uses a special Voting Card issued for the election to verify that the received Return Codes are those assigned to the voting options she has chosen. Since the Return Codes are calculated using a secret key known only by the voting server, an attacker cannot deliver forged Return Codes to the voter without being detected.
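The Return Code mechanism can be sketched as follows. For brevity this toy derives the code directly from the voting option rather than from the received encrypted vote, and the key and option names are illustrative.

```python
import hmac
import hashlib

# Toy Return Code derivation. In a real platform the code would be
# derived from the encrypted vote received by the server; here it is
# derived directly from the voting option for brevity. The key is
# known only by the voting server.
SERVER_KEY = b"server-only secret (illustrative)"

def return_code(voting_option: str) -> str:
    mac = hmac.new(SERVER_KEY, voting_option.encode(), hashlib.sha256)
    return mac.hexdigest()[:4].upper()      # short, card-friendly code

# Voting Card printed and delivered to the voter before the election:
card = {opt: return_code(opt) for opt in ["party-A", "party-B", "blank"]}

# After casting, the server derives the code from what it actually
# received and returns it; the voter compares it against her card.
received_code = return_code("party-A")
assert received_code == card["party-A"]     # vote cast as intended
```

If malware had swapped the option before encryption, the returned code would not match the card entry for the option the voter believes she selected.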
Counted-as-cast verification consists of ensuring that the vote cast by the voter is included in the final tally. This verification detects manipulation or deletion of cast votes. One method to ensure that the vote has reached the counting phase is to deliver to the voter a receipt with a random identifier. If this random identifier can only be retrieved from the encrypted and tallied votes, a voter can then verify that her vote has been included in the tally. It is of paramount importance that these random identifiers cannot be correlated with clear-text votes; otherwise, the Voting Receipt could be used for vote buying or coercion. This measure must be complemented with the universal verification of the decryption process. Universal verification should allow auditors and observers to verify in an irrefutable way that the decrypted votes represent the contents of the encrypted ones; in other words, that the decryption process did not manipulate the results. This can be achieved using advanced cryptographic techniques.
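A minimal sketch of the counted-as-cast receipt idea, under the simplifying assumption that the receipt identifier travels inside the (to-be-encrypted) ballot and that only the identifiers recovered from tallied votes are published. All names are illustrative.

```python
import secrets

# Toy counted-as-cast receipt. The identifier is random, so it cannot
# be linked to the voter or to her choice, which is what keeps the
# receipt useless for vote buying or coercion.
def cast_vote(option: str):
    receipt = secrets.token_hex(8)                   # random identifier
    ballot = {"option": option, "receipt": receipt}  # would be encrypted
    return ballot, receipt

ballots = []
my_receipt = None
for option in ["party-A", "party-B", "party-A"]:
    ballot, receipt = cast_vote(option)
    ballots.append(ballot)
    if my_receipt is None:
        my_receipt = receipt       # the first voter keeps her receipt

# After the tally, only the receipt identifiers recovered from the
# counted ballots are published -- never the options.
published = {b["receipt"] for b in ballots}
assert my_receipt in published     # my vote reached the tally
```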
5.3.3.7 Traceability and Auditability
Traceability is essential for an Internet voting platform: logs or proofs generated by the different modules can be used to detect and react against attacks or malfunctions in real time, as well as to ensure the reliability of the election results. All the sensitive operations performed in the voting platform modules have to be registered in logs, taking care not to register information that could compromise voters’ privacy. In order to prevent an attacker from deleting or modifying these logs (to hide an attack), they can be cryptographically protected, in such a way that a specific log entry cannot be deleted without detection. Also, critical processes such as vote decryption should be designed to provide cryptographic proofs of correct operation, so that an auditor can verify that the election results actually correspond to the values of the votes cast by the voters. The use of advanced cryptographic techniques to audit these processes is recommended: this way, both auditors and voters can participate in the audit process (universal verifiability), also increasing voter confidence.
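A common way to make logs tamper-evident, as described above, is to chain entries with hashes, so that deleting or modifying any record breaks the chain. A minimal sketch (field names are illustrative):

```python
import hashlib
import json

# Hash-chained audit log: each entry embeds the hash of the previous
# one, so removing or editing any record is detectable on verification.
def append(log, event: dict):
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify(log) -> bool:
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"op": "election_opened"})
append(log, {"op": "vote_received", "ballot_id": "b-001"})  # no voter identity
append(log, {"op": "election_closed"})
assert verify(log)
del log[1]                 # an attacker removes the middle record...
assert not verify(log)     # ...and the break in the chain is detected
```

Note that the logged events carry a ballot identifier but no voter identity, in line with the privacy caveat above.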
Depending on the approach used to implement a voting system, security risks are managed with greater or lesser effectiveness. Therefore, analysing how these risks are mitigated is of paramount importance when deciding to implement a remote electronic voting process. Several studies and reports discussing the risks and countermeasures of specific remote voting schemes have been presented [91], [92], highlighting the main differences between postal voting, fax voting, e-mail voting and Internet voting. However, these analyses are mainly focused on comparing how the risks are managed by the different remote voting channels.
As described in section 5.1.1, there are several systems that can be used to develop and set up any official electoral process. The next table summarizes the security of each voting system (Voting Kiosk, Internet voting and Phone voting).
• Eligibility
- Kiosk voting: High. The use of strong authentication such as digital certificates prevents the involuntary impersonation of voters.
- Phone voting: Low. Involuntary impersonation to cast a vote is easy, since authentication is based on username/password (no digital certificates).
- Internet voting: High. The use of strong authentication such as digital certificates prevents the involuntary impersonation of voters.
• Privacy
- Kiosk voting: High. Votes are encrypted before being cast. Cryptographic measures, such as mixing processes, can be implemented to break any connection between vote and voter. Voters can protect their PCs against malware or use secure voting kiosks, but it is the voter's choice.
- Phone voting: Low. Voice channels are not usually protected (only when VoIP is used). Therefore the selected voting options can be easily eavesdropped.
- Internet voting: High. Votes are encrypted before being cast. Cryptographic measures, such as mixing processes, can be implemented to break any connection between vote and voter. Voters can protect their PCs against malware or use secure voting kiosks, but it is the voter's choice.
• Integrity
- Kiosk voting: High. Votes can be digitally signed, preventing any manipulation. Furthermore, when using voting receipts, any attempt to delete a vote could be detected by the voter when verifying the receipt. Voters can protect their PCs against malware or use secure voting kiosks, but it is the voter's choice.
- Phone voting: Low. Voice channel integrity is not usually protected (only when VoIP is used). Therefore the selected voting options can be easily manipulated.
- Internet voting: High. Votes can be digitally signed, preventing any manipulation. Furthermore, when using voting receipts, any attempt to delete a vote could be detected by the voter when verifying the receipt. Voters can protect their PCs against malware or use secure voting kiosks, but it is the voter's choice.
• Voter verifiability (cast as intended)
- Kiosk voting: High. A verification process can be implemented independently from the vote selection process in the voting terminal. Votes are protected by cryptographic means after being cast.
- Phone voting: Medium. There are tools to track a vote sent by phone. However, there is no guarantee that the vote received by the Election Officials contains the same vote cast by the voter.
- Internet voting: High. A verification process can be implemented independently from the vote selection process in the voting terminal. Votes are protected by cryptographic means after being cast.
• Voter verifiability (counted as cast)
- Kiosk voting: High. A voting receipt allows voters to individually verify that their votes are present in the tallying process.
- Phone voting: Low. There are no means for checking the presence of votes in the counting process.
- Internet voting: High. A voting receipt allows voters to individually verify that their votes are present in the tallying process.
• Prevention of intermediate results
- Kiosk voting: High. Votes are encrypted before they are cast. Only the board members can decrypt them at the end of the election.
- Phone voting: Medium. The contents of the votes can be eavesdropped, and therefore intermediate results could be accessed during transmission.
- Internet voting: High. Votes are encrypted before they are cast. Only the board members can decrypt them at the end of the election.
• Ballot box accuracy
- Kiosk voting: High. Each encrypted vote can be digitally signed using a unique voter digital certificate to prevent the addition of bogus votes. Additionally, voting receipts can be provided to voters to allow them to detect the elimination of their votes.
- Phone voting: Low. It is possible to add bogus ballots without detection. Votes can also be eliminated after being cast.
- Internet voting: High. Each encrypted vote can be digitally signed using a unique voter digital certificate to prevent the addition of bogus votes. Additionally, voting receipts can be provided to voters to allow them to detect the elimination of their votes.
• Coercion and vote buying resistance
- Kiosk voting: High. If a voter is coerced in the coercer's presence, he/she can cast a new vote later if multiple voting is allowed, although this practice prevents verifying that the vote is counted as cast. Moreover, voting kiosks themselves help to prevent coercion and vote buying.
- Phone voting: Medium. If a voter is coerced in the coercer's presence, he/she can cast a new vote later if multiple voting is allowed.
- Internet voting: Medium. If a voter is coerced in the coercer's presence, he/she can cast a new vote later if multiple voting is allowed, although this practice prevents verifying that the vote is counted as cast. Alternatively, voting kiosks could help to prevent coercion and vote buying.
• Channel reliability
- Kiosk voting: High. Voters realize that their vote has not reached the election authority if an error arises when casting the vote, so contingency measures (e.g., trying later or using another voting channel) can be used to prevent the loss of their votes.
- Phone voting: High. Voters realize that their vote has not reached the election authority if an error arises when casting the vote, so contingency measures (e.g., trying later or using another voting channel) can be used to prevent the loss of their votes.
- Internet voting: High. Voters realize that their vote has not reached the election authority if an error arises when casting the vote, so contingency measures (e.g., trying later or using another voting channel) can be used to prevent the loss of their votes.
Table 5. Comparison of Security Requirements
It is clear that phone voting cannot reach the same security level as the other channels, because it cannot encrypt the vote in the voting terminal itself. However, using the Inter-Trust framework, we would like to negotiate a policy when a voter uses this kind of device and delegate the cryptographic operations to a server. That way, although the security of the communication channel is not fully solved, the votes can be protected from attacks after being cast (e.g., it is not possible to manipulate a digitally signed vote). Furthermore, voter verifiability can be improved to the same level as Internet and kiosk voting systems.
6 Use Cases Security Analysis Requirements
6.1 E-Voting Use Case
The proposed e-voting use case is based on kiosk voting, Internet voting and phone voting (i.e., voting from devices without cryptographic capabilities), and consists of dynamically and consistently configuring the security policies used by the Voting Client and Voting Server to exchange messages and cast votes.
As it has been explained in the previous section, in a given election using the Voting Platform, votes can come from different sources such as people voting at home in a standard PC, people voting from different countries or people voting at Voting Centers using specifically provided Voting Terminals. Usually, all these votes are treated the same, but there could be occasions in which this is not the case.
The Inter-Trust framework will be integrated with Scytl’s voting platform to allow negotiating the policy with a trusted voting terminal and changing its behaviour by aspect-weaving the client and server applications. Both applications will be woven to implement different negotiations before starting the voting process.
One objective of the e-‐voting scenario is to use Inter-‐Trust to identify if a vote has been cast by a voter from an uncontrolled voting terminal (e.g., the voter personal computer) or from a trusted voting terminal (e.g., a voting kiosk located in a polling station).
The user will be able to select whether he wants to be a trusted or a standard voting client. In case the client is able to demonstrate that it is a trusted voting terminal, by showing its position and terminal parameters, an extra step of digitally signing and verifying the encrypted vote with a voting terminal digital certificate is required. However, if the client cannot prove that it is a trusted voting client, or the data sent by the client is not correct, the Inter-Trust framework will not allow this client to vote.
Furthermore, the user can select whether to vote using a Java applet that implements the cryptographic operations on the device itself, or to use a standard browser and delegate the cryptographic operations to the server side. In case the user wants to vote via the applet, the Inter-Trust framework will check whether the client meets all the Java requirements before allowing him/her to download it.
Using this information as input, Table 6 details the specific security protection demanded by the e-voting use case.
e-voting
• Confidentiality: Necessary only if privacy protection or access control is applied.
• Integrity: Mandatory security protection to ensure that the information about the position and terminal parameters of the voting client received by the central server is protected from unauthorized manipulation.
• Message authenticity: Mandatory security protection to allow the voting server to corroborate that the voting client has generated the received information about its location and terminal parameters.
• Access control: Mandatory security protection to allow the voting client to vote by using a Java applet.
• Privacy: Desirable security objective to protect sensitive information sent by the voting client (position and terminal parameters).
Table 6. E-Voting Use Case: Security Requirements
6.2 Vehicle-to-Infrastructure Use Case: Dynamic Route Planning
The Dynamic Route Planning use case represents a scenario where there is an inevitable need to track, regularly and with great accuracy, the movements of a user. Furthermore, the service is also aware of the intentions of the user, in the form of the final destination and chosen route for their travels when using the service. As such, it becomes essential to enforce certain guarantees on the privacy of the data being transmitted. It is also necessary to guarantee that the remaining security characteristics of the system are not compromised since, in the event of a successful attack, and even without a breach of privacy, there could be serious implications for both the wellbeing of the users and the correct functioning of the infrastructure.
The Dynamic Route Planning service is a type of “Distributed (Networked) Service” as described in Section 4 above, according to the ETSI classification of ITS services, which will be followed for this use case. For a “Distributed Service”, integrity and message authenticity are mandatory requirements as per the standard. The optional access control requirement will also be respected, so as to provide a framework for the eventual monetization of these services. Privacy requirements, while optional, will also be implemented, although the anonymity requirement can be softened to a pseudonymity requirement. Using this information as input, Table 7 details the specific security protections demanded by the Dynamic Route Planning service. The analysis is particularized for both vehicle tracking and the route and destination message transmission, justifying the need for each security service.
• Confidentiality
- Vehicle tracking: Not strictly necessary for the tracking of vehicles.
- Route/destination message delivery: Necessary to implement access control and privacy.
• Integrity
- Vehicle tracking: Required security protection, needed to abide by the ETSI ITS standard.
- Route/destination message delivery: Required security protection, needed to abide by the ETSI ITS standard.
• Message authenticity
- Vehicle tracking: Required security protection, needed to abide by the ETSI ITS standard.
- Route/destination message delivery: Required security protection, needed to abide by the ETSI ITS standard.
• Timeliness
- Vehicle tracking: Desirable but not mandatory, as application-side contingency measures are already in place.
- Route/destination message delivery: Desirable but not mandatory, as application-side contingency measures are already in place.
• Access control
- Vehicle tracking: Desirable to guarantee that the right position messages are being transmitted and stored.
- Route/destination message delivery: Necessary part of the system, needed in order to guarantee a revenue source to the service providers.
• Privacy
- Vehicle tracking: Essential requirement to abide by EU law.
- Route/destination message delivery: Essential requirement to abide by EU law.
Table 7. Dynamic Route Planning (V2I Use Case): Security Requirements
6.3 Vehicle-to-Vehicle Use Case: Contextual Speed Advisory
The proposed Vehicle-to-Vehicle (V2V) use case employs two different mechanisms. On the one hand, a vehicle tracking process is used, where vehicles provide information to the central application server in charge of monitoring road traffic and determining the optimal speed that enables maximum vehicle mobility. On the other hand, the speed notification delivery relies on a geo-dissemination message procedure. In this process, the application server initially notifies vehicles (under the coverage of a specific RSU) about the advisable speed limit in a road segment. Next, vehicles retransmit the speed notifications received from the RSUs: when a vehicle receives a speed notification message, it processes the content and retransmits it to other vehicles using a geographical routing protocol (e.g., GeoNetworking). In fact, this is an essential feature for an effective Contextual Speed Advisory service since, most probably, not all the vehicles located in a specific road segment will be under the coverage of an RSU.
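The retransmission step can be sketched as follows. This is a simplified illustration of geo-dissemination with duplicate suppression and a hop limit, not the actual GeoNetworking wire format, and all field names are assumptions made for the sketch.

```python
# Toy multi-hop dissemination of a speed notification: a vehicle
# processes each message at most once, and rebroadcasts it only while
# the hop limit has not been exhausted.
def handle(message: dict, seen: set, outbox: list):
    msg_id = message["id"]
    if msg_id in seen:                 # already processed: drop duplicate
        return
    seen.add(msg_id)
    apply_speed_advisory(message["speed_limit"])
    if message["hops_left"] > 0:       # retransmit to neighbours
        outbox.append({**message, "hops_left": message["hops_left"] - 1})

def apply_speed_advisory(limit_kmh: int):
    # Stand-in for the on-board application reacting to the advisory.
    print(f"advised speed: {limit_kmh} km/h")

seen, outbox = set(), []
notification = {"id": "rsu42-0001", "speed_limit": 80, "hops_left": 3}
handle(notification, seen, outbox)     # first reception: retransmitted
handle(notification, seen, outbox)     # duplicate: silently dropped
assert len(outbox) == 1
assert outbox[0]["hops_left"] == 2
```

The hop limit bounds how far a notification propagates beyond RSU coverage, while duplicate suppression keeps the rebroadcast storm in check.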
Section 4.4 developed a general security requirements analysis for ITS applications, determining not only the relevant security objectives in vehicular communications but also the security needs of the different classes of ITS applications. Considering that the Contextual Speed Advisory service can be regarded as a kind of “local high-speed unicast service”, the following security requirements apply: integrity and message authenticity are mandatory security objectives, while confidentiality, timeliness, access control and privacy are optional and their use depends on the security needs of the particular ITS service.
Using this information as input, Table 8 details the specific security protection demanded by the Contextual Speed Advisory service. The analysis is particularized for both vehicle tracking and advisable speed notification, justifying the need for each security service.
• Confidentiality
- Vehicle tracking: Necessary only if privacy protection or access control is applied.
- Advisable speed notification: Necessary only if access control is applied.
• Integrity
- Vehicle tracking: Mandatory security protection to ensure that the vehicle information received by the central server is protected from unauthorized manipulation.
- Advisable speed notification: Mandatory security service to ensure that the speed notification received by vehicles has not been modified by unauthorized parties.
• Message authenticity
- Vehicle tracking: Mandatory security protection to allow the central server to corroborate that the authentic vehicle has generated the received vehicle information.
- Advisable speed notification: Mandatory security service to ensure that the speed notification is authentic and generated by the road operator (i.e., not a fake notification).
• Timeliness
- Vehicle tracking: Mandatory security protection necessary to certify that the vehicle information received by the central ITS server is recent.
- Advisable speed notification: Mandatory security objective to enable vehicles to verify that the advisable speed notification has been recently generated.
• Access control
- Vehicle tracking: Desirable security objective to ensure that vehicle information is received by the server only from vehicles authenticated and authorized to access the Contextual Speed Advisory service.
- Advisable speed notification: Desirable security objective to ensure that only authenticated and authorized vehicles receive advisable speed notifications.
• Privacy
- Vehicle tracking: Desirable security objective to protect sensitive information sent by the vehicle (e.g., position, plate number), which would be useful for attackers interested in profiling the driver's habits.
- Advisable speed notification: Not required (speed notifications are not expected to contain sensitive vehicle information).
Table 8. Contextual Speed Advisory Service (V2V Use Case): Security Requirements
7 Conclusions
The technological evolution that computer networks are expected to experience in the forthcoming years will enable a pervasive environment where every device (smartphones, vehicles, clothes, electrical appliances, etc.) is always connected to the Internet and constantly exchanging information with remote entities. In this environment, the research community is challenged to articulate a set of security services able to protect system assets against the different threats that may arise.
In this context, the design and development of secure interoperable pervasive systems is an aspect of paramount importance. Nevertheless, several problems prevent their correct application in security-critical domains. On the one hand, there is a lack of sufficiently rich techniques to tackle the modelling, interoperability, deployment, enforcement and supervision of security policies. On the other hand, there is a lack of techniques that allow security mechanisms to adapt dynamically to changes in the requirements and in the environment.
This deliverable presents an exhaustive analysis of existing work in this field, from which the following needs have been identified to solve the aforementioned problems: definition of mechanisms enabling security in heterogeneous and pervasive systems; modelling of secure interoperability policies with time constraints; use of procedures enabling the secure establishment of trust relationships among systems; use of AOP to enforce security requirements; application of monitoring techniques to obtain improved visibility of the system; and deployment of active testing, fuzzing and fault removal mechanisms to verify the conformity of implementations and ensure secure interoperability among systems.
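As a brief illustration of the AOP-based enforcement idea mentioned above, the following Python sketch mimics aspect weaving with a decorator that wraps an access-control check around application code without modifying that code. All names and the policy itself are hypothetical examples, not part of the Inter-Trust framework:

```python
import functools

def enforce_policy(policy):
    """Aspect-like decorator: weaves a security policy check around a
    function, keeping the security concern separate from the business logic
    (the core idea of AOP-based security enforcement, in miniature)."""
    def weave(func):
        @functools.wraps(func)
        def wrapper(subject, *args, **kwargs):
            if not policy(subject):
                raise PermissionError(f"policy denies {subject!r}")
            return func(subject, *args, **kwargs)
        return wrapper
    return weave

# Hypothetical policy: only the 'admin' subject may perform the action.
is_admin = lambda subject: subject == "admin"

@enforce_policy(is_admin)
def delete_record(subject, record_id):
    # Business logic contains no security code; the aspect supplies it.
    return f"{subject} deleted record {record_id}"
```

Frameworks such as AspectJ apply the same separation at the language level, with pointcuts selecting join points instead of explicit decorators, which additionally allows the woven policy to be changed without touching the application source.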
Although the project outcomes will be cross-domain, Inter-Trust will demonstrate the benefits of the developed framework in two scenarios where pervasive computing is of growing interest: the e-voting and V2X/ITS domains. For this reason, this deliverable also analyses existing research and standards in these domains. This analysis has identified a common problem: the absence of a proper security framework that secures communications in a flexible and efficient manner. On the one hand, the e-voting scenario demands the fulfilment of strong security requirements inherent to the voting process itself. On the other hand, the V2X/ITS case represents a highly dynamic scenario where vehicles exchange information with neighbouring vehicles and with the infrastructure. This communication is required to be secure, since the transmitted information may be critical to preserving the safety of vehicle drivers and occupants.
In summary, this deliverable provides a complete gap and standards analysis, overviews existing research related to the provision of security in pervasive systems, and outlines the problems to be solved that motivate the work conducted within the Inter-Trust project.
8 References
[1] ICT Standards Board, “Network and Information Security Standards Report”, Comité Européen de Normalisation (CEN), June 2007.
[2] Ross J. Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, 1st edition, January 2001.
[3] E. Bayse, A. Cavalli, M. Núñez and F. Zaidi. A Passive Testing Approach based on Invariants: Application to the WAP. Computer Networks, 48, pp. 247-266, 2005.
[4] César Andrés, María-Emilia Cambronero, Manuel Núñez: Formal Passive Testing of Service-Oriented Systems. IEEE SCC 2010: 610-613.
[5] Buchsbaum, T. “E-voting: International developments and lessons learnt”. Proceedings of Electronic Voting in Europe: Technology, Law, Politics and Society. Lecture Notes in Informatics. Workshop of the ESF TED Programme together with GI and OCG, 2004.
[6] Zissis, D.; Lekkas, D. “Securing e-Government and e-Voting with an open cloud computing architecture”. Government Information Quarterly 28(2): 239-251, April 2011.
[7] Karagiannis, G.; Altintas, O.; Ekici, E.; Heijenk, G.; Jarupan, B.; Lin, K.; Weil, T. “Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions”. IEEE Communications Surveys & Tutorials, vol. 13, no. 4, pp. 584-616, Fourth Quarter 2011.
[8] Standardisation Mandate addressed to CEN, CENELEC and ETSI in the field of Information and Communication Technologies to support the interoperability of co-operative systems for intelligent transport in the European Community, October 2009.
[9] ISO 21217:2010, “Intelligent transport systems – Communications Access for Land Mobiles (CALM) – Architecture”, April 2010.
[10] ETSI EN 302 665, “Intelligent Transport Systems (ITS); Communications Architecture”, September 2010.
[11] ITSSv6: D2.1: Preliminary System Recommendations. Public deliverable, May 2012.
[12] ITSSv6: D2.2: Preliminary System Specification. Public deliverable, May 2012.
[13] ETSI TR 102 638 V1.1.1: “Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Definitions”, June 2009.
[14] ETSI TR 102 893: “Intelligent Transport Systems (ITS); Security; Threat, Vulnerability and Risk Analysis (TVRA)”.
[15] ETSI TS 102 940: “Intelligent Transport Systems (ITS); Security; ITS communications security and security management”.
[16] Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone. “Handbook of Applied Cryptography”, CRC Press, 1997.
[17] ETSI TS 102 731: “Intelligent Transport Systems (ITS); Security; Security Services and Architecture”.
[18] ISO 16788: “Intelligent transport systems – Communications access for land mobiles (CALM) – IPv6 Networking Security”, April 2012.
[19] ISO 16789: “Intelligent transport systems – Communications Access for Land Mobiles (CALM) – IPv6 Networking Optimization”, April 2012.
[20] ETSI TS 102 637-2: “Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Part 2: Specification of Cooperative Awareness Basic Service”, March 2011.
[21] ETSI TS 102 637-3: “Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Part 3: Specifications of Decentralized Environmental Notification (DENM) Basic Service”, September 2010.
[22] ETSI TS 102 940: “Intelligent Transport Systems (ITS); Security; ITS communications security architecture and security management”, March 2012.
[23] IEEE Std. 1609.2 draft D12 (January 2012): “Wireless Access in Vehicular Environments - Security Services for Applications and Management Messages”.
[24] ETSI TS 102 941: “Intelligent Transport Systems (ITS); Security; Trust and Privacy Management”, January 2012.
[25] ETSI TS 102 942: “Intelligent Transport Systems (ITS); Security; Access Control”, February 2012.
[26] ETSI TS 102 867: “Intelligent Transport Systems (ITS); Security; Stage 3 mapping for IEEE 1609.2”.
[27] A. Joshi. On proxy agents, mobility and web access. ACM/Baltzer Journal of Mobile Networks and Applications, 2000.
[28] M. Dunham, A. Helal, and S. Balakrishnan. A mobile transaction model that captures both the data and movement behavior. ACM/Baltzer Journal of Mobile Networks and Applications, 2(2):149-162, 1997.
[29] Charles-Éric Pigeot, Yann Gripay, Vasile-Marian Scuturici, Jean-Marc Pierson. Context-Sensitive Security Framework for Pervasive Environments. ECUMN 2007, IEEE, Toulouse, pp. 391-400.
[30] M. Anisetti, C.A. Ardagna, V. Bellandi, E. Damiani, S. De Capitani di Vimercati, and P. Samarati. “OpenAmbient: a Pervasive Access Control Architecture”, in Proc. of ETRICS Workshop on Security in Autonomous Systems (SecAS), June 2006.
[31] Aivaloglou, E.; Gritzalis, S.; Skianis, C. Requirements and Challenges in the Design of Privacy-aware Sensor Networks. Global Telecommunications Conference, 2006. GLOBECOM '06. IEEE.
[32] C. C. Aggarwal and P. S. Yu. A survey of randomization methods for privacy-preserving data mining. In Privacy-Preserving Data Mining: Models and Algorithms, C. C. Aggarwal and P. S. Yu, Eds., New York, NY: Springer US, pp. 137-156, 2008.
[33] J. Domingo-Ferrer and J. M. Mateo-Sanz. Practical data-oriented microaggregation for statistical disclosure control. IEEE Transactions on Knowledge and Data Engineering, 14(1):189-201, 2002.
[34] A. Hundepool, J. Domingo-Ferrer, L. Franconi, S. Giessing, R. Lenz, J. Longhurst, E. Schulte-Nordholt, G. Seri, and P.-P. DeWolf. Handbook on Statistical Disclosure Control (version 1.2). ESSNET SDC Project, 2010. http://neon.vb.cbs.nl/casc
[35] G. T. Duncan, M. Elliot and J. J. Salazar. Statistical Confidentiality: Principles and Practices. New York: Springer, 2011.
[36] D. B. Rubin. Discussion of statistical disclosure limitation. Journal of Official Statistics, 9(2):461-468, 1993.
[37] J. Domingo-Ferrer and Ú. González-Nicolás. Hybrid data using microaggregation. Information Sciences, 180(15):2834-2844, 2010.
[38] B. Pinkas. Cryptographic Techniques for Privacy-Preserving Data Mining. SIGKDD Explorations, the newsletter of the ACM Special Interest Group on Knowledge Discovery and Data Mining, January 2003.
[39] A. H. Karp, H. Haury, M. H. Davis. “From ABAC to ZBAC: The Evolution of Access Control Models”. Tech. Report HPL-2009-30, HP Laboratories, February 2009.
[40] M. Miller, Ka-Ping Yee, J. Shapiro. “Capability Myths Demolished”. Technical Report SRL2003-02, Systems Research Laboratory, Johns Hopkins University, 2003.
[41] A. Lackorzynski, A. Warg. “Taming subsystems: capabilities as universal resource access control in L4”. Proceedings of the Second Workshop on Isolation and Integration in Embedded Systems (IIES '09), April 2009.
[42] G. D. Skinner. “Cyber Security Management of Access Controls in Digital Ecosystems and Distributed Environments”. 6th International Conference on Information Technology and Applications (ICITA 2009), November 2009.
[43] Karin Bernsmed, Martin Gilje Jaatun, Per Håkon Meland, Astrid Undheim: Security SLAs for Federated Cloud Services. ARES 2011: 202-209.
[44] Vishal Dwivedi. Providing Web Services Security SLA Guarantees: Issues and Approaches. Chapter in Managing Web Service Quality: Measuring Outcomes and Effectiveness, ed. Khaled M. Khan, IGI Global, 2009.
[45] Irfan Ul Haq, Kevin Kofler, Erich Schikuta: Dynamic Service Configurations for SLA Negotiation. Euro-Par Workshops 2010: 315-323.
[46] A. Abou El Kalam, R. El Baida, P. Balbiani, S. Benferhat, F. Cuppens, Y. Deswarte, A. Miège, C. Saurel and G. Trouessin. Organization Based Access Control. IEEE 4th International Workshop on Policies for Distributed Systems and Networks (Policy 2003), Lake Como, Italy, June 4-6, 2003.
[47] Frédéric Cuppens, Nora Cuppens-Boulahia, Céline Coma: O2O: Virtual Private Organizations to Manage Security Policy Interoperability. ICISS 2006.
[48] Jaehong Park and Ravi Sandhu. The UCONABC usage control model. ACM Transactions on Information and System Security (TISSEC), 7(1):128-174, 2004.
[49] Jaehong Park and Ravi Sandhu. The UCONABC usage control model. ACM Transactions on Information and System Security (TISSEC), 7(1):128-174, 2004.
[50] Frédéric Cuppens, Nora Cuppens-Boulahia, and Thierry Sans. Nomad: A Security Model with Non Atomic Actions and Deadlines. 18th IEEE Computer Security Foundations Workshop (CSFW'05), 2005.
[51] M. A. Harrison, W. L. Ruzzo, and J. D. Ullman. Protection in Operating Systems. Communications of the ACM, 19(8):461-471, August 1976.
[52] D. E. Bell and L. J. LaPadula. Secure computer systems: Unified exposition and Multics interpretation. ESDTR-73-306, March 1976.
[53] R. Sandhu, E. J. Coyne, H. L. Feinstein, and C. E. Youman. Role-based access control models. IEEE Computer, 29(2):38-47, 1996.
[54] A. Lee, M. Winslett, and K. Perano. TrustBuilder2: A Reconfigurable Framework for Trust Negotiation. In Third IFIP WG 11.11 International Conference on Trust Management (IFIPTM), pages 176-195, West Lafayette, IN, 2009.
[55] R. Filman and D. Friedman. Aspect-oriented programming is quantification and obliviousness. Proc. Advanced Separation of Concerns, OOPSLA 2000.
[56] Ramnivas Laddad. AspectJ in Action: Practical Aspect-Oriented Programming. ISBN 1-93011-93-6. See also www.eclipse.org/aspectj/
[57] N. Loughran, et al. A domain analysis of key concerns - known and new candidates. AOSD-Europe Deliverable D43, AOSD-Europe-KUL-6. http://www.aosd-europe.net/deliverables/d43.pdf
[58] Kung Chen, Ching-Wei Lin: An Aspect-Oriented Approach to Declarative Access Control for Web Applications. APWeb 2006.
[59] D. Xu and V. Goel. An aspect-oriented approach to mobile agent access control. International Conference on Information Technology: Coding and Computing (ITCC), 2005.
[60] Geri Georg, Indrakshi Ray, Kyriakos Anastasakis, Behzad Bordbar, Manachai Toahchoodee, Siv Hilde Houmb. An aspect-oriented methodology for designing secure applications. Information and Software Technology, Volume 51, Issue 5, May 2009, Pages 846-864.
[61] Wasif Gilani, Olaf Spinczyk: Dynamic Aspect Weaver Family for Family-based Adaptable Systems. NODe/GSEM 2005: 94-109.
[62] Alexandre Vasseur: AspectWerkz 2, simple, high-performant, dynamic, lightweight and powerful AOP for Java. aspectwerkz.codehaus.org
[63] Eddy Truyen, Nico Janssens, Frans Sanen, Wouter Joosen. Support for Distributed Adaptations in Aspect-Oriented Middleware. AOSD'08, March 2008, Brussels, Belgium.
[64] G. G. Pascual, L. Fuentes and M. Pinto. Towards an Aspect-Oriented Reconfigurable Middleware for Pervasive Systems: Implementation and Evaluation. Adaptive and Reflective Middleware - ARM 2011.
[65] Kallel, S., Charfi, A., Mezini, M., Jmaiel, M., and Klose, K. (2009). From formal access control policies to run-time enforcement aspects. Engineering Secure Software and Systems, LNCS 5429, 16-31.
[66] C. Hankin, F. Nielson, H. R. Nielson. Advice from Belnap policies. In Proceedings of the 22nd IEEE Computer Security Foundations Symposium, CSF'09, IEEE Computer Society, 2009, pp. 234-247.
[67] Fan Yang, Tomoyuki Aotani, Hidehiko Masuhara, Flemming Nielson, and Hanne Riis Nielson. Combining Static Analysis and Run-time Checking in Security Aspects for Distributed Tuple Spaces. Proc. COORDINATION 2011.
[68] Kwang-Sik Shin, Jin-Ha Jung, Jin Young Cheon, Sang-Bang Choi. “Real-time network monitoring scheme based on SNMP for dynamic information”. Journal of Network and Computer Applications (JNCA) 30(1):331-353, 2007.
[69] Wenxian Zeng, Yue Wang: Design and Implementation of Server Monitoring System Based on SNMP. JCAI 2009: 680-682.
[70] Eric H. Corwin: Deep Packet Inspection: Shaping the Internet and the Implications on Privacy and Security. Information Security Journal: A Global Perspective (ISJGP) 20(6):311-316, 2011.
[71] Mark Richters, Martin Gogolla. Aspect-Oriented Monitoring of UML and OCL Constraints. In AOSD Modeling With UML Workshop, 6th International Conference on the Unified Modeling Language (UML), 2003.
[72] Amjad Nusayr, Jonathan Cook: Extending AOP to Support Broad Runtime Monitoring Needs. SEKE 2009: 438-441.
[73] R. M. Hierons, J. P. Bowen, and M. Harman: Formal Methods and Testing: An Outcome of the FORTEST Network, Revised Selected Papers. Lecture Notes in Computer Science, volume 4949, Springer, 2008.
[74] F. Zaïdi, A. Cavalli, and E. Bayse. Network Protocol Interoperability Testing based on Contextual Signatures and Passive Testing. In 24th Annual ACM Symposium on Applied Computing, pages 2-7, Hawaii, USA, March 2009. ACM.
[75] P. Oehlert. “Violating assumptions with fuzzing”. IEEE Security & Privacy, 2005, pp. 58-62.
[76] J. Paul Gibson, Eric Lallet, and Jean-Luc Raffy. Engineering a distributed e-voting system architecture: Meeting critical requirements. In Holger Giese, editor, Architecting Critical Systems, First International Symposium, Prague, June 23-25, 2010, Proceedings, volume 6150 of Lecture Notes in Computer Science, pages 89-108. Springer, 2010.
[77] T. Storer and I. Duncan. “Polsterless remote electronic voting”. Journal of E-Government, vol. 1, pp. 75-103, October 2004.
[78] Kengo Mori and Kazue Sako. The possibility of cryptographic e-voting with mobile phones. Workshop On Trustworthy Elections (WOTE 2006), United Kingdom, June 2006.
[79] “The European Communications Architecture for Co-operative Systems – Summary Document”. European Commission, Information Society & Media DG, Brussels, 2009.
[80] M. Raya, P. Papadimitratos, and J.-P. Hubaux. Securing vehicular communications. IEEE Wireless Communications Magazine, Volume 13, Issue 5, October 2006.
[81] F. Kargl, P. Papadimitratos, L. Buttyán, M. Müter, B. Wiedersheim, E. Schoch, T.-V. Thong, G. Calandriello, A. Held, A. Kung, and J.-P. Hubaux. Secure vehicular communications: Implementation, performance, and research challenges. IEEE Communications Magazine, November 2008.
[82] Gerardo Morales, Stéphane Maag, Ana R. Cavalli, Wissam Mallouli, Edgardo Montes de Oca, Bachar Wehbi: Timed Extended Invariants for the Passive Testing of Web Services. ICWS 2010: 592-599.
[83] A. Boukerche and Y. Ren. “A trust-based security system for ubiquitous and pervasive computing environments”. Computer Communications, vol. 31, pp. 4343-4351, 2008.
[84] M. Raya and J. Hubaux. “Securing vehicular ad hoc networks”. Journal of Computer Security, vol. 15, no. 1, pp. 39-68, 2007.
[85] R. Lu, X. Lin, H. Zhu, P. Ho and X. Shen. “ECPP: Efficient conditional privacy preservation protocol for secure vehicular communications”. In IEEE INFOCOM 2008, pp. 1229-1237, 2008.
[86] J. Guo, J. P. Baugh and S. Wang. “A group signature based secure and privacy-preserving vehicular communication framework”. In 2007 Mobile Networking for Vehicular Environments, pp. 103-108, 2007.
[87] Q. Wu, Y. Mu, W. Susilo, B. Qin, J. Domingo-Ferrer. “Asymmetric group key agreement”. In EUROCRYPT 2009, pp. 153-170.
[88] L. Zhang, Q. Wu, B. Qin, J. Domingo-Ferrer. “Practical Privacy for Value-Added Applications in Vehicular Ad Hoc Networks”. In IDCS 2012, pp. 43-56.
[89] Council of Europe, Committee of Ministers. “Recommendation Rec(2004)11 of the Committee of Ministers to member states on legal, operational and technical standards for e-voting”, 2004.
[90] Neff, A., Adler, J. “Verifiable e-Voting: Indisputable Electronic Elections at Polling Places”. VoteHere Inc., 2003.
[91] Puiggalí, J. and Morales-Rocha, V. “Remote voting schemes: a comparative analysis”. In Proceedings of the 1st International Conference on E-Voting and Identity (Bochum, Germany, October 4-5, 2007). A. Alkassar and M. Volkamer, Eds. Lecture Notes in Computer Science. Springer-Verlag, Berlin, Heidelberg, 16-28, 2007.
[92] Regenscheid, A. and Hastings, N. “A Threat Analysis on UOCAVA Voting Systems”. NIST, 2008.
[93] Macintosh, A. “Characterizing E-Participation in Policy-Making”. In Proceedings of the Thirty-Seventh Annual Hawaii International Conference on System Sciences (HICSS-37), January 5-8, 2004, Big Island, Hawaii.
[94] Langer, L.; Schmidt, A.; Buchmann, J.; Volkamer, M.; and Stolfik, A. “Towards a Framework on the Security Requirements for Electronic Voting Protocols”. Proceedings of the 2009 First International Workshop on Requirements Engineering for e-Voting Systems (RE-VOTE '09). IEEE Computer Society, Washington, DC, USA, 61-68, 2009.
[95] M. Volkamer and R. Vogt. “Basic set of security requirements for Online Voting Products”. Bundesamt für Sicherheit in der Informationstechnik, Bonn, Common Criteria Protection Profile BSI-CC-PP-0037, April 2008. http://www.bsi.de/zertifiz/zert/reporte/pp0037b_engl.pdf
[96] Rojan Gharadaghy, Melanie Volkamer: Verifiability in Electronic Voting - Explanations for Non Security Experts. Electronic Voting 2010: 151-162.
Annex A. ITS Applications Communication Behaviour
Use case | Type | Addressing | Hops
Emergency vehicle warning | V2V/V2I | Broadcast | Single
Slow vehicle indication | V2V | Broadcast | Single
Intersection collision warning | V2V/V2I | Broadcast | Single
Motorcycle approaching indication | V2V | Broadcast | Single
Emergency electronic brake lights | V2V | Broadcast | Multi
Wrong way driving warning | I2V | Broadcast | Single
Stationary vehicle – accident | V2V/V2I | Broadcast | Multi
Stationary vehicle – vehicle problem | V2V/V2I | Broadcast | Multi
Traffic condition warning | V2V/V2I | Broadcast | Multi
Signal violation warning | I2V | Broadcast | Single
Roadwork warning | I2V | Broadcast | Multi
Collision risk warning | V2V/V2I | Broadcast | Single
Decentralized floating car data – Hazardous location | V2V/I2V | Broadcast | Multi
Decentralized floating car data – Precipitations | V2V | Broadcast | Multi
Decentralized floating car data – Road adhesion | V2V | Broadcast | Multi
Decentralized floating car data – Visibility | V2V | Broadcast | Multi
Decentralized floating car data – Wind | V2V | Broadcast | Multi
Regulatory / contextual speed limits notification | I2V | Broadcast | Single
Traffic light optimal speed advisory | I2V | Broadcast | Multi
Traffic information and recommended itinerary | I2V | Unicast/Multicast | Multi
Enhanced route guidance and navigation | I2V | Unicast/Multicast | Multi
In-vehicle signage | I2V | Broadcast | Single
Point of Interest notification | I2V | Multicast | Single
Automatic access control and parking management | I2V/V2I | Unicast | Single
Local electronic commerce | I2V/V2I | Unicast | Single
Media downloading | I2V/V2I | Unicast | Single
Insurance and financial services | I2V/V2I | Unicast | Single
Fleet management | I2V/V2I | Unicast | Single
Loading zone management | I2V/V2I | Unicast/Multicast | Single
Vehicle software / data provisioning and update | I2V/V2I | Unicast | Single
Vehicle and RSU data calibration | I2V/V2I | Unicast | Single
Table 9. Detailed description of ITS applications behaviour (extracted from [15])