Introducing CAS 3.0 Protocol: Security and Performance

M. Amin Saghizadeh

JUN 2015

1) Introduction

In this document we review the security and performance of the Central Authentication Service (CAS) protocol. CAS is a single-sign-on / single-sign-off (SSO) protocol for the web, originally created by Yale University to provide a trusted way for an application to authenticate a user. It permits a user to access multiple applications while providing their credentials (such as USERID and PASSWORD) only once, to a central CAS server application.

After a brief introduction to the protocol and its flows in section 2, we review some security aspects of the protocol in section 3 and see how secure it is against common attacks on authentication protocols. Section 4 reviews the performance of the protocol, along with some considerations about high availability. We conclude in section 5.

2) Introducing CAS 3.0 protocol

The CAS protocol is a simple and powerful ticket-based protocol developed exclusively for CAS [1]. It involves one or many clients and one server. Clients are embedded in CASified applications, called CAS services, whereas the CAS server is a standalone component. The CAS server is responsible for authenticating users and granting access to applications; the CAS clients protect the CAS applications and retrieve the identity of the granted users from the CAS server.

2-1) Base Definitions

Before we proceed any further, we should take a look at some definitions and conventions used in the rest of the document [2]. The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this document are to be interpreted as described in RFC 2119 [3].

Client. Client refers to the end user and/or the web browser.

CAS Client. CAS Client refers to the software component that is integrated with a web application and

interacts with the CAS server via CAS protocol.

Server. Server refers to the Central Authentication Service server.

Service. Service refers to the application the client is trying to access.

Back-end Service. Back-end service refers to the application a service is trying to access on behalf of a

client. This can also be referred to as the Target Service.

SSO. SSO refers to Single Sign On.

SLO. SLO refers to Single Logout.

2-2) CAS URLs

CAS is an HTTP-based [4] protocol that requires each of its components to be accessible through specific URIs. This section discusses each of these URIs briefly. Listing (1) shows the list of the URIs and what each is supposed to do.

Listing 1 – CAS URLs and their meanings

URI                   Description
/login                credential requestor / acceptor
/logout               destroy CAS session (logout)
/validate             service ticket validation
/serviceValidate      service ticket validation [CAS 2.0]
/proxyValidate        service/proxy ticket validation [CAS 2.0]
/proxy                proxy ticket service [CAS 2.0]
/p3/serviceValidate   service ticket validation [CAS 3.0]
/p3/proxyValidate     service/proxy ticket validation [CAS 3.0]

Detailed information about these URLs and their implementation parameters is out of the scope of this document and is available in [2].
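As a concrete illustration of the URIs in Listing (1), a CAS client could build a CAS 3.0 service-ticket validation request as sketched below. This is a minimal sketch only: the server base URL, the service URL and the ticket value are hypothetical examples, not part of any real deployment.

```python
from urllib.parse import urlencode

# Hypothetical CAS server base URL; real deployments will differ.
CAS_BASE = "https://cas.example.edu/cas"

def build_validate_url(service: str, ticket: str) -> str:
    """Build a CAS 3.0 service-ticket validation URL (/p3/serviceValidate)."""
    query = urlencode({"service": service, "ticket": ticket})
    return f"{CAS_BASE}/p3/serviceValidate?{query}"

url = build_validate_url("https://app.example.edu/", "ST-1-abc")
print(url)
```

The application would issue an HTTPS GET to this URL and parse the XML validation response returned by the server.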

2-3) Protocol Flows

This section describes how the protocol actually works.

2-3-1) Web Flow

As the client goes through the life cycle of the authentication process, it must behave differently in different situations. When the client wants to access a protected application for the first time, it must log in to the CAS server with its credentials to obtain an SSO session, with a Ticket Granting Ticket (TGT) acting as its session key. In further access attempts to that protected application, the client only submits its session key to the protected application. In addition, when the client wants to access another protected application, it submits its TGT to the server and, after validation, receives another session key and session cookie for accessing that application.

Figure (1) illustrates the process of the first access of the client to a protected application.

Figure 1 – First access to a protected application

At first, the client requests access to the protected application. As it is not authenticated yet, the application redirects the client to the server. The server sees that the client doesn't have an SSO session and presents the login form to the client.

The client submits its credentials to the server. The server authenticates the client, generates a Ticket-Granting Ticket (TGT) and sends it to the client as a cookie. This TGT is the session key of the session between the client and the server. The server also generates a Service Ticket (ST) and sends it to the client in the redirect back to the application.

The client submits the ST to the service (i.e. the application). The application then sends the received ST to the server so that the server can validate it. The server validates the ST and returns an XML document containing a success indicator, the authenticated subject and other attributes.

Finally, the application generates a session key and sends it to the client. Now the client can request access to the application by submitting the session key. The application validates the session key and serves the client's request.
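The first-access steps above can be sketched as a toy, in-memory simulation. This is purely illustrative: the ticket formats, the placeholder credential check and the helper names are assumptions for the sketch, not a real CAS server API.

```python
import secrets

# In-memory stores standing in for CAS server state (illustrative only).
tgts = {}       # TGT -> username (the SSO session)
tickets = {}    # ST  -> (username, service URL), single use

def login(username: str, password: str, service: str):
    """Authenticate once; create an SSO session (TGT) and a service ticket (ST)."""
    assert password == "secret"          # placeholder credential check
    tgt = "TGT-" + secrets.token_hex(8)
    tgts[tgt] = username
    st = "ST-" + secrets.token_hex(8)
    tickets[st] = (username, service)
    return tgt, st

def validate(st: str, service: str):
    """Server-side ST validation; STs are single-use and bound to one service."""
    user, bound_service = tickets.pop(st)    # removing makes the ST single-use
    assert bound_service == service
    return user                              # stands in for the XML success response

tgt, st = login("alice", "secret", "https://app.example.edu/")
print(validate(st, "https://app.example.edu/"))  # prints "alice"
```

A second call to validate() with the same ST fails, reflecting the single-use nature of service tickets.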

The steps of further accesses to the same protected application are shown in Figure (2).

Figure 2 – Further accesses to the same application

Like the last step of the previous process, the client requests access to the application by submitting the session key. The application validates the session key and serves the client's request.

Figure (3) shows the process of accessing new protected applications when the client has a TGT from its previous accesses to other protected applications.

Figure 3 – First access to the other applications

Here, the client has already authenticated itself to the server. The client requests access to a new protected application. As it does not have a session key, the application redirects it to the server.

Through this redirection, the client sends its TGT as the session key of its session with the server. The server validates the TGT (i.e. the session key), generates an ST and sends it to the client.

The client sends the ST to the application. The application validates the ST by sending it to the server and receiving the success indicator from it. The application then generates a session key for the client and sends it to the client.

From now on, the client can submit the session key and access the application's resources. The application validates the session key and serves the client's request.

2-3-2) Web Proxy Flow

One of the features of the CAS protocol is the ability for a CAS service to act as a proxy for another CAS service, transmitting the user identity [1]. In this approach, the client first authenticates with the server and sets up a session with the proxy application to obtain a session key. Here, the client does not interact with the protected application at all – it just sends the request to the proxy and, after the protocol steps and validations, receives the requested resources of the application through the proxy. This approach consists of two flows: loading the application proxy, and accessing application resources through the proxy.

Figure (4) shows the flow for setting up a session with the proxy application. This flow includes authenticating with the server and obtaining a session key for accessing the proxy.

Figure 4 – Load application proxy

The client requests access to the proxy (not the application). As the client is not authenticated, the proxy redirects it to the server. The server sees that the client doesn't have a session key (TGT) and presents the login form to it.

The client submits its credentials to the server. The server authenticates the client, generates a TGT and sends it to the client as a cookie. The server also generates an ST for the client and sends it in the redirect back to the proxy.

The client sends the ST to the proxy. The proxy validates the ST by sending it to the server. Along with the ST, the proxy sends its own callback URL to the server. The final goal of this URL is to mitigate impersonation attacks in the further steps.

The server validates the ST and generates a PGT. Instead of sending the PGT itself to the proxy, the server sends a PGT identifier, along with an IOU (PGTIOU), to the proxy. The proxy stores the mapping between the PGT (i.e. the identifier) and the PGTIOU for further use. By now, the proxy has registered a callback URL on the server and received a receipt (PGT and PGTIOU) from it. From now on, upon receiving a PGTIOU from the server, the proxy can authenticate the server by finding a mapping between the PGTIOU and a valid PGT (which in fact is tied to the client's TGT) and conclude that the client is already authenticated to the CAS server.

Then the proxy generates a session key for the client and sends the key to it. From now on, the client can access the proxy by submitting that session key and finally receives a specific resource of the application via the proxy. However, the client cannot access all the other resources of the application, or the resources of other applications, with this session key.

This is because the session key is mapped to a specific PGT-PGTIOU pair, and that pair is mapped to a specific TGT on the CAS server, causing the creation of a specific ST which is valid only for accessing specific resources of the application.
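The PGTIOU-to-PGT bookkeeping described above can be sketched in a few lines. The function names and ticket strings are hypothetical stand-ins for the proxy's internals, not a real CAS client API; the real callback arrives as an HTTPS request from the server.

```python
# The proxy's PGTIOU -> PGT map, populated via the server's callback
# to the registered callback URL (names are illustrative only).
pgt_by_iou = {}

def proxy_callback(pgt_iou: str, pgt: str) -> None:
    """Invoked when the CAS server calls back the proxy over HTTPS."""
    pgt_by_iou[pgt_iou] = pgt

def redeem_iou(pgt_iou: str) -> str:
    """After validating an ST, the proxy exchanges the PGTIOU found in the
    validation response for the stored PGT."""
    return pgt_by_iou.pop(pgt_iou)

proxy_callback("PGTIOU-1-xyz", "PGT-1-abc")
print(redeem_iou("PGTIOU-1-xyz"))  # prints "PGT-1-abc"
```

Because the PGT only ever travels over the callback channel that the proxy itself registered, a party that merely observes the PGTIOU cannot redeem it.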

After setting up a session with the proxy, the client sends its request to the proxy and receives the requested resources from the proxy application. Figure (5) illustrates this flow.

Figure 5 – Access to application resources via proxy

The client sends a request to the proxy and submits the session key. The proxy validates the session key, retrieves the appropriate PGT and sends it to the server. Along with the PGT, the proxy submits the target service (i.e. application) URL to the server. The server generates an ST based on the target service URL and sends it to the proxy.

The proxy sends the ST to the application. Then the application validates the ST by sending it to the server. The server retrieves the appropriate callback URL of the proxy and sends it to the application. Now the application can check whether this callback URL and the referrer URL of the previous request from the proxy match or not. If they do, it believes that the proxy is genuine and nobody is trying to impersonate it.

Then the application generates a session key for the proxy and delivers it to the proxy. From now on, the proxy can submit the session key to the application and will receive the content requested by the client from the application. The proxy then delivers the resource to the client.

3) Security Analysis

CAS provides secure Web-based single sign-on to Web-based applications. Single sign-on provides a win/win in terms of security and convenience: it reduces password exposure to a single, trusted credential broker while transparently providing access to multiple services without repetitious logins. However, these assertions are from the defender's (white-hat) point of view – the attacker's (black-hat) one shows us that not only can the above points be compromised, but the system could also end up in a situation even less secure than a basic authentication approach.

In this section we discuss common security risks and attacks on the CAS protocol. A fair CAS threat model and a proposal to mitigate security risks are available in [5] and [6] respectively, from the Apereo Foundation [7]. Some security features of the CAS protocol and its implementation are mentioned in [8], too.

Almost all security concerns of CAS stem from the fact that all communication with the CAS server MUST occur over a secure channel [8] (i.e. SSL or TLS). There are two primary justifications for this requirement: the authentication process requires transmission of security credentials, and the CAS ticket-granting ticket (TGT) is a bearer token.

Since the disclosure of either datum would allow impersonation attacks, it is vitally important to secure the communication channel between CAS clients and the CAS server.

Practically, this means that all CAS URLs must use HTTPS, but it also means that all connections from the CAS server to the application must be made using HTTPS – when the generated service ticket is sent back to the application on the service URL, and when a proxy callback URL is called.

If the protocol doesn't go through HTTPS, it is really easy to perform eavesdropping, replay, MITM and impersonation attacks. An attacker can eavesdrop on the protocol flows and, in the end, obtain the protected resource, because the resources are also sent in the clear. The attacker can also perform a replay attack by sending the appropriate session key at each step, which amounts to an impersonation attack, because all parties are identified only by their session keys. The attacker can perform MITM attacks too, and can use such an attack to deliver incorrect or harmful data instead of the protected resource.

4) Performance Analysis

In this section we discuss the performance of the CAS protocol. CAS is a lightweight and easy-to-implement protocol, so its performance is high in comparison with the advantages it offers. The performance of the protocol can be analyzed for each of its approaches: the web flow and the web proxy flow. In this document, we concentrate on the performance of the communication and skip the computational and storage performance analysis of the protocol.

4-1) Base Definitions

First of all, we define some terms used in the rest of this section.

CAS Entity. Each of the client, the server, the service (proxy or protected application) and the target service (the protected application in the proxy flow) is considered a CAS entity.

Interaction. An interaction is defined as a Request–Response pair sent to and received from a CAS entity.

First Access. The steps to access any of the protected applications for the first time in the web flow.

Second Access. Accessing the same protected application again in the web flow.

First Access to Another. Accessing another protected application, after a successful access to a protected application in the web flow.

Load Proxy. The process of authenticating to the server and obtaining a session with the proxy

application in the web proxy flow.

Access via Proxy. Accessing the application resources through the proxy in the web proxy flow.

4-2) Performance of the Web Flow

The web flow consists of three separate phases: First Access, Second Access, and First Access to Another. We discuss the performance of each phase in this section and then take an overall look at the performance of the web flow.

There are six interactions in the First Access, as illustrated in Figure (1). Two of them are between the client and the server, and another one of the six is between the server and the application. That is, the server is engaged in 50% of the communication.

On the other hand, there is only one interaction in the Second Access, as shown in Figure (2), and it is between the client and the application. So the server is completely idle in this phase, which is a good point because the server can dedicate its resources to handling the First Access phases of the clients.

Finally, there are five interactions in the First Access to Another, as you can see in Figure (3). The server is a party to two of them, one with the client and the other with the application. Therefore, the share of the server is reduced from 50% to 40% in comparison to the First Access phase.
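The interaction counts above can be checked with a few lines of arithmetic; the counts are simply read off Figures (1)–(3), and the phase names are this document's own terms:

```python
# Interaction counts per phase (total, and how many involve the server).
phases = {
    "first_access":            {"total": 6, "server": 3},
    "second_access":           {"total": 1, "server": 0},
    "first_access_to_another": {"total": 5, "server": 2},
}

for name, p in phases.items():
    share = p["server"] / p["total"]
    print(f"{name}: server engaged in {share:.0%} of interactions")
```

This reproduces the 50%, 0% and 40% server shares discussed in the text.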

If the communication pattern of the system is accessing each protected application just once, then at large scales of this pattern, the server will handle about 50% of the network traffic and become the bottleneck of the whole system. Moreover, the bandwidth efficiency will be about 1/6, or 16.67%. This situation can occur when a server is responsible for protecting many applications and there are many clients that want to access some application just once, or log out quickly.

However, if the traffic pattern of the system is accessing the same application or a few applications, most requests don't impose any extra interactions on the channel, as there is only one interaction in the Second Access and it is also the one used to retrieve the requested resources. So it can dramatically increase the bandwidth efficiency. In this situation, the bandwidth efficiency is highly dependent on the amount of protected resources and data to be retrieved and on the length of the session key. Therefore, if the requested data is much larger than the session key, the bandwidth efficiency goes very high and CAS is really a high-performance protocol in this situation.
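The efficiency figures in this section can be made concrete with a toy model; the assumption of a fixed byte cost per interaction is an illustrative simplification of this document's argument, not a measurement:

```python
def bandwidth_efficiency(payload_bytes: int, overhead_per_interaction: int,
                         overhead_interactions: int) -> float:
    """Fraction of all transferred bytes that is the requested content,
    assuming (illustratively) a fixed byte cost per non-payload interaction."""
    total = payload_bytes + overhead_per_interaction * overhead_interactions
    return payload_bytes / total

# First Access: one payload-carrying interaction plus five overhead
# interactions of equal size gives roughly 1/6 (about 16.67%).
print(round(bandwidth_efficiency(1000, 1000, 5), 4))  # 0.1667

# With a much larger requested resource, the efficiency approaches 1.
print(round(bandwidth_efficiency(10_000_000, 1000, 5), 4))
```

The same function applied with four overhead interactions models the Access via Proxy phase, giving the roughly 1/5 (20%) floor discussed in section 4-3.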

4-3) Performance of the Web Proxy Flow

The proxy flow consists of two separate phases: Load Proxy and Access via Proxy. We discuss the performance of each phase in this section and then take an overall look at the performance of the proxy flow.

Like the web flow, there are six interactions in the Load Proxy, as shown in Figure (4). Two of them are between the proxy and the client, and two of them are between the proxy and the server. That is, the proxy is engaged in 66.67% of the communication.

On the other side, there are five interactions in the Access via Proxy phase, as illustrated in Figure (5), and the proxy is engaged in four of them. That is, the proxy is engaged in 80% of the communication here. However, the client and server are idle most of the time, and the application is engaged in 40% of the communication.

If the communication pattern of the system is accessing a protected application just once, then at large scales of this pattern, the proxy will handle about 66.67% of the network traffic and become the bottleneck of the whole system. In this situation, the bandwidth efficiency in the best case will be about 1/6, or 16.67%. The reason for this low efficiency is that the purpose of this phase is to generate a session key for the client and the proxy, which costs one effective interaction; however, six interactions are performed to achieve that goal. As said, 16.67% is the best efficiency here and can be achieved only when the content length of the session key is equal to the content lengths of all the other interactions. So it will be lower than 16.67% in real-world scenarios.

However, if the traffic pattern of the system is accessing the same application or a few applications, most requests go through the Access via Proxy phase, and this makes things worse for the proxy – the proxy will be engaged in 80% of the traffic and become an even more critical bottleneck in the system. On the other hand, the bandwidth efficiency is about 20% even in the worst case, and it improves as the requested data becomes large in comparison to the content length of the other interactions.

So, two main results can be deduced from the discussion in this section: the protocol flows (web and proxy) show opposite performance results under the above-mentioned traffic patterns, and the low performance of the proxy flow is mainly the cost of mitigating impersonation attacks (an attacker cannot impersonate the proxy) and reducing the traffic load of the server.

5) Conclusion

In this document we first introduced the CAS 3.0 protocol. CAS is a single-sign-on / single-sign-off (SSO) protocol for the web. It has two flows (the web flow and the proxy flow), as a CAS service can be the back-end service or a proxy. We presented each of the flows and showed how the protocol actually works.

Then we discussed the security matters. The security of the CAS protocol relies completely on secure communications (SSL, TLS). If the communication goes over plain HTTP, an attacker can eavesdrop on the session keys and perform impersonation attacks. He/she can also perform information disclosure and even MITM attacks and deliver wrong resources to the client.

We also analyzed the performance of the protocol in its two flows, each under two traffic patterns. We saw that the protocol flows (web and proxy) show opposite performance results under these traffic patterns, and that the low performance of the proxy flow is mainly the cost of mitigating impersonation attacks and reducing the traffic load of the server.

6) References

[1] CAS - CAS Protocol. [ONLINE] Available at: http://jasig.github.io/cas/4.0.x/protocol/CAS-Protocol.html. [Accessed 25 June 2015].

[2] CAS - CAS Protocol Specification. [ONLINE] Available at: http://jasig.github.io/cas/development/protocol/CAS-Protocol-Specification.html. [Accessed 26 June 2015].

[3] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", RFC 2119, Harvard University, March 1997.

[4] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., Berners-Lee, T., "Hypertext Transfer Protocol - HTTP/1.1", RFC 2616, June 1999.

[5] CAS Threat Modeling - Central Authentication Service - Apereo Wiki. [ONLINE] Available at: https://wiki.jasig.org/display/CAS/CAS+Threat+Modeling. [Accessed 26 June 2015].

[6] Proposals to mitigate security risks - Central Authentication Service - Apereo Wiki. [ONLINE] Available at: https://wiki.jasig.org/display/CAS/Proposals+to+mitigate+security+risks. [Accessed 26 June 2015].

[7] Apereo Foundation | Apereo. [ONLINE] Available at: https://www.apereo.org/. [Accessed 26 June 2015].

[8] CAS - Security Guide. [ONLINE] Available at: http://jasig.github.io/cas/4.0.x/planning/Security-Guide.html#secure-transport-https. [Accessed 26 June 2015].