Cryptographic Tamper Evidence

SECURE HASH ALGORITHM

The SHA (Secure Hash Algorithm) hash functions refer to five FIPS-approved algorithms for computing a condensed digital representation (known as a message digest) that is, to a high degree of probability, unique for a given input data sequence (the message). These algorithms are called “secure” because (in the words of the standard), “for a given algorithm, it is computationally infeasible 1) to find a message that corresponds to a given message digest, or 2) to find two different messages that produce the same message digest. Any change to a message will, with a very high probability, result in a different message digest.”
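This "any change yields a different digest" property is easy to observe in practice. The small Python snippet below (using the standard library's hashlib purely as an illustration) hashes two messages that differ in a single character:

import hashlib

# Two messages differing in one character produce unrelated digests.
print(hashlib.sha1(b'tamper-evident log, entry 1').hexdigest())
print(hashlib.sha1(b'tamper-evident log, entry 2').hexdigest())

The two 160-bit digests share no discernible structure, which is what makes such functions useful for tamper evidence.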

The five algorithms, denoted SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512, are cryptographic hash functions designed by the National Security Agency (NSA) and published by NIST as a U.S. government standard. The latter four variants are sometimes collectively referred to as SHA-2.

SHA-1 is employed in several widely used security applications and protocols, including TLS and SSL, PGP, SSH, S/MIME, and IPsec. It was considered to be the successor to MD5, an earlier, widely used hash function.

The security of SHA-1 has been somewhat compromised by cryptography researchers. Although no attacks have yet been reported on the SHA-2 variants, they are algorithmically similar to SHA-1, and so efforts are underway to develop improved alternative hashing algorithms. Due to recent attacks on SHA-1, "NIST is initiating an effort to develop one or more additional hash algorithms through a public competition, similar to the development process for the Advanced Encryption Standard (AES)."

SHA-1

[Figure: One iteration within the SHA-1 compression function. A, B, C, D and E are 32-bit words of the state; F is a nonlinear function that varies by round; <<<n denotes a left bit rotation by n places, where n varies for each operation; ⊞ denotes addition modulo 2^32; Kt is a round constant.]

The original specification of the algorithm was published in 1993 as the Secure Hash Standard, FIPS PUB 180, by US government standards agency NIST (National Institute of Standards and Technology). This version is now often referred to as SHA-0. It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly referred to as SHA-1. SHA-1 differs from SHA-0 only by a single bitwise rotation in the message schedule of its compression function; this was done, according to the NSA, to correct a flaw in the original algorithm which reduced its cryptographic security. However, the NSA did not provide any further explanation or identify what flaw was corrected.

Weaknesses have subsequently been reported in both SHA-0 and SHA-1. SHA-1 appears to provide greater resistance to attacks, supporting the NSA's assertion that the change increased the security.

SHA-1 (as well as SHA-0) produces a 160-bit digest from a message with a maximum length of 2^64 − 1 bits and is based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD4 and MD5 message digest algorithms.

Cryptanalysis of SHA-1

In the light of the results on SHA-0, some experts suggested that plans for the use of SHA-1 in new cryptosystems should be reconsidered. After the CRYPTO 2004 results were published, NIST announced that they planned to phase out the use of SHA-1 by 2010 in favour of the SHA-2 variants.

In early 2005, Rijmen and Oswald published an attack on a reduced version of SHA-1 (53 out of 80 rounds) which finds collisions with a complexity of fewer than 2^80 operations.

In February 2005, an attack by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu was announced. The attacks can find collisions in the full version of SHA-1, requiring fewer than 2^69 operations. (A brute-force search would require 2^80 operations.)

The authors write: "In particular, our analysis is built upon the original differential attack on SHA0 [sic], the near collision attack on SHA0, the multiblock collision techniques, as well as the message modification techniques used in the collision search attack on MD5. Breaking SHA1 would not be possible without these powerful analytical techniques."

The authors have presented a collision for 58-round SHA-1, found with 2^33 hash operations. The paper with the full attack description was published in August 2005 at the CRYPTO conference.

In an interview, Yin states that, "Roughly, we exploit the following two weaknesses: One is that the file preprocessing step is not complicated enough; another is that certain math operations in the first 20 rounds have unexpected security problems."

On 17 August 2005, an improvement on the SHA-1 attack was announced on behalf of Xiaoyun Wang, Andrew Yao and Frances Yao at the CRYPTO 2005 rump session, lowering the complexity required for finding a collision in SHA-1 to 2^63.

In academic cryptography, any attack that has less computational complexity than a brute-force search is considered a break. This does not, however, necessarily mean that the attack can be practically exploited. It has been speculated that finding a collision for SHA-1 is within reach of a massive distributed Internet search.

In terms of practical security, the major concern about this new attack is that it might pave the way to more efficient ones. Whether this is the case has yet to be seen, but a migration to stronger hashes is believed to be prudent. A collision attack does not present the same kinds of risks that a preimage attack would. Many of the applications that use cryptographic hashes, such as password storage or document signing, are only minimally affected by a collision attack. In the case of document signing, for example, an attacker could not simply fake a signature from an existing document; the attacker would have to fool the private key holder into signing a preselected document. Reversing password "encryption" (e.g., to obtain a password to try against a user's account elsewhere) is not made possible by the attacks. Constructing a password that works for a given account requires a preimage attack, as well as access to the hash of the original password (typically in the shadow file), which may or may not be trivial.

At the Rump Session of CRYPTO 2006, Christian Rechberger and Christophe De Cannière claimed to have discovered a collision attack on SHA-1 that would allow an attacker to select at least parts of the message.

SHA-1 algorithm

Pseudocode for the SHA-1 algorithm follows:

Note: All variables are unsigned 32-bit quantities and wrap modulo 2^32 when calculating.

Initialize variables:
    h0 := 0x67452301
    h1 := 0xEFCDAB89
    h2 := 0x98BADCFE
    h3 := 0x10325476
    h4 := 0xC3D2E1F0

Pre-processing:
    append the bit '1' to the message
    append k bits '0', where k is the minimum number >= 0 such that the resulting
        message length (in bits) is congruent to 448 (mod 512)
    append the length of the message (before pre-processing), in bits, as a
        64-bit big-endian integer

Process the message in successive 512-bit chunks:
    break message into 512-bit chunks
    for each chunk
        break chunk into sixteen 32-bit big-endian words w[i], 0 ≤ i ≤ 15

        Extend the sixteen 32-bit words into eighty 32-bit words:
        for i from 16 to 79
            w[i] := (w[i-3] xor w[i-8] xor w[i-14] xor w[i-16]) leftrotate 1

        Initialize hash value for this chunk:
        a := h0
        b := h1
        c := h2
        d := h3
        e := h4

        Main loop:
        for i from 0 to 79
            if 0 ≤ i ≤ 19 then
                f := (b and c) or ((not b) and d)
                k := 0x5A827999
            else if 20 ≤ i ≤ 39
                f := b xor c xor d
                k := 0x6ED9EBA1
            else if 40 ≤ i ≤ 59
                f := (b and c) or (b and d) or (c and d)
                k := 0x8F1BBCDC
            else if 60 ≤ i ≤ 79
                f := b xor c xor d
                k := 0xCA62C1D6

            temp := (a leftrotate 5) + f + e + k + w[i]
            e := d
            d := c
            c := b leftrotate 30
            b := a
            a := temp

        Add this chunk's hash to the result so far:
        h0 := h0 + a
        h1 := h1 + b
        h2 := h2 + c
        h3 := h3 + d
        h4 := h4 + e

Produce the final hash value (big-endian):
    digest = hash = h0 append h1 append h2 append h3 append h4

Instead of the formulation from the original FIPS PUB 180-1 shown above, the following equivalent expressions may be used to compute f in the main loop:

(0 ≤ i ≤ 19):  f := d xor (b and (c xor d))          (alternative)
(40 ≤ i ≤ 59): f := (b and c) or (d and (b or c))    (alternative 1)
(40 ≤ i ≤ 59): f := (b and c) or (d and (b xor c))   (alternative 2)
(40 ≤ i ≤ 59): f := (b and c) + (d and (b xor c))    (alternative 3)
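The pseudocode above maps almost line for line onto Python. The sketch below is an illustrative, unoptimized implementation (not a vetted library); the final check compares its output against the standard library's hashlib.sha1.

import struct

def _rol(x, n):
    """Left-rotate a 32-bit word x by n places."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> str:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]

    # Pre-processing: append the '1' bit, pad with '0' bits to 448 mod 512,
    # then append the original bit length as a 64-bit big-endian integer.
    bit_len = len(message) * 8
    message += b'\x80'
    message += b'\x00' * ((56 - len(message) % 64) % 64)
    message += struct.pack('>Q', bit_len)

    # Process the message in successive 512-bit (64-byte) chunks.
    for off in range(0, len(message), 64):
        w = list(struct.unpack('>16I', message[off:off + 64]))
        for i in range(16, 80):          # message schedule expansion
            w.append(_rol(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1))

        a, b, c, d, e = h
        for i in range(80):              # main compression loop
            if i < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif i < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif i < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = ((_rol(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF,
                             a, _rol(b, 30), c, d)

        # Add this chunk's hash to the result so far, modulo 2^32.
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]

    return ''.join(f'{x:08x}' for x in h)

if __name__ == '__main__':
    import hashlib
    msg = b'The quick brown fox jumps over the lazy dog'
    assert sha1(msg) == hashlib.sha1(msg).hexdigest()
    print(sha1(msg))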


RIVEST SHAMIR AND ADLEMAN ALGORITHM

In cryptology, RSA is an algorithm for public-key encryption. It was the first algorithm known to be suitable for signing as well as encryption, and one of the first great advances in public-key cryptography. RSA is still widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.

History

The algorithm was publicly described in 1977 by Ron Rivest, Adi Shamir and Leonard Adleman at MIT; the letters RSA are the initials of their surnames. Apocryphally, it was invented at a Passover Seder in Schenectady, N.Y.

Clifford Cocks, a British mathematician working for the UK intelligence agency GCHQ, described an equivalent system in an internal document in 1973, but given the relatively expensive computers needed to implement it at the time, it was mostly considered a curiosity and, as far as is publicly known, was never deployed. His discovery, however, was not revealed until 1997 due to its top-secret classification, and Rivest, Shamir, and Adleman devised RSA independently of Cocks' work.

MIT was granted US patent 4405829 for a "Cryptographic communications system and method" that used the algorithm in 1983. The patent expired on 21 September 2000. Since a paper describing the algorithm had been published in August 1977, prior to the December 1977 filing date of the patent application, regulations in much of the rest of the world precluded patents elsewhere, and only the US patent was granted. Had Cocks' work been publicly known, a patent in the US would not have been possible either.

Operation

RSA involves a public key and a private key. The public key can be known to everyone and is used for encrypting messages. Messages encrypted with the public key can only be decrypted using the private key. The keys for the RSA algorithm are generated the following way:

1. Choose two large random prime numbers p and q.
2. Compute n = pq.
    o n is used as the modulus for both the public and private keys.
3. Compute the totient: φ(n) = (p − 1)(q − 1).
4. Choose an integer e such that 1 < e < φ(n), and e is coprime to φ(n), i.e., e and φ(n) share no factors other than 1: gcd(e, φ(n)) = 1.
    o e is released as the public key exponent.
5. Compute d to satisfy the congruence relation d·e ≡ 1 (mod φ(n)), i.e., d·e = 1 + k·φ(n) for some integer k.
    o d is kept as the private key exponent.

Notes on the above steps:

Step 1: Numbers can be probabilistically tested for primality.
Step 3: Changed in PKCS#1 v2.0 to λ(n) = lcm(p − 1, q − 1) instead of φ(n) = (p − 1)(q − 1).
Step 4: A popular choice for the public exponent is e = 2^16 + 1 = 65537. Some applications choose smaller values such as e = 3, 5, or 35 instead. This is done to make encryption and signature verification faster on small devices like smart cards, but small public exponents may lead to greater security risks.
Steps 4 and 5 can be performed with the extended Euclidean algorithm; see modular arithmetic.
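As a concrete toy illustration of steps 1 through 5, the following Python sketch uses deliberately tiny primes so the numbers stay readable; real keys use primes of at least 512 bits found with probabilistic primality tests:

from math import gcd

# Toy parameters only: in practice p and q come from Step 1's
# probabilistic primality testing at 1024+ bits.
p, q = 61, 53

n = p * q                      # Step 2: modulus, here n = 3233
phi = (p - 1) * (q - 1)        # Step 3: totient, here phi(n) = 3120

e = 17                         # Step 4: public exponent
assert 1 < e < phi and gcd(e, phi) == 1

d = pow(e, -1, phi)            # Step 5: modular inverse via extended Euclid
                               # (3-argument pow needs Python >= 3.8)
assert (d * e) % phi == 1      # d*e ≡ 1 (mod phi(n)); here d = 2753

public_key = (n, e)
private_key = (n, d)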

The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the modulus n and the private (or decryption) exponent d, which must be kept secret.

For efficiency a different form of the private key can be stored:

    o p and q: the primes from the key generation,
    o d mod (p − 1) and d mod (q − 1): often called dmp1 and dmq1,
    o q^(−1) mod p: often called iqmp.

All parts of the private key must be kept secret in this form. p and q are sensitive since they are the factors of n, and allow computation of d given e. If p and q are not stored in this form of the private key then they are securely deleted along with other intermediate values from key generation.

Although this form allows faster decryption and signing by using the Chinese Remainder Theorem, it is considerably less secure since it enables side-channel attacks. This is a particular problem if implemented on smart cards, which benefit most from the improved efficiency. (Start with y = x^e mod n and let the card decrypt that. So it computes y^d mod p and y^d mod q, whose results give some value z. Now, induce an error in one of the computations. Then gcd(z − x, n) will reveal p or q.)
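The parenthetical fault attack can be simulated in a few lines of Python. This is a toy illustration using the small key from the sketch above; the "fault" is injected by hand rather than by glitching real hardware:

from math import gcd

p, q, n, e, d = 61, 53, 3233, 17, 2753    # toy key from the sketch above

x = 1234                                  # plaintext the attacker chooses
y = pow(x, e, n)                          # y = x^e mod n, sent to the card

# Correct CRT decryption combines y^d mod p and y^d mod q.
# Simulate a fault: the mod-p half is correct, the mod-q half is corrupted.
zp = pow(y, d % (p - 1), p)               # correct half
zq = (pow(y, d % (q - 1), q) + 1) % q     # faulted half

# Recombine with the CRT (Garner's formula) to get the faulty output z.
h = (pow(q, -1, p) * (zp - zq)) % p
z = zq + q * h

# z is correct mod p but wrong mod q, so z - x is divisible only by p.
print(gcd(abs(z - x), n))                 # prints 61, revealing p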

Encrypting messages

Alice transmits her public key (n, e) to Bob and keeps the private key secret. Bob then wishes to send message M to Alice.

He first turns M into a number m < n by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c corresponding to:

    c = m^e mod n

This can be done quickly using the method of exponentiation by squaring. Bob then transmits c to Alice.

Decrypting messages

Alice can recover m from c by using her private key d in the following procedure:

    m = c^d mod n

Given m, she can recover the original message M.

The decryption procedure works because first

    c^d ≡ (m^e)^d ≡ m^(ed) (mod n).

Now, since ed ≡ 1 (mod p − 1) and ed ≡ 1 (mod q − 1), Fermat's little theorem yields

    m^(ed) ≡ m (mod p) and m^(ed) ≡ m (mod q).

Since p and q are distinct prime numbers, applying the Chinese remainder theorem to these two congruences yields

    m^(ed) ≡ m (mod n).

Thus,

    c^d ≡ m (mod n).
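With the toy key from the key generation sketch, encryption and decryption are each a single call to Python's three-argument pow (real code would first pad M, as the next section explains):

p, q = 61, 53
n, e, d = 3233, 17, 2753    # toy key pair from the key generation sketch

m = 65                      # padded message representative, must be < n
c = pow(m, e, n)            # encryption:  c = m^e mod n  ->  2790
assert pow(c, d, n) == m    # decryption:  m = c^d mod n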

Padding schemes


When used in practice, RSA must be combined with some form of padding scheme, so that no values of M result in insecure ciphertexts. RSA used without padding may suffer from a number of potential problems:

- The values m = 0 or m = 1 always produce ciphertexts equal to 0 or 1 respectively, due to the properties of exponentiation.
- When encrypting with low encryption exponents (e.g., e = 3) and small values of m, the (non-modular) result of m^e may be strictly less than the modulus n. In this case, ciphertexts may be easily decrypted by taking the eth root of the ciphertext with no regard to the modulus.
- Because RSA encryption is a deterministic encryption algorithm, i.e., has no random component, an attacker can successfully launch a chosen-plaintext attack against the cryptosystem, building a dictionary by encrypting likely plaintexts under the public key and storing the resulting ciphertexts. When matching ciphertexts are observed on a communication channel, the attacker can use this dictionary in order to learn the content of the message.

In practice, the first two problems might arise when sending short ASCII messages, where m is the concatenation of one or more ASCII-encoded character(s). A message consisting of a single ASCII NUL character (whose numeric value is 0) would be encoded as m = 0, which produces a ciphertext of 0 regardless of which e and N are used. Likewise, a single ASCII SOH (whose numeric value is 1) would always produce a ciphertext of 1. For systems which conventionally use small values of e, such as 3, all single-character ASCII messages encoded using this scheme would be insecure, since the largest m would have a value of 255, and 255^3 is less than any reasonable modulus. Such plaintexts could be recovered by simply taking the cube root of the ciphertext.
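This cube-root recovery is easy to demonstrate. The sketch below assumes an unpadded one-byte message, e = 3, and a stand-in modulus (not a real RSA modulus) that is far larger than 255^3:

# Unpadded RSA with e = 3: a single ASCII byte is recovered from the
# ciphertext alone, because m^3 < n means no modular reduction happened.
e = 3
n = 2**2048 - 1          # stand-in for any reasonable 2048-bit modulus
m = ord('A')             # largest possible single-byte m is 255; 255^3 << n
c = pow(m, e, n)         # "encryption" of the unpadded one-byte message

# The integer cube root of c recovers m exactly, with no private key.
root = round(c ** (1 / 3))
assert root**3 == c and root == m
print(chr(root))         # prints 'A'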

To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. The latter property can increase the cost of a dictionary attack beyond the capabilities of a reasonable attacker.

Standards such as PKCS have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks which may be facilitated by a predictable message structure. Early versions of the PKCS standard used ad hoc constructions, which were later found vulnerable to a practical adaptive chosen-ciphertext attack. Modern constructions use secure techniques such as Optimal Asymmetric Encryption Padding (OAEP) to protect messages while preventing these attacks. The PKCS standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g., the Probabilistic Signature Scheme for RSA (RSA-PSS).

Signing messages

Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was actually from Alice, since anyone can use Bob's public key to send him encrypted messages. So, in order to verify the origin of a message, RSA can also be used to sign a message.

Suppose Alice wishes to send a signed message to Bob. She produces a hash value of the message, raises it to the power of d mod n (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he raises the signature to the power of e mod n (as he does when encrypting a message), and compares the resulting hash value with the message's actual hash value. If the two agree, he knows that the author of the message was in possession of Alice's secret key, and that the message has not been tampered with since.

Note that secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption, and that the same key should never be used for both encryption and signing purposes.
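The hash-then-exponentiate flow just described can be sketched by combining the two algorithms of this section. This toy example uses the small key from earlier and bare hashing with no RSA-PSS padding, so it is illustrative only:

import hashlib

n, e, d = 3233, 17, 2753            # Alice's toy key pair from earlier

def sign(message: bytes) -> int:
    # Hash the message, then raise the digest to d mod n.
    # Reducing the digest mod n is an artifact of the tiny toy modulus.
    h = int.from_bytes(hashlib.sha1(message).digest(), 'big') % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha1(message).digest(), 'big') % n
    return pow(signature, e, n) == h

msg = b'pay Bob 10 dollars'
sig = sign(msg)
assert verify(msg, sig)                        # authentic message passes
print(verify(b'pay Bob 1000 dollars', sig))    # tampered message: False
                                               # (barring a ~1/n accidental
                                               # collision in this toy setting)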

Security

The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.

The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c = m^e mod n, where (e, n) is an RSA public key and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (e, n), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes (p − 1)(q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists. See integer factorization for a discussion of this problem.

As of 2005, the largest number factored by a general-purpose factoring algorithm was 663 bits long, using a state-of-the-art distributed implementation. RSA keys are typically 1024–2048 bits long. Some experts believe that 1024-bit keys may become breakable in the near term (though this is disputed); few see any way that 4096-bit keys could be broken in the foreseeable future. Therefore, it is generally presumed that RSA is secure if n is sufficiently large. If n is 256 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. If n is 512 bits or shorter, it can be factored by several hundred computers, as of 1999. A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys. It is currently recommended that n be at least 2048 bits long.

In 1994, Peter Shor published Shor's algorithm, showing that a quantum computer could in principle perform the factorization in polynomial time, rendering RSA and related algorithms obsolete. However, quantum computation is not expected to be developed to such a level for many years.

Practical considerations

Key generation

Finding the large primes p and q is usually done by testing random numbers of the right size with probabilistic primality tests which quickly eliminate virtually all non-primes.

p and q should not be 'too close', lest the Fermat factorization for n be successful. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and these values of p or q should therefore be discarded as well.

One should not employ a prime search method which gives any information whatsoever about the primes to the attacker. In particular, a good random number generator for the start value needs to be employed. Note that the requirement here is both 'random' and 'unpredictable'. These are not the same criteria; a number may have been chosen by a random process (i.e., no pattern in the results), but if it is predictable in any manner (or even partially predictable), the method used will result in loss of security. For example, the random number table published by the Rand Corp in the 1950s might very well be truly random, but it has been published and thus can serve an attacker as well. If the attacker can guess half of the digits of p or q, he can quickly compute the other half (shown by Coppersmith in 1997).

It is important that the secret key d be large enough. Michael J. Wiener showed in 1990 that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e. There is no known attack against small public exponents such as e = 3, provided that proper padding is used. However, when no padding is used, or when the padding is improperly implemented, small public exponents have a greater risk of leading to an attack, such as the unpadded plaintext vulnerability listed above. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryption (or signature verification). The NIST draft FIPS PUB 186-3 (March 2006) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction.

Speed

RSA is much slower than DES and other symmetric cryptosystems. In practice, Bob typically encrypts a secret message with a symmetric algorithm, encrypts the (comparatively short) symmetric key with RSA, and transmits both the RSA-encrypted symmetric key and the symmetrically encrypted message to Alice.

This procedure raises additional security issues. For instance, it is of utmost importance to use a strong random number generator for the symmetric key, because otherwise Eve (an eavesdropper wanting to see what was sent) could bypass RSA by guessing the symmetric key.

Key distribution

As with all ciphers, how RSA public keys are distributed is important to security. Key distribution must be secured against a man-in-the-middle attack. Suppose Eve has some way to give Bob arbitrary keys and make him believe they belong to Alice. Suppose further that Eve can intercept transmissions between Alice and Bob. Eve sends Bob her own public key, which Bob believes to be Alice's. Eve can then intercept any ciphertext sent by Bob, decrypt it with her own secret key, keep a copy of the message, encrypt the message with Alice's public key, and send the new ciphertext to Alice. In principle, neither Alice nor Bob would be able to detect Eve's presence. Defenses against such attacks are often based on digital certificates or other components of a public key infrastructure.

Timing attacks

Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, she can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Socket Layer (SSL)-enabled webserver). This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.

One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d mod n, Alice first chooses a secret random value r and computes (r^e · c)^d mod n. The result of this computation is r·m mod n, and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
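The blinding identity is easy to verify with the toy key. In this sketch, Python's random module stands in for what must, in a real implementation, be a cryptographically secure generator:

import random
from math import gcd

n, e, d = 3233, 17, 2753        # toy key pair from earlier

c = pow(1234, e, n)             # some ciphertext to decrypt

# Pick a random blinding value r that is invertible mod n.
r = random.randrange(2, n)
while gcd(r, n) != 1:
    r = random.randrange(2, n)

blinded = (pow(r, e, n) * c) % n    # r^e * c
z = pow(blinded, d, n)              # (r^e * c)^d = r * m  (mod n)
m = (z * pow(r, -1, n)) % n         # strip the blinding factor

assert m == pow(c, d, n) == 1234    # matches unblinded decryption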

Adaptive chosen ciphertext attacks

In 1998, Daniel Bleichenbacher described the first practical adaptive chosen ciphertext attack against RSA-encrypted messages using the PKCS #1 v1 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Socket Layer protocol, and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks.

Branch Prediction Analysis (BPA) attacks

Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Usually these processors also implement simultaneous multithreading (SMT). Branch Prediction Analysis attacks use a spy process to discover (statistically) the private key when processed with these processors.

Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis", the authors of SBPA (Onur Aciicmez, Cetin Kaya Koc and Jean-Pierre Seifert) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.

This kind of attack requires the spying process to run on the same machine as the cryptographic process processing the private key, but the spying process does not need any special privilege on the attacked system.

Requirements Analysis Document Template

Purpose

The results of the requirements elicitation and the analysis activities are documented in the Requirements Analysis Document (RAD). This document completely describes the system in terms of functional and nonfunctional requirements and serves as a contractual basis between the client and the developers.


Audience

The audience for the RAD includes the client, the users, the project management, the system analysts (i.e., the developers who participate in the requirements), and the system designers (i.e., the developers who participate in the system design). The first part of the document, including use cases and nonfunctional requirements, is written during requirements elicitation. The formalization of the specification in terms of object models is written during analysis. We use an example template for a RAD introduced in the book.

Template

1. Introduction
    1.1 Purpose of the system
    1.2 Scope of the system
    1.3 Objectives and success criteria of the project
    1.4 Definitions, acronyms, and abbreviations
    1.5 References
    1.6 Overview

The first section of the RAD is an Introduction. Its purpose is to provide a brief overview of the function of the system and the reasons for its development, its scope, and references to the development context (e.g., reference to the problem statement written by the client, references to existing systems, feasibility studies). The introduction also includes the objectives and success criteria of the project.

2. Current system

The second section, Current system, describes the current state of affairs. If the new system will replace an existing system, this section describes the functionality and the problems of the current system. Otherwise, this section describes how the tasks supported by the new system are accomplished now.

3. Proposed system

The third section documents the requirements elicitation and the analysis model of the new system.

    3.1 Overview

The overview presents a functional overview of the system.

    3.2 Functional requirements

Functional requirements describes the high-level functionality of the system.

    3.3 Nonfunctional requirements
        3.3.1 Usability
        3.3.2 Reliability
        3.3.3 Performance
        3.3.4 Supportability
        3.3.5 Implementation
        3.3.6 Interface
        3.3.7 Packaging
        3.3.8 Legal

Nonfunctional requirements describes user-level requirements that are not directly related to functionality. This includes usability, reliability, performance, supportability, implementation, interface, operational, packaging, and legal requirements.

    3.4 System models
        3.4.1 Scenarios
        3.4.2 Use case model
        3.4.3 Object model
        3.4.4 Dynamic model
        3.4.5 User interface (navigational paths and screen mock-ups)

System models describes the scenarios, use cases, object model, and dynamic models for the system. This section contains the complete functional specification, including mock-ups illustrating the user interface of the system and navigational paths representing the sequence of screens. The subsections Object model and Dynamic model are written during the Analysis activity.

4. Glossary

A glossary of important terms, to ensure consistency in the specification and to ensure that we use the client’s terms.

System Design Document Template


Purpose

System design is documented in the System Design Document (SDD). It describes design goals set by the project, subsystem decomposition (with UML class diagrams), hardware/software mapping (with UML deployment diagrams), data management, access control, control flow mechanisms, and boundary conditions. The SDD is used to define interfaces between teams of developers and to serve as a reference when architecture-level decisions need to be revisited.

Audience

The audience for the SDD includes the project management, the system architects (i.e., the developers who participate in the system design), and the developers who design and implement each subsystem.

Template

1. Introduction
    1.1 Purpose of the system
    1.2 Design goals
    1.3 Definitions, acronyms and abbreviations
    1.4 References
    1.5 Overview

The purpose of the Introduction is to provide a brief overview of the software architecture and the design goals. It also provides references to other documents and traceability information (e.g., related requirements analysis document, references to existing systems, constraints impacting the software architecture).

2. Current software architecture

The second section describes the architecture of the system being replaced. If there is no previous system, this section can be replaced by a survey of current architectures for similar systems. The purpose of this section is to make explicit the background information that system architects used, their assumptions, and common issues the new system will address.

3. Proposed software architecture

The third section documents the system design model of the new system.

    3.1 Overview

The overview presents a bird's-eye view of the software architecture and briefly describes the assignment of functionality to each subsystem.

    3.2 Subsystem decomposition

Subsystem decomposition describes the decomposition into subsystems and the responsibilities of each. This is the main product of system design.

    3.3 Hardware/software mapping

Hardware/software mapping describes how subsystems are assigned to hardware and off-the-shelf components. It also lists the issues introduced by multiple nodes and software reuse.

    3.4 Persistent data management

Persistent data management describes the persistent data stored by the system and the data management infrastructure required for it. This section typically includes the description of data schemes, the selection of a database, and the description of the encapsulation of the database.

    3.5 Access control and security

Access control and security describes the user model of the system in terms of an access matrix. This section also describes security issues, such as the selection of an authentication mechanism, the use of encryption, and the management of keys.

    3.6 Global software control

Global software control describes how the global software control is implemented. In particular, this section should describe how requests are initiated and how subsystems synchronize. This section should list and address synchronization and concurrency issues.

    3.7 Boundary conditions

Boundary conditions describes the start-up, shutdown, and error behavior of the system. (If new use cases are discovered for system administration, these should be included in the requirements analysis document, not in this section.)

4. Subsystem services
Glossary

The fourth section, Subsystem services, describes the services provided by each subsystem in terms of operations. Although this section is usually empty or incomplete in the first versions of the SDD, it serves as a reference for teams for the boundaries between their subsystems. The interface of each subsystem is derived from this section and detailed in the Object Design Document.

OBJECT ORIENTED ANALYSIS AND DESIGN (OOAD)

Object-oriented analysis and design (OOAD) is often part of the development of large-scale systems and programs, often using the Unified Modeling Language (UML). OOAD applies object-modeling techniques to analyze the requirements for a context (for example, a system, a set of system modules, an organization, or a business unit) and to design a solution. Most modern object-oriented analysis and design methodologies are use case driven across requirements, design, implementation, testing, and deployment.

Use cases were invented with object-oriented programming, but they're also very well suited for systems that will be implemented in the procedural paradigm.

The Unified Modeling Language (UML) has become the standard modeling language used in object-oriented analysis and design to graphically illustrate system concepts.

Part of the reason for OOAD is its use in developing programs that will have an extended lifetime.

Object Oriented Systems

An object-oriented system is composed of objects. The behavior of the system is achieved through collaboration between these objects, and the state of the system is the combined state of all the objects in it. Collaboration between objects involves them sending messages to each other. The exact semantics of message sending between objects varies depending on what kind of system is being modeled. In some systems, "sending a message" is the same as "invoking a method". In other systems, "sending a message" might involve sending data via a socket.
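A minimal Python illustration of the first sense of "sending a message", i.e., invoking a method on a collaborating object (the class names here are invented for the example):

class Logger:
    def record(self, event: str) -> None:    # receives the "message"
        print(f"logged: {event}")

class Account:
    def __init__(self, logger: Logger) -> None:
        self.balance = 0
        self.logger = logger                  # collaborating object

    def deposit(self, amount: int) -> None:
        self.balance += amount
        # The Account object "sends a message" to the Logger object;
        # in this kind of system that is simply a method invocation.
        self.logger.record(f"deposit of {amount}")

Account(Logger()).deposit(100)                # prints: logged: deposit of 100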

Object Oriented Analysis

Object-Oriented Analysis (OOA) aims to model the problem domain, the problem we want to solve by developing an object-oriented (OO) system.

The source of the analysis is written requirement statements and/or written use cases; UML diagrams can be used to illustrate the statements.

An analysis model will not take into account implementation constraints, such as concurrency, distribution, persistence, or inheritance, nor how the system will be built.

The model of a system can be divided into multiple domains, each of which is separately analysed and represents a separate business, technological, or conceptual area of interest.

The result of object-oriented analysis is a description of what is to be built, using concepts and relationships between concepts, often expressed as a conceptual model. Any other documentation that is needed to describe what is to be built is also included in the result of the analysis. That can include a detailed user interface mock-up document.

The implementation constraints are decided during the object-oriented design (OOD) process.

Object Oriented Design

Object-Oriented Design (OOD) is an activity where the designers are looking for logical solutions to solve a problem, using objects. Object-oriented design takes the conceptual model that is the result of object-oriented analysis, and adds implementation constraints imposed by the environment, the programming language and the chosen tools, as well as architectural assumptions chosen as the basis of design.

The concepts in the conceptual model are mapped to concrete classes, to abstract interfaces in APIs, and to roles that the objects take in various situations. The interfaces and their implementations for stable concepts can be made available as reusable services. Concepts identified as unstable in object-oriented analysis will form the basis for policy classes that make decisions and implement environment-specific or situation-specific logic or algorithms.

The result of the object-oriented design is a detailed description of how the system can be built, using objects.

Object-oriented software engineering (OOSE) is an object modeling language and methodology.

OOSE was developed by Ivar Jacobson in 1992 while at Objectory AB. It is the first object-oriented design methodology to employ use cases to drive software design. It also uses other design products similar to those used by OMT.

The tool Objectory was created by the team at Objectory AB to implement the OOSE methodology. After success in the marketplace, other tool vendors also supported OOSE.

After Rational bought Objectory AB, the OOSE notation, methodology, and tools were superseded. As one of the primary sources of the Unified Modeling Language (UML), concepts and notation from OOSE have been incorporated into UML. The methodology part of OOSE has since evolved into the Rational Unified Process (RUP). The OOSE tools have been replaced by tools supporting UML and RUP. OOSE has been largely replaced by the UML notation and by the RUP methodology.